[Health projects managed by Nursing Coordinators: an analysis of contents and degree of success].
Palese, Alvisa; Bresciani, Federica; Brutti, Caterina; Chiari, Ileana; Fontana, Luciana; Fronza, Ornella; Gasperi, Giuseppina; Gheno, Oscar; Guarese, Olga; Leali, Anna; Mansueti, Nadia; Masieri, Enrico; Messina, Laura; Munaretto, Gabriella; Paoli, Claudia; Perusi, Chiara; Randon, Giulia; Rossi, Gloria; Solazzo, Pasquale; Telli, Debora; Trenti, Giuliano; Veronese, Elisabetta; Saiani, Luisa
2012-01-01
To describe the evolution and results of health projects run in hospitals and managed by Nursing Coordinators. A convenience sample of 13 northern Italian hospitals, and a sample of 56 Nursing Coordinators who had held a permanent position for at least 1 year, was contacted. The following information was collected with a structured interview: projects run in 2009, topic, whether bottom-up or top-down, number of staff involved, and state (ended, still running, stopped). In 2009 Nursing Coordinators started 114 projects (mean 1.8±1.2 each): 94 (82.5%) were improvement projects, 17 (14.9%) accreditation, and 3 (2.6%) research. The projects involved 2,732 staff members (73.7%; average commitment 84 hours); 55 (48.2%) projects were still running, 52 (45.6%) were completed, 5 (4.4%) had no assessment, and 2 (1.8%) had been stopped. Nurses are regularly involved in several projects, but systematic monitoring of the results obtained and stabilization strategies are scarce. Given the large amount of resources invested, sound management and the choice of areas relevant to patients' problems and needs are pivotal.
Veit, Christof; Bungard, Sven; Hertle, Dagmar; Grothaus, Franz-Josef; Kötting, Joachim; Arnold, Nicolai
2013-01-01
Alongside internal quality management projects and mandatory quality assurance, there is a variety of quality-driven projects across institutions, initiated and run by various partners to continuously improve the quality of care. The multiplicity and characteristics of these projects are discussed on the basis of projects run by the BQS Institute between 2010 and 2013. In addition, useful interactions and linking with mandatory quality benchmarking and with internal quality management are discussed.
Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.
Susan J. Alexander
1991-01-01
The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...
NONMEMory: a run management tool for NONMEM.
Wilkins, Justin J
2005-06-01
NONMEM is an extremely powerful tool for nonlinear mixed-effect modelling and simulation of pharmacokinetic and pharmacodynamic data. However, it is a console-based application whose output does not lend itself to rapid interpretation or efficient management. NONMEMory has been created to be a comprehensive project manager for NONMEM, providing detailed summary, comparison and overview of the runs comprising a given project, including the display of output data, simple post-run processing, fast diagnostic plots and run output management, complementary to other available modelling aids. Analysis time ought not to be spent on trivial tasks, and NONMEMory's role is to eliminate these as far as possible by increasing the efficiency of the modelling process. NONMEMory is freely available from http://www.uct.ac.za/depts/pha/nonmemory.php.
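The run bookkeeping that NONMEMory automates can be pictured with a short script. The following is a minimal sketch of the idea only, not NONMEMory code: it assumes a hypothetical project layout in which each run directory holds a listing file named run.lst, and uses a simplified pattern for the objective function line.

```python
"""Summarize NONMEM runs in a project directory (illustrative sketch only).

Assumed hypothetical layout: project/run001/run.lst, project/run002/run.lst, ...
The regex is a simplification of real NONMEM listing output.
"""
import re
from pathlib import Path

OBJ_RE = re.compile(r"MINIMUM VALUE OF OBJECTIVE FUNCTION[^\d-]*(-?\d+\.\d+)")

def objective_value(listing: Path):
    """Extract the objective function value from a listing file, if present."""
    match = OBJ_RE.search(listing.read_text(errors="ignore"))
    return float(match.group(1)) if match else None

def summarize(project_dir: str) -> None:
    """Print one line per run, best (lowest) objective function first."""
    runs = [(p.parent.name, objective_value(p))
            for p in sorted(Path(project_dir).glob("run*/run.lst"))]
    runs.sort(key=lambda r: float("inf") if r[1] is None else r[1])
    for name, ofv in runs:
        print(f"{name}: OFV = {ofv if ofv is not None else 'no estimate found'}")

if __name__ == "__main__":
    summarize("project")
```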
The personal receiving document management and the realization of email function in OAS
NASA Astrophysics Data System (ADS)
Li, Biqing; Li, Zhao
2017-05-01
This software is an independent system suitable for small and medium enterprises. It is built on the currently popular B/S (browser/server) architecture with ASP.NET technology, developed on the Windows 7 operating system using Microsoft Visual Studio 2008 and SQL Server 2005 as the development platform and database. It provides personal office, scientific research project management, and system management functions, runs independently in the relevant environment, and addresses practical needs.
Scientific data bases on a VAX-11/780 running VMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benkovitz, C.M.; Tichler, J.L.
At Brookhaven National Laboratory, several current projects are developing and applying data management techniques to compile, analyze, and distribute scientific data sets resulting from various multi-institutional experiments and data-gathering projects. This paper presents an overview of a few of these data management projects.
Code of Federal Regulations, 2010 CFR
2010-01-01
... project will run for a period of three years. The legislation requires the Office of Personnel Management... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) FEDERAL... Demonstration Project § 890.1301 Purpose. The purpose of this subpart is to implement section 721 of the...
Applying the TOC Project Management to Operation and Maintenance Scheduling of a Research Vessel
NASA Astrophysics Data System (ADS)
Manti, M. Firdausi; Fujimoto, Hideo; Chen, Lian-Yi
Marine research vessels and their systems are major assets in marine resources development. Since the running costs of such a ship are very high, it is necessary to reduce the total cost through efficient scheduling of operation and maintenance. To shorten the project period and make it efficient, we applied the TOC project management method, an approach developed by Dr. Eli Goldratt that challenges traditional approaches to project management and is arguably the most important improvement in project management since the development of PERT and critical-path methodologies. As a case study, we present a marine geology research project on the operations side, together with repairing-dock projects for vessel maintenance.
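One concrete piece of Goldratt's critical chain method is buffer sizing: padded task estimates are cut to aggressive ones, and the removed safety is pooled into a shared project buffer, commonly sized at half the chain length (the "50% rule"). A minimal sketch of that arithmetic, with invented task durations for a repairing-dock project:

```python
"""Critical-chain buffer sizing sketch (the common 50% heuristic).
Durations are invented for illustration; real CCPM sizing may differ."""

def critical_chain_plan(safe_estimates_days):
    # Halve each padded ("safe") estimate to get an aggressive estimate.
    aggressive = [d / 2 for d in safe_estimates_days]
    chain = sum(aggressive)
    # Pool half of the aggressive chain length as a shared project buffer.
    return {
        "aggressive_chain_days": chain,
        "project_buffer_days": chain / 2,
        "planned_total_days": chain * 1.5,
        "traditional_total_days": sum(safe_estimates_days),
    }

if __name__ == "__main__":
    # e.g., four sequential dock-repair tasks, in days (invented numbers)
    for key, value in critical_chain_plan([10, 6, 8, 12]).items():
        print(f"{key}: {value:.1f}")
```

With these numbers the plan totals 27 days instead of the traditional 36, while still carrying a 9-day shared buffer against overruns.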
Atmospheric Research 2011 Technical Highlights
NASA Technical Reports Server (NTRS)
2012-01-01
The 2011 Technical Highlights describes the efforts of all members of Atmospheric Research. Their dedication to advancing Earth Science through conducting research, developing and running models, designing instruments, managing projects, running field campaigns, and numerous other activities, is highlighted in this report.
A success paradigm for project managers in the aerospace industry
NASA Astrophysics Data System (ADS)
Bauer, Barry Jon
Within the aerospace industry, project managers traditionally have been selected based on their technical competency. While this may lead to brilliant technical solutions to customer requirements, a lack of management ability can result in failed programs that overrun costs, miss critical-path schedules, fail to fully utilize the diversity of talent available within the program team, and otherwise disappoint key stakeholders. This research study identifies the key competencies that a project manager should possess in order to successfully lead and manage a project in the aerospace industry. The research presents evidence that, within the aerospace industry, management competency is perceived as more important to project management success than technical competence alone.
Laboratory for Atmospheres: 2006 Technical Highlights
NASA Technical Reports Server (NTRS)
Stewart, Richard W.
2007-01-01
The 2006 Technical Highlights describes the efforts of all members of the Laboratory for Atmospheres. Their dedication to advancing Earth Science through conducting research, developing and running models, designing instruments, managing projects, running field campaigns, and numerous other activities, is highlighted in this report.
Laboratory for Atmospheres 2009 Technical Highlights
NASA Technical Reports Server (NTRS)
Cote, Charles E.
2010-01-01
The 2009 Technical Highlights describes the efforts of all members of the Laboratory for Atmospheres. Their dedication to advancing Earth Science through conducting research, developing and running models, designing instruments, managing projects, running field campaigns, and numerous other activities, is highlighted in this report.
Laboratory for Atmospheres 2005 Technical Highlights
NASA Technical Reports Server (NTRS)
2006-01-01
The 2005 Technical Highlights describes the efforts of all members of the Laboratory for Atmospheres. Their dedication to advancing Earth Science through conducting research, developing and running models, designing instruments, managing projects, running field campaigns, and numerous other activities, is highlighted in this report.
Laboratory for Atmospheres 2007 Technical Highlights
NASA Technical Reports Server (NTRS)
Stewart, Richard W.
2008-01-01
The 2007 Technical Highlights describes the efforts of all members of the Laboratory for Atmospheres. Their dedication to advancing Earth Science through conducting research, developing and running models, designing instruments, managing projects, running field campaigns, and numerous other activities, is highlighted in this report.
Laboratory for Atmospheres 2010 Technical Highlights
NASA Technical Reports Server (NTRS)
2011-01-01
The 2010 Technical Highlights describes the efforts of all members of the Laboratory for Atmospheres. Their dedication to advancing Earth Science through conducting research, developing and running models, designing instruments, managing projects, running field campaigns, and numerous other activities, is highlighted in this report.
MouseNet database: digital management of a large-scale mutagenesis project.
Pargent, W; Heffner, S; Schäble, K F; Soewarto, D; Fuchs, H; Hrabé de Angelis, M
2000-07-01
The Munich ENU Mouse Mutagenesis Screen is a large-scale mutant production, phenotyping, and mapping project. It encompasses two animal breeding facilities and a number of screening groups located in the general area of Munich. A central database is required to manage and process the immense amount of data generated by the mutagenesis project. This database, which we named MouseNet©, runs on a Sybase platform and will ultimately store and process all data from the entire project. In addition, the system comprises a portfolio of functions needed to support the workflow management of the core facility and the screening groups. MouseNet© will make all of the data available to the participating screening groups, and later to the international scientific community. MouseNet© will consist of three major software components: an Animal Management System (AMS), a Sample Tracking System (STS), and a Result Documentation System (RDS). MouseNet© provides the following major advantages: it is accessible from different client platforms via the Internet; it is a full-featured multi-user system (including access restriction and data-locking mechanisms); it relies on a professional RDBMS (relational database management system) running on a UNIX server platform; and it supplies workflow functions and a variety of plausibility checks.
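The plausibility checks mentioned above are validation rules applied before records enter the database. Below is a minimal sketch of the idea; the field names and limits are invented, not MouseNet's actual rules.

```python
"""Sketch of record plausibility checks for an animal-colony database.
Field names and limits are invented for illustration."""
from datetime import date

def check_mouse_record(record: dict) -> list:
    """Return a list of plausibility problems; an empty list means the record passes."""
    problems = []
    if not 0.5 <= record.get("weight_g", 0.0) <= 80.0:
        problems.append("weight_g outside plausible range for a mouse")
    born, sampled = record.get("date_of_birth"), record.get("sample_date")
    if born and sampled and sampled < born:
        problems.append("sample taken before date of birth")
    if record.get("sex") not in {"M", "F"}:
        problems.append("sex must be 'M' or 'F'")
    return problems

if __name__ == "__main__":
    bad = {"weight_g": 150.0, "sex": "X",
           "date_of_birth": date(2000, 3, 1), "sample_date": date(2000, 2, 1)}
    for problem in check_mouse_record(bad):
        print("REJECT:", problem)
```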
30 CFR 203.75 - What risk do I run if I request a redetermination?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 2 2011-07-01 2011-07-01 false What risk do I run if I request a redetermination? 203.75 Section 203.75 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, REGULATION, AND... Expansion Projects § 203.75 What risk do I run if I request a redetermination? If you request a...
NASA Technical Reports Server (NTRS)
Smith, Claire
2003-01-01
As an activist for the project community at NASA's Ames Research Center, I see my role as finding out what resources project managers need to help them run successful projects. I need to go out and advocate for those resources, if necessary, and I need to be supportive, not just of the projects, but also of the project managers and their teams. It's not something I can do sitting at my desk waiting for the phone to ring. To me, that's what an activist does. It's public service. In this case my public is the project-practitioner community.
New Project System for Undergraduate Electronic Engineering
ERIC Educational Resources Information Center
Chiu, Dirk M.; Chiu, Shen Y.
2005-01-01
A new approach to projects for undergraduate electronic engineering in an Australian university has been applied successfully for over 10 years. This approach has a number of projects running over a three-year period. Feedback from past graduates and their managers has confirmed that these projects train the students well, giving them the ability…
Advance Planning Briefing for Industry: Information Dominance for the Full Spectrum Force.
1997-05-29
Electronic Order Processing is projected. The procurement will be a FFP ID/IQ award, with a Best Value evaluation and a minimum of 2 awards; ordering is planned to run for two to three years. BRIEFER: LTC Mary Fuller, Product Manager, Army Small Computer Program.
Simulation-Based Learning: The Learning-Forgetting-Relearning Process and Impact of Learning History
ERIC Educational Resources Information Center
Davidovitch, Lior; Parush, Avi; Shtub, Avy
2008-01-01
The results of empirical experiments evaluating the effectiveness and efficiency of the learning-forgetting-relearning process in a dynamic project management simulation environment are reported. Sixty-six graduate engineering students performed repetitive simulation runs with a break period of several weeks between the runs. The students used a…
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Duration. 890.1302 Section 890.1302 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) FEDERAL... Demonstration Project § 890.1302 Duration. The demonstration project will run from January 1, 2000, through...
78 FR 16502 - Proposed Agency Information Collection Activities; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-15
...: Board of Governors of the Federal Reserve System. SUMMARY: On June 15, 1984, the Office of Management... OMB Desk Officer, Shagufta Ahmed, Office of Information and Regulatory Affairs, Office of Management... Collection Report title: Annual Company-Run Stress Test Projections. Agency form number: FR Y-16. OMB control...
ISO 55000: Creating an asset management system.
Bradley, Chris; Main, Kevin
2015-02-01
In the October 2014 issue of HEJ, Keith Hamer, group vice-president, Asset Management & Engineering at Sodexo, and Kevin Main, marketing director at Asset Wisdom, argued that the new ISO 55000 standards present facilities managers with an opportunity to create 'a joined-up, whole lifecycle approach' to managing and delivering value from assets. In this article, Kevin Main and Chris Bradley, who runs various asset management projects, examine the process of creating an asset management system.
NASA Technical Reports Server (NTRS)
Muniz, R.; Hochstadt, J.; Boelke, J.; Dalton, A.
2011-01-01
The Content Documents are created and managed under the System Software group within the Launch Control System (LCS) project. The System Software product group is led by the NASA Engineering Control and Data Systems branch (NEC3) at Kennedy Space Center. The team is working on creating Operating System Images (OSI) for different platforms (i.e., AIX, Linux, Solaris, and Windows). Before an OSI can be created, the team must create a Content Document, which provides the information for a workstation or server, the list of all software to be installed on it, and the set to which the hardware belongs; this can be, for example, the LDS, the ADS, or the FR-1. The objective of this project is to create a user-interface Web application that can manage the information in the Content Documents, with all the correct validations and filters for administrator purposes. For this project we used one of the best tools for agile Web application development, Ruby on Rails. This tool helps pragmatic programmers develop Web applications with the Rails framework and the Ruby programming language. It is amazing to see how a student can learn about OOP features with the Ruby language, manage the user interface with HTML and CSS, create associations and queries with gems, manage databases and run a server with MySQL, run shell commands from the command prompt, and create Web frameworks with Rails. All of this in a real-world project, and in just fifteen weeks!
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.
2012-12-01
At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. The Database on Demand (DBoD) service empowers users to perform certain actions that have traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently the open community version of MySQL and a single-instance Oracle database server are offered. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.
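The self-service pattern behind such a platform can be sketched as a thin client against a provisioning API. Everything below is hypothetical: the endpoint, payload fields, and operations are invented to illustrate the idea, not the actual DBoD interface.

```python
"""Hypothetical client for a database-on-demand provisioning API.
The endpoint and payload are invented; this is not the DBoD interface."""
import requests

API = "https://dbod.example.org/api/v1"  # invented endpoint

def create_instance(token: str, name: str, engine: str = "mysql") -> dict:
    """Ask the platform to provision a database instance; return its metadata."""
    response = requests.post(
        f"{API}/instances",
        json={"name": name, "engine": engine},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

def backup_instance(token: str, name: str) -> None:
    """Trigger one of the DBA-style actions the platform delegates to users."""
    requests.post(
        f"{API}/instances/{name}/backups",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    ).raise_for_status()

if __name__ == "__main__":
    meta = create_instance("secret-token", "myapp-db")
    print("connect to:", meta.get("host"), meta.get("port"))
    backup_instance("secret-token", "myapp-db")
```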
Locatelli, Paolo; Montefusco, Vittorio; Sini, Elena; Restifo, Nicola; Facchini, Roberta; Torresani, Michele
2013-01-01
The volume and complexity of clinical and administrative information make Information and Communication Technologies (ICTs) essential for running and innovating healthcare. This paper describes a project aimed at designing, developing, and implementing a set of organizational models, acknowledged procedures, and ICT tools (Mobile & Wireless solutions and Automatic Identification and Data Capture technologies) to improve the support, safety, reliability, and traceability of a specific therapy management process (stem cells). The value of the project lies in designing a solution based on mobile and identification technology in tight collaboration with physicians and the other actors involved in the process, to ensure usability and effectiveness in process management.
Design of an AdvancedTCA board management controller (IPMC)
NASA Astrophysics Data System (ADS)
Mendez, J.; Bobillier, V.; Haas, S.; Joos, M.; Mico, S.; Vasey, F.
2017-03-01
The AdvancedTCA (ATCA) standard has been selected as the hardware platform for the upgrade of the back-end electronics of the CMS and ATLAS experiments at the Large Hadron Collider (LHC). In this context, the Electronic Systems for Experiments group at CERN is running a project to evaluate, specify, design, and support xTCA equipment. As part of this project, an Intelligent Platform Management Controller (IPMC) for ATCA blades, based on a commercial solution, has been designed for use on existing and future ATCA blades. This paper reports on the status of this project, presenting the hardware and software developments.
Making sausage--effective management of enterprise-wide clinical IT projects.
Smaltz, Detlev H; Callander, Rhonda; Turner, Melanie; Kennamer, Gretchen; Wurtz, Heidi; Bowen, Alan; Waldrum, Mike R
2005-01-01
Unlike most other industries in which company employees are, well, company employees, U.S. hospitals are typically run by both employees (nurses, technicians, and administrative staff) and independent entrepreneurs (physicians and nurse practitioners). Therefore, major enterprise-wide clinical IT projects can never simply be implemented by mandate. Project management processes in these environments must rely on methods that influence adoption rather than presume adoption will occur. "Build it and they will come" does not work in a hospital setting. This paper outlines a large academic medical center's experiences in managing an enterprise-wide project to replace its core clinical systems functionality. Best practices include developing a cogent optimal future-state vision, communications planning and execution, vendor validation against the optimal future-state vision, and benefits realization assessment.
Paddys Run Streambank Stabilization Project at the Fernald Preserve, Harrison, OH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hooten, Gwendolyn; Hertel, Bill; Homer, John
The Fernald Preserve is a former uranium-processing plant that underwent extensive remediation pursuant to CERCLA and is now managed by the US DOE Office of Legacy Management. While remediation of buildings and soil contamination was completed in 2006, aquifer remediation is ongoing. Paddys Run is a second-order stream that runs to the south along the western side of the Fernald Preserve. The Paddys Run watershed encompasses nearly 41 km2 (16 mi2), including most of the Fernald site. Field personnel conducting routine site inspections in March 2014 observed that Paddys Run was migrating east via bank erosion into the “Pit 3 Swale,” an area of known surface-water contamination. The soil there was certified pursuant to site regulatory agreements and meets all final remediation levels. However, weekly surface-water monitoring is conducted from two puddles within the swale area when water that exceeds the final remediation levels is present. Paddys Run had migrated east approximately 4 m (13 ft) in 2 years and was approximately 29 m (95 ft) from the sample location. This rapid migration threatened existing conditions that allowed for continued monitoring of the swale area and also threatened Paddys Run water quality. Therefore, DOE and regulators determined that the east bank of Paddys Run required stabilization. This was accomplished with a design that included the following components: relocation of approximately 145 m (475 ft) of streambed 9 m (30 ft) west, installation of a rock toe along the east bank, installation of two cross-vane in-stream grade-control structures, stabilization of a portion of the east bank using soil-encapsulated lifts, and regrading, seeding, and planting within remaining disturbed areas. In an effort to take advantage of low-flow conditions in Paddys Run, construction was initiated in September 2014. Weather delays and subsurface flow within the Paddys Run streambed resulted in an interim shutdown of the project area in December 2014. Construction activities resumed in April 2015, with completion in November 2015. To date, this stabilization project has been successful. The regraded bank and streambed have remained stable, and no compromise to the installed cross-vanes, the rock toe, or the soil-encapsulated lifts has been observed.
Kinematic and Kinetic Evaluation of High Speed Backward Running
1999-06-30
KINEMATIC AND KINETIC EVALUATION OF HIGH SPEED BACKWARD RUNNING, a dissertation by Alan Wayne Arata. Biographical notes: Project Manager, Engineering Division, Kelly Air Force Base, Texas, 1983-86; All-American, 50 yd Freestyle, 1979. (The remainder of the extracted text consists of C++ code fragments from an appendix.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doucet, Mathieu; Hobson, Tanner C.; Ferraz Leal, Ricardo Miguel
2017-08-01
The Django Remote Submission (DRS) is a Django (Django, n.d.) application to manage long-running job submission, including starting the job, saving logs, and storing results. It is an independent project available as a standalone pypi package (PyPi, n.d.). It can be easily integrated in any Django project. The source code is freely available as a GitHub repository (django-remote-submission, n.d.). To run the jobs in the background, DRS takes advantage of Celery (Celery, n.d.), a powerful asynchronous job queue used for running tasks in the background, and the Redis server (Redis, n.d.), an in-memory data structure store. Celery uses brokers to pass messages between a Django project and the Celery workers. Redis is the message broker of DRS. In addition, DRS provides real-time monitoring of the progress of jobs and associated logs. Through the Django Channels project (Channels, n.d.) and the usage of Web Sockets, it is possible to asynchronously display the job status and the live job output (standard output and standard error) on a web page.
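The Celery-plus-Redis pattern that DRS builds on looks roughly like the following. This is a generic sketch, not DRS source code: the task body is invented, but the Celery and Redis wiring shown is the libraries' standard API.

```python
"""Minimal Celery task showing background job submission with a Redis broker.
Generic sketch of the pattern DRS uses; not DRS source code."""
import subprocess
from celery import Celery

# Redis acts as the message broker between the web app and the workers.
app = Celery("jobs",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task(bind=True)
def run_long_job(self, command):
    """Run a long job, publishing each output line as task state for live monitoring."""
    process = subprocess.Popen(command, stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT, text=True)
    for line in process.stdout:
        # Incremental state updates are what a web-socket consumer would relay.
        self.update_state(state="PROGRESS", meta={"line": line.rstrip()})
    return process.wait()

# Usage from a Django view, say:
#   result = run_long_job.delay(["python", "simulate.py"])
#   ...later: result.status, result.get()
```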
QUEST for Quality for Students: A Student Quality Concept. Volume 3
ERIC Educational Resources Information Center
Galán Palomares, Fernando Miguel; Todorovski, Blazhe; Kažoka, Asnate; Saarela, Henni
2013-01-01
This is the final publication of the QUEST for Quality for Students (QUEST) project, run by the European Students' Union. The QUEST project has managed to analyse students' views on the quality of higher education to identify areas in which students can become increasingly involved in quality assurance and enhancement processes. This publication…
JPL's Approach for Helping Flight Project Managers Meet Today's Management Challenges
NASA Technical Reports Server (NTRS)
Leising, Charles J.
2004-01-01
All across NASA, project managers are facing tough new challenges. NASA has imposed increased oversight, and the number of projects at Centers such as JPL has exploded from a handful of large projects to a much greater number of smaller ones. Experienced personnel are retiring at increasing rates, and younger, less experienced managers are being rapidly promoted up the ladder. Budgets are capped, competition among NASA Centers and Federally Funded Research and Development Centers (FFRDCs) has increased significantly, and there is no longer any tolerance for cost overruns. On top of all this, implementation schedules have been reduced by 25 to 50% to reduce run-out costs, making it even more difficult to define requirements, validate heritage assumptions, and make accurate cost estimates during the early phases of the life-cycle. JPL's executive management, under the leadership of the Associate Director for Flight Projects and Mission Success, has attempted to meet these challenges by improving operations in five areas: (1) increased standardization, where it is judged to have significant benefit; (2) better balance and more effective partnering between projects and line management; (3) increased infrastructure support; (4) improved management training; and (5) more effective review and oversight.
Managing Information On Technical Requirements
NASA Technical Reports Server (NTRS)
Mauldin, Lemuel E., III; Hammond, Dana P.
1993-01-01
The Technical Requirements Analysis and Control Systems/Initial Operating Capability (TRACS/IOC) computer program provides supplemental software tools for the analysis, control, and interchange of project requirements, so that qualified project members have access to pertinent project information even if they are in different locations. It enables users to analyze and control requirements, serves as a focal point for project requirements, and integrates a system supporting efficient and consistent operations. TRACS/IOC is a HyperCard stack for Macintosh computers running HyperCard 1.2 or later and Oracle 1.2 or later.
Financial Literacy and Family Learning in Children's Centres. Financial Literacy in Context
ERIC Educational Resources Information Center
Basic Skills Agency, 2007
2007-01-01
"Pots of Gold" is a family finance research project delivered within Sure Start Children's Centre areas across Newcastle. It is funded by the DfES through the Basic Skills Agency and managed by Newcastle Family Learning Service. This project has been delivered in two phases, running from October 2005 to December 2006. Phase 1 ran from…
Update on Service Management Project
None
2018-05-11
GS and IT Service Management project status meeting. Distribution: Sigurd Lettow, Frederic Hemmer, Thomas Pettersson, David Foster, Matti Tiirakari, GS&IT Service Providers. When and where: Thursday 2nd September at 10:00-11:30 in the Filtration Plant (222-R-001). Dear All, We would like to inform you about progress made on different topics, such as the Service Catalogue, the new Service Management Tool and the Service Desk. We would also like to present the plan for when we hope to go live and what this will mean for all of you running and providing services today. We will need your active support and help in the coming months to make this happen. GS&IT Service Management Teams: Reinoud Martens, Mats Moller
The Impact of NPG 7120.5A Upon Training and Development
NASA Technical Reports Server (NTRS)
Hoffman, Edward J.
1998-01-01
NASA Procedures and Guidance 7120.5A for Program and Project Management Processes and Requirements should have minimal effect upon current Agency training and development programs, mainly because the new directive simply formalizes what we have been teaching and learning in the NASA Program/Project Management Initiative all along. A frequent complaint we get from the 8,000 or so graduates of our PPMI courses over the years, however, concerns resistance to what they may have learned in the classroom or training site. Brimming with new ideas, these young men and women often run up against an entrenched program or project manager who insists that things be done "the old way," too often perceived as "the NASA way" or even "the Goddard way." Management was all too often in the eyes of the manager; now we're all reading from the same book, 7120.5A. Still, there is no single method or "one size fits all" approach to project management in NASA. While each Center is responsible for developing policies, processes and procedures to comply with the new NPG, individual program and project managers will still need to tailor their requirements to the specific needs of the project, consistent with the size, complexity, risk and criticality of the project. Under NPG 7120.5A, the results of such tailoring are to be documented in agreements among managers, directors, Enterprise Associate Administrators and the Administrator.
Database on Demand: insight how to build your own DBaaS
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio
2015-12-01
At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. The Database on Demand empowers users to perform certain actions that have traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently three major RDBMS (relational database management system) vendors are offered. In this article we show the actual status of the service after almost three years of operations, some insight into our redesigned software engineering, and the near-future evolution.
Will Empowerment of USAF Program Managers Mitigate the Acquisitions Crisis
2016-06-10
Acronyms: FAR, Federal Acquisition Regulations; GAO, Government Accountability Office; MDAP, Major Defense Acquisition Program; USAF, United States Air Force. ...actually run the project. The Government Accountability Office (GAO), along with many other organizations, including Congress in their 2016 National... [Footnote: Government Accountability Office (GAO), GAO-06-110, Best Practices: Better Support of Weapons Systems Program Managers Needed to...]
ERIC Educational Resources Information Center
Alnuaimi, Qussay A. B.
2015-01-01
We present an Aviation Cost Risk Management (CRM) methodology designed for airline companies that need to run projects beyond their normal operations. These projects are critical to the survival of such organizations, as well as to their development and performance. An aviation crisis can have considerable impact upon the value of the firm. Risk managers must focus…
An Open IMS-Based User Modelling Approach for Developing Adaptive Learning Management Systems
ERIC Educational Resources Information Center
Boticario, Jesus G.; Santos, Olga C.
2007-01-01
Adaptive LMSs have not yet reached the eLearning marketplace due to open methodological, technological and management issues. At the aDeNu group, we have been working on two key challenges for the last five years in related research projects. First, developing the general framework and a running architecture to support the adaptive life cycle (i.e.,…
ADAMS: AIRLAB data management system user's guide
NASA Technical Reports Server (NTRS)
Conrad, C. L.; Ingogly, W. F.; Lauterbach, L. A.
1986-01-01
The AIRLAB Data Management System (ADAMS) is an online environment that supports research at NASA's AIRLAB. ADAMS provides an easy-to-use interactive interface that eases the task of documenting and managing information about experiments and improves communication among project members. Data managed by ADAMS include information about experiments, the data sets produced, the software and hardware available in AIRLAB as well as that used in a particular experiment, and an online engineer's notebook. The User's Guide provides an overview of the ADAMS system as well as details of the operations available within ADAMS. A tutorial section takes the user step by step through a typical ADAMS session. ADAMS runs under the VAX/VMS operating system and uses the ORACLE database management system and DEC/FMS (the Forms Management System). ADAMS can be run from any VAX connected via DECnet to the ORACLE host VAX. The ADAMS system is designed for simplicity, so interactions with the underlying data management system and communications network are hidden from the user.
A Computing Infrastructure for Supporting Climate Studies
NASA Astrophysics Data System (ADS)
Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team
2011-12-01
Climate change is one of the major challenges facing the Earth in the 21st century. Scientists build many models to simulate the past and predict climate change for the next decades or century. Most of the models run at low resolution, with some targeting high resolution in linkage to practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA Climate@Home project, an effort to build a supercomputer based on advanced computing technologies, such as cloud computing, grid computing, and others. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage potential spikes in access to the centralized components, such as the grid computing server for dispatching and collecting model run results; 2) a grid computing engine is developed based on MapReduce to dispatch models and model configurations, and to collect simulation results and contribution statistics; 3) a portal serves as the entry point for the project, providing management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can follow Twitter and Facebook to get the latest news about the project. This paper will introduce the latest progress of the project and demonstrate the operational system during the AGU Fall Meeting. It will also discuss how this technology can become a trailblazer for other climate studies and relevant sciences, and share how the challenges in computation and software integration were solved.
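The dispatch-and-collect loop described above is essentially a map-reduce over model configurations. A toy sketch of the pattern, with a local process pool standing in for the grid computing engine and an invented model function:

```python
"""Toy map-reduce over model configurations (pattern illustration only).
A process pool stands in for the grid engine; run_model is a placeholder."""
from itertools import product
from multiprocessing import Pool

def run_model(config):
    """Placeholder for one model run; returns a fake skill score for the config."""
    sensitivity, diffusivity = config
    score = -abs(sensitivity - 3.0) - abs(diffusivity - 1.2)  # invented metric
    return {"config": config, "score": score}

if __name__ == "__main__":
    # "Map": dispatch every parameter combination as an independent model run.
    grid = list(product([2.0, 2.5, 3.0, 3.5], [0.8, 1.0, 1.2, 1.4]))
    with Pool() as pool:
        results = pool.map(run_model, grid)
    # "Reduce": collect the results and keep the best-calibrated configuration.
    best = max(results, key=lambda r: r["score"])
    print("best configuration:", best["config"], "score:", best["score"])
```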
1980-05-02
OBJECTIVES OF PROJECT: In April 1978, a Mission Assurance Conference was sponsored jointly by the Air Force Space... equipment or services from a corporation. The corporation in turn probably consists of thousands of people and a... once the project is assigned, it is run in the military department by a... controls manager, and project engineers? ... effective? Were we motivating them by rewarding them in some manner for good...
Security-aware Virtual Machine Allocation in the Cloud: A Game Theoretic Approach
2015-01-13
predecessor, however, this paper used empirical evidence and actual data from running experiments on the Amazon EC2 cloud. They began by running all 5... is through effective VM allocation management by the cloud provider to ensure delivery of maximum security for all cloud users. The negative...
Composite Crew Module: Primary Structure
NASA Technical Reports Server (NTRS)
Kirsch, Michael T.
2011-01-01
In January 2007, the NASA Administrator and Associate Administrator for the Exploration Systems Mission Directorate chartered the NASA Engineering and Safety Center to design, build, and test a full-scale crew module primary structure, using carbon fiber reinforced epoxy based composite materials. The overall goal of the Composite Crew Module project was to develop a team from the NASA family with hands-on experience in composite design, manufacturing, and testing in anticipation of future space exploration systems being made of composite materials. The CCM project was planned to run concurrently with the Orion project's baseline metallic design within the Constellation Program so that features could be compared and discussed without inducing risk to the overall Program. This report discusses the project management aspects of the project including team organization, decision making, independent technical reviews, and cost and schedule management approach.
dV/dt - Accelerating the Rate of Progress towards Extreme Scale Collaborative Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron
This report introduces publications presenting the results of a project that aimed to design a computational framework enabling computational experimentation at scale while supporting the model of "submit locally, compute globally". The project focuses on estimating application resource needs, finding the appropriate computing resources, acquiring those resources, deploying the applications and data on the resources, and managing applications and resources during the run.
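A minimal sketch of the "estimate needs, then find appropriate resources" step in such a framework; the resource catalogue and job profile below are invented for illustration.

```python
"""Greedy matching of estimated job needs to available resources (illustrative)."""

RESOURCES = [
    # invented catalogue: (name, cores, memory_gb, cost_per_hour)
    ("campus-cluster", 64, 256, 0.00),
    ("cloud-small", 8, 32, 0.40),
    ("cloud-large", 96, 384, 3.20),
]

def pick_resource(cores_needed: int, mem_needed_gb: int) -> str:
    """Return the cheapest resource satisfying the estimated requirements."""
    feasible = [r for r in RESOURCES
                if r[1] >= cores_needed and r[2] >= mem_needed_gb]
    if not feasible:
        raise RuntimeError("no resource satisfies the estimated needs")
    return min(feasible, key=lambda r: r[3])[0]

if __name__ == "__main__":
    # "Submit locally, compute globally": state the needs, let the framework place the job.
    print(pick_resource(cores_needed=32, mem_needed_gb=128))
```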
NASA Technical Reports Server (NTRS)
Howell, Gregory A.
2005-01-01
Commitments are between people, not schedules. Project management as practiced today creates a "commitment-free zone," because it assumes that people will commit to centrally managed schedules without providing a mechanism to ensure their work can be done. So they give it their best, but something always seems to come up: "I tried, but you know how it is." This form of project management does not provide a mechanism to ensure that what should be done can in fact be done at the required moment. Too often, promises made in coordination meetings are conditional and unreliable. It has been my experience that at times trust can be low and hard to build in this environment. The absence of reliable promises explains why on well-run projects, people are often only completing 30-50 percent of the deliverables they'd promised for the week. We all know what a promise is; we have plenty of experience making them and receiving them from others. So what's the problem? The sad fact is that the project environment, like many other work environments, is often so filled with systemic dishonesty that we don't expect promises to be reliable. Project managers excel when they manage their projects as networks of commitments and help their people learn to elicit and make reliable promises.
NASA Astrophysics Data System (ADS)
Wang, Wanshun; Chen, Zhuo; Li, Xiuwen
2018-03-01
Safety monitoring is very important in the operation and management of water resources and hydropower projects. It is an important means of understanding a dam's operating status, ensuring dam safety, safeguarding people's lives and property, and making full use of engineering benefits. This paper introduces the arrangement of an engineering safety monitoring system, using a water resource control project as an example. The monitoring results of each monitored item are analyzed intensively to show the operating status of the monitoring system and to provide a useful reference for similar projects.
FARM WASTE TO ENERGY: A SUSTAINABLE SOLUTION FOR SMALL-SCALE FARMS
Through this project, it was determined that the use of anaerobic digestion on small farms to manage varying waste streams and odor issues and to decrease fossil fuel consumption and cost is feasible. The feedstocks necessary to run the digester are r...
Anne E. Black; Peter Landres
2011-01-01
Current fire policy to restore ecosystem function and resiliency and reduce buildup of hazardous fuels implies a larger future role for fire (both natural and human ignitions) (USDA and USDOI 2000). Yet some fire management (such as building fire line, spike camps, or heli-spots) potentially causes both short- and long-term impacts to forest health. In the short run,...
NASA Astrophysics Data System (ADS)
Crootof, A.
2017-12-01
Understanding coupled human-water dynamics offers valuable insights to address fundamental water resources challenges posed by environmental change. With hydropower reshaping human-water interactions in mountain river basins, there is a need for a socio-hydrology framework—which examines two-way feedback loops between human and water systems—to more effectively manage water resources. This paper explores the cross-scalar interactions and feedback loops between human and water systems in river basins affected by run-of-the-river hydropower and highlights the utility of a socio-hydrology perspective to enhance water management in the face of environmental change. In the Himalayas, the rapid expansion of run-of-the-river hydropower—which diverts streamflow for energy generation—is reconfiguring the availability, location, and timing of water resources. This technological intervention in the river basin not only alters hydrologic dynamics but also shapes social outcomes. Using hydropower development in the highlands of Uttarakhand, India as a case study, I first illustrate how run-of-the-river projects transform human-water dynamics by reshaping the social and physical landscape of a river basin. Second, I emphasize how examining cross-scalar feedbacks among structural dynamics, social outcomes, and values and norms in this coupled human-water system can inform water management. Third, I present hydrological and social literature, raised separately, to indicate collaborative research needs and knowledge gaps for coupled human-water systems affected by run-of-the-river hydropower. The results underscore the need to understand coupled human-water dynamics to improve water resources management in the face of environmental change.
From Static to Dynamic: Choosing and Implementing a Web-Based CMS
ERIC Educational Resources Information Center
Kneale, Ruth
2008-01-01
Working as systems librarian for the Advanced Technology Solar Telescope (ATST), a project of the National Solar Observatory (NSO) based in Tucson, Arizona, a large part of the author's responsibilities involves running the web site. She began looking into content management systems (CMSs), specifically ones for website control. A CMS is generally…
Eiler, John H.; Masuda, Michele; Spencer, Ted R.; Driscoll, Richard J.; Schreck, Carl B.
2014-01-01
Chinook Salmon Oncorhynchus tshawytscha returns to the Yukon River basin have declined dramatically since the late 1990s, and detailed information on the spawning distribution, stock structure, and stock timing is needed to better manage the run and facilitate conservation efforts. A total of 2,860 fish were radio-tagged in the lower basin during 2002–2004 and tracked upriver. Fish traveled to spawning areas throughout the basin, ranging from several hundred to over 3,000 km from the tagging site. Similar distribution patterns were observed across years, suggesting that the major components of the run were identified. Daily and seasonal composition estimates were calculated for the component stocks. The run was dominated by two regional components comprising over 70% of the return. Substantially fewer fish returned to other areas, ranging from 2% to 9% of the return, but their collective contribution was appreciable. Most regional components consisted of several principal stocks and a number of small, spatially isolated populations. Regional and stock composition estimates were similar across years even though differences in run abundance were reported, suggesting that the differences in abundance were not related to regional or stock-specific variability. Run timing was relatively compressed compared with that in rivers in the southern portion of the species’ range. Most stocks passed through the lower river over a 6-week period, ranging in duration from 16 to 38 d. Run timing was similar for middle- and upper-basin stocks, limiting the use of timing information for management. The lower-basin stocks were primarily later-run fish. Although differences were observed, there was general agreement between our composition and timing estimates and those from other assessment projects within the basin, suggesting that the telemetry-based estimates provided a plausible approximation of the return. However, the short duration of the run, complex stock structure, and similar stock timing complicate management of Yukon River returns.
CBEO:N, Chesapeake Bay Environmental Observatory as a Cyberinfrastructure Node
NASA Astrophysics Data System (ADS)
Zaslavsky, I.; Piasecki, M.; Whitenack, T.; Ball, W. P.; Murphy, R.
2008-12-01
Chesapeake Bay Environmental Observatory (CBEO) is an NSF-supported project focused on studying hypoxia in Chesapeake Bay using advanced cyberinfrastructure (CI) technologies. The project is organized around four concurrent and interacting activities: 1) CBEO:S provides science and management context for the use of CI technologies, focusing on hypoxia and its non-linear dynamics as affected by management and climate; 2) CBEO:T constructs a locally accessible CBEO test bed prototype centered on spatio-temporal interpolation and advanced querying of model runs; 3) CBEO:N incorporates the test bed CI into national environmental observation networks; and 4) CBEO:E develops education and outreach components of the project that translate observational science for public consumption. CBEO:N activities, which are the focus of this paper, are four-fold: 1) constructing an online project portal to enable researchers to publish, discover, query, visualize and integrate project-related datasets of different types (the portal is based on technologies developed within the GEON (Geosciences Network) project, and has established the CBEO project data server as part of the GEON network of servers); 2) developing a CBEO node within the WATERS network, taking advantage of the CUAHSI Hydrologic Information System (HIS) Server technology, which supports online publication of observation data as web services and ontology-assisted data discovery; 3) developing new data structures and metadata to describe water quality observational data and model run output obtained for the Chesapeake Bay area, using data structures adopted and modified from the Observations Data Model of CUAHSI HIS; and 4) prototyping CBEO tools that can be re-used through the portal, in particular implementing a portal version of R-based spatial interpolation tools. The paper describes recent accomplishments in these four development areas and demonstrates how CI approaches transform research and data sharing in environmental observing systems.
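The spatial interpolation at the heart of the test bed was implemented with R-based tools; for illustration, here is the generic technique (inverse-distance weighting) as a minimal Python sketch, with invented station locations and values. This shows the idea only, not the CBEO implementation.

```python
"""Inverse-distance-weighted (IDW) interpolation, the generic technique only.
Station coordinates and observations are invented for illustration."""
import math

def idw(stations, x, y, power=2.0):
    """Interpolate a value at (x, y) from (xi, yi, value) observations."""
    num = den = 0.0
    for xi, yi, value in stations:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return value  # query point coincides with a station
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

if __name__ == "__main__":
    # dissolved-oxygen observations (mg/L) at three invented station locations
    obs = [(0.0, 0.0, 7.5), (10.0, 0.0, 4.2), (0.0, 10.0, 2.9)]
    print(f"DO at (4, 4): {idw(obs, 4.0, 4.0):.2f} mg/L")
```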
Self-service for software development projects and HPC activities
NASA Astrophysics Data System (ADS)
Husejko, M.; Høimyr, N.; Gonzalez, A.; Koloventzos, G.; Asbury, D.; Trzcinska, A.; Agtzidis, I.; Botrel, G.; Otto, J.
2014-05-01
This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions by both users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support has been clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with the server infrastructure running on the internal cloud computing infrastructure at CERN. This contribution illustrates how we plan to optimise the management of our services by means of an end-user-facing platform acting as a portal into all the related services for software projects, inspired by popular portals for open-source development such as SourceForge, GitHub and others. Furthermore, the contribution discusses recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN based on affordable hardware.
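Automating the "create a project, set access rights" requests could look like the following. The portal endpoint, payloads, and service names are invented for illustration; the real services (Git, Jira, wikis) each expose their own administrative APIs.

```python
"""Hypothetical portal back end: provision all tools for a new software project.
Endpoint, fields, and service names are invented for illustration."""
import requests

PORTAL = "https://devservices.example.org/api"  # invented endpoint

def create_project(token: str, name: str, owners: list) -> None:
    """One self-service request fans out to repo, tracker, and wiki provisioning."""
    headers = {"Authorization": f"Bearer {token}"}
    for service in ("git", "issue-tracker", "wiki"):
        response = requests.post(
            f"{PORTAL}/{service}/projects",
            json={"name": name, "owners": owners},
            headers=headers,
            timeout=30,
        )
        response.raise_for_status()
        print(f"{service}: created project {name!r}")

if __name__ == "__main__":
    create_project("secret-token", "beam-monitor", owners=["alice", "bob"])
```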
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moeckel, D.R.
A practical, objective guide for ranking projects based on risk-based priorities has been developed by Sun Pipe Line Co. The deliberately simple system guides decisions on how to allocate scarce company resources, because all managers employ the same criteria in weighing potential risks to the company versus benefits. Managers at all levels must continuously comply with an ever-growing number of legislative and regulatory requirements while at the same time trying to run their businesses effectively. The system is primarily designed for use as a compliance oversight and tracking process to document, categorize, and follow up on work concerning various issues or projects. That is, the system consists of an electronic database which is updated periodically and is used by various levels of management to monitor the progress of health, safety, environmental, and compliance-related projects. The criteria used in determining a risk factor and assigning a priority have also been adapted and found useful for evaluating other types of projects. The process enables management to better define the potential risks and/or loss of benefits being accepted when a project is rejected from an immediate work plan or budget. In times of financial austerity, it is extremely important that the right decisions are made at the right time.
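The ranking process described reduces to scoring each project against common criteria and sorting. A minimal sketch of that idea follows; the criteria, scales, and weights are invented, since the article does not reproduce Sun Pipe Line's actual scoring tables.

```python
"""Risk-based project ranking sketch; criteria and weights are invented."""

def risk_factor(likelihood: int, severity: int, compliance_driven: bool) -> float:
    """Combine 1-5 likelihood and severity scores; weight regulatory items upward."""
    base = likelihood * severity  # classic risk-matrix product
    return base * (1.5 if compliance_driven else 1.0)

PROJECTS = [
    # (name, likelihood 1-5, severity 1-5, compliance-driven?)
    ("valve replacement", 4, 5, True),
    ("office repaint", 2, 1, False),
    ("leak-detection upgrade", 3, 5, True),
]

for name, likelihood, severity, compliance in sorted(
        PROJECTS, key=lambda p: risk_factor(p[1], p[2], p[3]), reverse=True):
    print(f"{risk_factor(likelihood, severity, compliance):5.1f}  {name}")
```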
Global Change adaptation in water resources management: the Water Change project.
Pouget, Laurent; Escaler, Isabel; Guiu, Roger; Mc Ennis, Suzy; Versini, Pierre-Antoine
2012-12-01
In recent years, water resources management has been facing new challenges due to increasing changes and their associated uncertainties, such as changes in climate, water demand or land use, which can be grouped under the term Global Change. The Water Change project (LIFE+ funding) developed a methodology and a tool to assess Global Change impacts on water resources, thus helping river basin agencies and water companies in their long-term planning and in the definition of adaptation measures. The main result of the project was the creation of a step-by-step methodology to assess Global Change impacts and define adaptation strategies. This methodology was tested in the Llobregat river basin (Spain) with the objective of being applicable to any water system. It includes several steps, such as setting up the problem with a DPSIR framework, developing Global Change scenarios, running river basin models, and performing a cost-benefit analysis to define optimal adaptation strategies. The methodology was supported by the creation of a flexible modelling system, which can link a wide range of models, such as hydrological, water quality, and water management models. The tool allows users to integrate their own models into the system, which can then exchange information among them automatically. This makes it possible to simulate the interactions among multiple components of the water cycle and to run a large number of Global Change scenarios quickly. The outcomes of this project make it possible to define and test different sets of adaptation measures for the basin that can be further evaluated through cost-benefit analysis. The integration of the results contributes to efficient decision-making on how to adapt to Global Change impacts.
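The "link models, run scenarios, compare measures by cost-benefit" loop can be sketched in a few lines. The models below are trivial placeholders with invented numbers, not the Llobregat basin models.

```python
"""Toy scenario loop: hydrology -> management model -> cost-benefit comparison.
All models and numbers are invented placeholders."""

def hydrology(rainfall_mm: float) -> float:
    """Placeholder hydrological model: annual runoff available, in hm3."""
    return 0.4 * rainfall_mm

def shortage(runoff_hm3: float, demand_hm3: float, extra_supply_hm3: float) -> float:
    """Placeholder management model: unmet demand after an adaptation measure."""
    return max(0.0, demand_hm3 - runoff_hm3 - extra_supply_hm3)

MEASURES = {  # invented measures: name -> (extra supply hm3/yr, annual cost M EUR)
    "do nothing": (0.0, 0.0),
    "water reuse plant": (40.0, 12.0),
    "desalination": (80.0, 35.0),
}

for rain in (600.0, 450.0):  # baseline vs a drier climate scenario
    print(f"rainfall {rain:.0f} mm:")
    for name, (supply, cost) in MEASURES.items():
        deficit = shortage(hydrology(rain), demand_hm3=260.0, extra_supply_hm3=supply)
        damage = 0.5 * deficit  # invented shortage-damage cost, M EUR per hm3
        print(f"  {name:18s} deficit={deficit:6.1f} hm3  "
              f"total cost={cost + damage:6.1f} M EUR")
```

Under the drier scenario this toy comparison favours the reuse plant (cost 12 plus damage 20 gives 32) over desalination (35), which is exactly the kind of trade-off the cost-benefit step is meant to expose.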
The AIST Managed Cloud Environment
NASA Astrophysics Data System (ADS)
Cook, S.
2016-12-01
ESTO is currently in the process of developing and implementing the AIST Managed Cloud Environment (AMCE) to offer cloud computing services to ESTO-funded PIs for conducting their project research. AIST will provide projects access to a cloud computing framework that incorporates NASA security, technical, and financial standards, on which projects can freely store, run, and process data. Currently, many projects led by research groups outside of NASA lack awareness of the requirements, or the resources, to implement NASA standards in their research, which limits the likelihood of infusing the work into NASA applications. Offering this environment to PIs will allow them to conduct their project research using the many benefits of cloud computing: in addition to the well-known cost and time savings, it also provides scalability and flexibility. The AMCE will facilitate infusion and end-user access by ensuring standardization and security. This approach will ultimately benefit ESTO, the science community, and the research, allowing the technology developments to have quicker and broader applications.
The AMCE (AIST Managed Cloud Environment)
NASA Astrophysics Data System (ADS)
Cook, S.
2017-12-01
ESTO has developed and implemented the AIST Managed Cloud Environment (AMCE) to offer cloud computing services to SMD-funded PIs for conducting their project research. AIST will provide projects access to a cloud computing framework that incorporates NASA security, technical, and financial standards, on which projects can freely store, run, and process data. Currently, many projects led by research groups outside of NASA lack awareness of the requirements, or the resources, to implement NASA standards in their research, which limits the likelihood of infusing the work into NASA applications. Offering this environment to PIs allows them to conduct their project research using the many benefits of cloud computing: in addition to the well-known cost and time savings, it also provides scalability and flexibility. The AMCE facilitates infusion and end-user access by ensuring standardization and security. This approach will ultimately benefit ESTO, the science community, and the research, allowing the technology developments to have quicker and broader applications.
Leadership Class Configuration Interaction Code - Status and Opportunities
NASA Astrophysics Data System (ADS)
Vary, James
2011-10-01
With support from SciDAC-UNEDF (www.unedf.org) nuclear theorists have developed and are continuously improving a Leadership Class Configuration Interaction Code (LCCI) for forefront nuclear structure calculations. The aim of this project is to make state-of-the-art nuclear structure tools available to the entire community of researchers, including graduate students. The project includes codes such as NuShellX, MFDn and BIGSTICK that run on a range of computers, from laptops to leadership-class supercomputers. Codes, scripts, test cases and documentation have been assembled, are under continuous development, and are scheduled for release to the entire research community in November 2011. A covering script that accesses the appropriate code and supporting files is under development. In addition, a Data Base Management System (DBMS) that records key information from large production runs and archives the results of those runs has been developed (http://nuclear.physics.iastate.edu/info/) and will be released. Following an outline of the project, the code structure, capabilities, the DBMS and current efforts, I will suggest a path forward that would benefit greatly from a significant partnership between researchers who use the codes, code developers and the National Nuclear Data efforts. This research is supported in part by DOE under grant DE-FG02-87ER40371 and grant DE-FC02-09ER41582 (SciDAC-UNEDF).
What "Exactly" Do You Want Me to Do? Analysis of a Criterion Referenced Assessment Project
ERIC Educational Resources Information Center
Jewels, Tony; Ford, Marilyn; Jones, Wendy
2007-01-01
In tertiary institutions in Australia, and no doubt elsewhere, there is increasing pressure for accountability. No longer are academics assumed "a priori" to be responsible and capable of self-management in teaching and assessing the subjects they run. Procedures are being dictated more from the "top down". Although academics…
Learning the ABCs (of Project Management)
NASA Technical Reports Server (NTRS)
Frandsen, Allan
2003-01-01
To lead a project effectively, one has to establish and maintain the flexibility to take appropriate actions when needed. Overconstrained situations should be avoided. To get on top of matters and stay there, a manager needs to anticipate what it will take to successfully complete the job. Physical and financial resources, personnel, and management structure are all important considerations. Carving out the necessary turf up front can make a world of difference to the project's outcome. After the "what," "where," and "when" of a project are nailed down, the next question is "how" to do the job. When I first interviewed for the job of Science Payload Manager on the Advanced Composition Explorer (ACE) mission, Dr. Edward Stone (ACE Principal Investigator) asked, "Al, give me an idea of your management style." It was a question I had not considered before. I thought about it for a few seconds and then answered, "Well, the first descriptive term that comes to mind is the word 'tranquility'." That seemed to startle him. So I added, "I guess what I mean is that if the situation is tranquil and the project is running smoothly, then I've anticipated all the problems and taken the necessary actions to head them off." He then asked: "Have you ever reached this state?" "No," I admitted, "but I strive for it." That seemed to satisfy him, because I got the job.
77 FR 51993 - Western Technical College; Notice of Availability of Environmental Assessment
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-28
... hydroelectric generation at the dam. The dam is operated manually in a run-of-river mode (i.e., an operating...) distribution line; and (5) appurtenant facilities. The project would be operated in a run-of-river mode using... could otherwise enter project waters or adjacent non-project lands; Operating the project in a run-of...
2014-05-21
CAPE CANAVERAL, Fla. – From left, Chirold Epp, the Autonomous Landing and Hazard Avoidance Technology, or ALHAT, project manager, and Jon Olansen, Morpheus project manager, speak to members of the media near the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. Media also viewed Morpheus inside a facility near the landing facility. Project Morpheus tests NASA’s ALHAT and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Frankie Martin
Multiple system modelling of waste management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eriksson, Ola, E-mail: ola.eriksson@hig.se; Department of Building, Energy and Environmental Engineering, University of Gaevle, SE 801 76 Gaevle; Bisaillon, Mattias, E-mail: mattias.bisaillon@profu.se
2011-12-15
Highlights: Linking of models will provide a more complete, correct and credible picture of the systems. The linking procedure is easy to perform and also leads to activation of project partners. The simulation procedure is a bit more complicated and calls for the ability to run both models. Abstract: Due to increased environmental awareness, planning and performance of waste management has become more and more complex. Waste management has therefore long been subject to different types of modelling. Another field with long experience of modelling and a systems perspective is energy systems. The two modelling traditions have developed side by side, but so far there have been very few attempts to combine them. Waste management systems can be linked to energy systems through incineration plants. Waste management can be modelled at a quite detailed level, whereas the surrounding systems are modelled in a more simplistic way. This is a problem, as previous studies have shown that assumptions about the surrounding system often turn out to be important for the conclusions. In this paper it is shown how two models, one for the district heating system (MARTES) and another for the waste management system (ORWARE), can be linked together. The strengths and weaknesses of model linking are discussed in comparison with simplistic assumptions about effects in the energy and waste management systems. It is concluded that the linking of models provides a more complete, correct and credible picture of the consequences of different simultaneous changes in the systems. The linking procedure is easy to perform and also leads to activation of project partners. However, the simulation procedure is a bit more complicated and calls for the ability to run both models.
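The coupling described above can be pictured as a fixed-point exchange between the two models: the waste model offers recovered heat, the district-heating model reports how much of it can be absorbed, and the exchange repeats until the two agree. The sketch below is illustrative only; run_orware and run_martes are hypothetical stand-ins for the real ORWARE and MARTES interfaces, and all numbers are invented.

```python
# Minimal co-simulation loop sketch; function names and figures are
# hypothetical stand-ins, not the actual ORWARE/MARTES interfaces.

def run_orware(waste_to_incineration_tonnes):
    """Waste-management side: monthly recovered heat (GWh) from incineration."""
    heat_per_tonne = 2.8e-3  # assumed recovery factor, GWh/tonne
    return [waste_to_incineration_tonnes / 12 * heat_per_tonne] * 12

def run_martes(monthly_waste_heat_gwh):
    """District-heating side: how much offered waste heat is absorbed each month."""
    monthly_demand_gwh = [60, 55, 45, 30, 15, 8, 6, 8, 15, 30, 50, 60]
    return [min(offer, demand)
            for offer, demand in zip(monthly_waste_heat_gwh, monthly_demand_gwh)]

waste = 200_000.0  # tonnes/year sent to incineration (assumed scenario input)
for _ in range(200):
    offered = run_orware(waste)
    accepted = run_martes(offered)
    utilisation = sum(accepted) / sum(offered)
    if utilisation > 0.999:       # essentially all heat absorbed: converged
        break
    waste *= utilisation          # divert unabsorbed waste elsewhere

print(f"Incineration input at convergence: {waste:,.0f} tonnes/year")
```

The fixed-point loop is one simple way to realize the "exchange information and run both models" idea; the real linked MARTES/ORWARE simulation is of course richer than this toy.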
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This technical note describes the current capabilities and availability of the Automated Dredging and Disposal Alternatives Management System (ADDAMS). The technical note replaces the earlier Technical Note EEDP-06-12, which should be discarded. Planning, design, and management of dredging and dredged material disposal projects often require complex or tedious calculations or involve complex decision-making criteria. In addition, the evaluations often must be done for several disposal alternatives or disposal sites. ADDAMS is a personal computer (PC)-based system developed to assist in making such evaluations in a timely manner. ADDAMS contains a collection of computer programs (applications) designed to assist in managing dredging projects. This technical note describes the system, currently available applications, mechanisms for acquiring and running the system, and provisions for revision and expansion.
Rich Support for Heterogeneous Polar Data in RAMADDA
NASA Astrophysics Data System (ADS)
McWhirter, J.; Crosby, C. J.; Griffith, P. C.; Khalsa, S.; Lazzara, M. A.; Weber, W. J.
2013-12-01
Difficult-to-navigate environments, tenuous logistics, strange forms, deeply rooted cultures - these are all experiences shared by Polar scientists in the field as well as by the developers of the underlying data management systems back in the office. Among the key data management challenges that Polar investigations present are the heterogeneity and complexity of the data generated. Polar regions are intensely studied across many science domains through a variety of techniques - satellite and aircraft remote sensing, in-situ observation networks, modeling, sociological investigations, and extensive PI-driven field project data collection. While many data management efforts focus on large homogeneous collections of data targeting specific science domains (e.g., satellite, GPS, modeling), multi-disciplinary efforts that focus on Polar data need to address a wide range of data formats, science domains and user communities. There is growing use of the RAMADDA (Repository for Archiving, Managing and Accessing Diverse Data) system to manage and provide services for Polar data. RAMADDA is a freely available, extensible data repository framework that supports a wide range of data types and services for the creation, management, discovery and use of data and metadata. The broad range of capabilities provided by RAMADDA and its extensibility make it well suited as an archive solution for Polar data. RAMADDA can run in a number of diverse contexts - as a centralized archive, at local institutions, and even on an investigator's laptop in the field, providing in-situ metadata and data management services. We are actively developing archives and support for a number of Polar initiatives: - NASA Arctic-Boreal Vulnerability Experiment (ABoVE): ABoVE is a long-term multi-instrument field campaign that will make use of a wide range of data. We have developed an extensive ontology of program, project and site metadata in RAMADDA, in support of the ABoVE Science Definition Team and Project Office. See: http://above.nasa.gov - UNAVCO Terrestrial Laser Scanning (TLS): UNAVCO's Polar program provides support for terrestrial laser scanning field projects. We are using RAMADDA to archive these field projects, with over 40 projects ingested to date. - NASA-IceBridge: As part of the NASA LiDAR Access System (NLAS) project, RAMADDA supports numerous airborne and satellite LiDAR data sets - GLAS, LVIS, ATM, Paris, McORDS, etc. - Antarctic Meteorological Research Center (AMRC): Satellite and surface observation network - Support for numerous other data from AON-ACADIS, Greenland GC-Net, NOAA-GMD, AmeriFlux, etc. In this talk we will discuss some of the challenges that Polar data brings to geoinformatics and describe the approaches we have taken to address these challenges in RAMADDA.
eWaterCycle: A high resolution global hydrological model
NASA Astrophysics Data System (ADS)
van de Giesen, Nick; Bierkens, Marc; Drost, Niels; Hut, Rolf; Sutanudjaja, Edwin
2014-05-01
In 2013, the eWaterCycle project was started with the ambitious goal of running a high-resolution global hydrological model. The starting point was the PCR-GLOBWB model built by Utrecht University. The software behind this model is partially being re-engineered to enable it to run in a High Performance Computing (HPC) environment, with a target spatial resolution of 1 km x 1 km. The idea is also to run the model in real-time and forecasting mode, using data assimilation. An on-demand hydraulic model will be available for detailed flow and flood forecasting in support of navigation and disaster management. The project faces a set of scientific challenges. First, to enable the model to run in an HPC environment, model runs were analyzed to determine on which parts of the program most CPU time was spent. These parts were re-coded with Open MPI to allow for parallel processing. Different parallelization strategies are conceivable; in our case, it was decided to use watershed logic as a first step to distribute the analysis. There is rather limited recent experience with HPC in hydrology, and there is much to be learned and adjusted, both on the hydrological modeling side and on the computer science side. For example, an interesting early observation was that hydrological models are, due to their localized parameterization, much more memory-intensive than models in sister disciplines such as meteorology and oceanography. Because swapping information between CPU and hard drive would be prohibitively slow, memory management becomes crucial. A standard Ensemble Kalman Filter (EnKF) would, for example, have excessive memory demands. To circumvent these problems, an alternative to the EnKF was developed that produces equivalent results. This presentation shows the most recent results from the model, including a 5 km x 5 km simulation and a proof of concept for the new data assimilation approach. Finally, some early ideas about the financial sustainability of an operational global hydrological model are presented.
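The watershed-based distribution strategy mentioned above can be illustrated with a minimal mpi4py sketch: whole watersheds are assigned to MPI ranks, so cells that drain to the same outlet stay on one process and no upstream/downstream communication is needed within a time step. The watershed list, the stub model call, and the static decomposition are assumptions for the example, not the actual eWaterCycle code.

```python
# Minimal sketch of watershed-based work distribution with mpi4py.
from mpi4py import MPI

def simulate_timestep(watershed_id):
    """Stand-in for one watershed update; returns runoff in m3/s."""
    return 1.0  # placeholder value

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Hypothetical watershed IDs; each rank takes every size-th watershed.
watersheds = [f"basin_{i:03d}" for i in range(256)]
mine = watersheds[rank::size]

local_runoff = sum(simulate_timestep(w) for w in mine)
total_runoff = comm.reduce(local_runoff, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Global runoff this step: {total_runoff:.1f} m^3/s")
```

Run with, e.g., `mpirun -n 8 python distribute.py`; a production decomposition would also balance watershed sizes across ranks rather than dealing them out round-robin.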
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-25
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Project No. 13880-000] Cuffs Run Pumped..., Motions To Intervene, and Competing Applications On November 18, 2010, Cuffs Run Pumped Storage, LLC filed... to study the feasibility of the Cuffs Run Pumped Storage Project, located on Cuffs Run and the...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-30
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Project No. 14376-001] Cave Run Energy...: July 21, 2013. d. Submitted By: Cave Run Energy, LLC. e. Name of Project: Cave Run Hydroelectric...: 18 CFR 5.3 of the Commission's regulations. h. Potential Applicant Contact: Mark Boumansour, Cave Run...
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.
2016-01-01
Now more than ever, scientific results are dependent on sophisticated software and analysis. Why should we trust code written by others? How do you ensure your own code produces sensible results? How do you make sure it continues to do so as you update, modify, and add functionality? Software testing is an integral part of code validation and writing tests should be a requirement for any software project. I will talk about Python-based tools that make managing and running tests much easier and explore some statistics for projects hosted on GitHub that contain tests.
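As a flavor of the Python-based tooling the talk refers to, here is a minimal pytest example; pytest is one of the common tools in this space, and the function under test is invented for illustration.

```python
# Minimal pytest example: a unit under test plus two tests.
import math
import pytest

def orbital_period(semi_major_axis_au):
    """Kepler's third law for a body orbiting the Sun (period in years)."""
    if semi_major_axis_au <= 0:
        raise ValueError("semi-major axis must be positive")
    return math.sqrt(semi_major_axis_au ** 3)

def test_earth_period_is_one_year():
    assert orbital_period(1.0) == pytest.approx(1.0)

def test_invalid_axis_raises():
    with pytest.raises(ValueError):
        orbital_period(-1.0)
```

Running `pytest` in the containing directory discovers and executes both tests; wiring that same command into a continuous-integration service is what keeps the tests running as the code is updated, modified, and extended.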
Brinker, Titus Josef; Rudolph, Stefanie; Richter, Daniela; von Kalle, Christof
2018-05-11
This article describes the DataBox project, which offers a perspective on a new health data management solution in Germany. DataBox was initially conceptualized as a repository of individual lung cancer patient data (structured and unstructured). The patient is the owner of the data and is able to share his or her data with different stakeholders. Data are transferred, displayed, and stored online, but not archived. In the long run, the project aims at replacing the conventional method of paper- and storage-device-based handling of data for all patients in Germany, leading to better organization and availability of data, which reduces duplicate diagnostic procedures and treatment errors, and enables the training as well as the usage of artificial intelligence algorithms on large datasets. ©Titus Josef Brinker, Stefanie Rudolph, Daniela Richter, Christof von Kalle. Originally published in JMIR Cancer (http://cancer.jmir.org), 11.05.2018.
One University's Strategy for Keeping International Projects Running Smoothly
ERIC Educational Resources Information Center
Fischer, Karin
2009-01-01
This article describes how a university tackled some of the basic challenges of internationalizing its campuses. The University of Washington created the Global Support Project, a one-stop shop for faculty and staff members doing research or running programs abroad. The project is run by senior administrators but relies on designated go-to people…
Coordinating Centers in Cancer-Epidemiology Research: The Asia Cohort Consortium Coordinating Center
Rolland, Betsy; Smith, Briana R; Potter, John D
2011-01-01
Although it is tacitly recognized that a good Coordinating Center (CC) is essential to the success of any multi-site collaborative project, very little study has been done on what makes a CC successful, why some CCs fail, or how to build a CC that meets the needs of a given project. Moreover, very little published guidance is available, as few CCs outside the clinical-trial realm write about their work. The Asia Cohort Consortium (ACC) is a collaborative cancer-epidemiology research project that has made strong scientific and organizational progress over the past three years by focusing its CC on the following activities: collaboration development; operations management; statistical and data management; and communications infrastructure and tool development. Our hope is that, by sharing our experience building the ACC CC, we can begin a conversation about what it means to run a coordinating center for multi-institutional collaboration in cancer epidemiology, help other collaborative projects solve some of the issues associated with collaborative research, and learn from others. PMID:21803842
Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios
Banta, Edward R.
2014-01-01
Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.
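For readers who prefer code to a GUI, the same scenario idea can be expressed with the open-source flopy package. This is a hedged sketch only: the internals of Scenario Manager are not described here, and the file names, well location, pumping rate, and output time below are hypothetical. It assumes a MODFLOW-2005 executable is available on the PATH.

```python
# Sketch of a "scenario": load a calibrated model, add a withdrawal well
# as a new stress, re-run, and extract heads for display.
import flopy

# Load a calibrated model (hypothetical name file).
m = flopy.modflow.Modflow.load("calibrated.nam", check=False)

# Add a proposed withdrawal well: layer 0, row 40, column 30, -500 units/day.
wel = flopy.modflow.ModflowWel(
    m, stress_period_data={0: [[0, 40, 30, -500.0]]}
)
m.write_input()
success, _ = m.run_model(silent=True)  # requires mf2005 on PATH

# Extract simulated heads for mapping/charting, as Scenario Analyzer would
# (head file name and output time are assumptions).
heads = flopy.utils.HeadFile("calibrated.hds").get_data(totim=1.0)
print("Head at the well cell:", heads[0, 40, 30])
```

The division of labor mirrors the two programs: the first half of the sketch plays the Scenario Manager role (modify stresses, run), the second half the Scenario Analyzer role (extract and display output).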
FOSS Tools for Research Data Management
NASA Astrophysics Data System (ADS)
Stender, Vivien; Jankowski, Cedric; Hammitzsch, Martin; Wächter, Joachim
2017-04-01
Established initiatives and organizations, e.g. the Initiative for Scientific Cyberinfrastructures (NSF, 2007) or the European Strategy Forum on Research Infrastructures (ESFRI, 2008), promote and foster the development of sustainable research infrastructures. These infrastructures aim at providing services that support scientists in searching, visualizing and accessing data, in collaborating and exchanging information, and in publishing data and other results. In this regard, Research Data Management (RDM) gains importance and thus requires support by appropriate tools integrated in these infrastructures. Different projects provide their own solutions to manage research data. Within two projects - SUMARIO for land and water management and TERENO for environmental monitoring - solutions to manage research data have been developed based on Free and Open Source Software (FOSS) components. The resulting framework provides essential components for harvesting, storing and documenting research data, as well as for discovering, visualizing and downloading these data on the basis of standardized services, stimulated considerably by the enhanced data management approaches of Spatial Data Infrastructures (SDI). In order to fully exploit the potential of these developments for enhancing data management in the Geosciences, the publication of software components, e.g. via GitHub, is not sufficient. We will use our experience to move these solutions into the cloud, e.g. as PaaS or SaaS offerings. Our contribution presents data management solutions for the Geosciences developed in the two projects. A construction kit of FOSS components builds the backbone for the assembly and implementation of project-specific platforms. Furthermore, an approach is presented to stimulate the reuse of FOSS RDM solutions with cloud concepts. In further projects, specific RDM platforms can then be set up much faster, customized to individual needs, and tools can be added at run-time.
NASA Astrophysics Data System (ADS)
Purwanggono, Bambang; Margarette, Anastasia
2017-12-01
The completion time of highway construction matters greatly for smooth transportation, all the more so because motor vehicle ownership is expected to increase each year. This study was therefore conducted to analyze the constraints found in an infrastructure development project. The research was conducted on the Jatingaleh Underpass Project, Semarang, and was carried out while the project was running; at the time, implementation was experiencing delays. The research aims to find out which constraints occur in the execution of a road infrastructure project, in particular those that cause delays. The method used to find the root causes is the fishbone diagram, from which possible means of mitigation are obtained, coupled with the RFMEA method, which is used to determine the critical risks that must be addressed immediately in a road infrastructure project. The tabulated data in this study indicate that the most feasible mitigation measure is to draw up Standard Operating Procedure (SOP) recommendations for handling the utilities that disrupt project implementation. The risk assessment process was carried out systematically based on ISO 31000:2009 on risk management, and for the determination of delay variables, the process-group requirements of ISO 21500:2013 on project management were used.
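For concreteness, a minimal sketch of an RFMEA-style ranking is shown below. In this style of analysis, each risk gets a risk score (probability x impact) and a risk priority number (RPN = probability x impact x detection), and risks whose values exceed the respective thresholds are flagged as critical; thresholding at the median is one common convention. The risk entries and numbers are illustrative assumptions, not data from the Jatingaleh project.

```python
# Sketch of an RFMEA-style critical-risk ranking (illustrative data).
risks = [
    # (description, probability 1-10, impact 1-10, detection 1-10)
    ("Utility lines found in the excavation path", 8, 9, 7),
    ("Late delivery of precast segments",          5, 6, 3),
    ("Heavy rain floods the underpass works",      6, 7, 4),
]

def median(values):
    s = sorted(values)
    return s[len(s) // 2]

scored = [(d, p * i, p * i * det) for d, p, i, det in risks]
score_med = median([s for _, s, _ in scored])
rpn_med = median([r for _, _, r in scored])

for desc, score, rpn in sorted(scored, key=lambda t: -t[2]):
    critical = score >= score_med and rpn >= rpn_med
    print(f"{desc}: score={score}, RPN={rpn}, critical={critical}")
```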
Gassman, Philip W.; Tisl, J.A.; Palas, E.A.; Fields, C.L.; Isenhart, T.M.; Schilling, K.E.; Wolter, C.F.; Seigley, L.S.; Helmers, M.J.
2010-01-01
Coldwater trout streams are important natural resources in northeast Iowa. Extensive efforts have been made by state and federal agencies to protect and improve water quality in northeast Iowa streams that include Sny Magill Creek and Bloody Run Creek, which are located in Clayton County. A series of three water quality projects were implemented in Sny Magill Creek watershed during 1988 to 1999, which were supported by multiple agencies and focused on best management practice (BMP) adoption. Water quality monitoring was performed during 1992 to 2001 to assess the impact of these installed BMPs in the Sny Magill Creek watershed using a paired watershed approach, where the Bloody Run Creek watershed served as the control. Conservation practice adoption still occurred in the Bloody Run Creek watershed during the 10-year monitoring project and accelerated after the project ended, when a multiagency supported water quality project was implemented during 2002 to 2007. Statistical analysis of the paired watershed results using a pre/post model indicated that discharge increased 8% in Sny Magill Creek watershed relative to the Bloody Run Creek watershed, turbidity declined 41%, total suspended sediment declined 7%, and NOx-N (nitrate-nitrogen plus nitrite-nitrogen) increased 15%. Similar results were obtained with a gradual change statistical model. The weak sediment reductions and increased NOx-N levels were both unexpected and indicate that dynamics between adopted BMPs and stream systems need to be better understood. Fish surveys indicate that conditions for supporting trout fisheries have improved in both streams. Important lessons to be taken from the overall study include (1) committed project coordinators, agency collaborators, and landowners/producers are all needed for successful water quality projects; (2) smaller watershed areas should be used in paired studies; (3) reductions in stream discharge may be required in these systems in order for significant sediment load decreases to occur; (4) long-term monitoring on the order of decades can be required to detect meaningful changes in water quality in response to BMP implementation; and (5) all consequences of specific BMPs need to be considered when considering strategies for watershed protection.
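The pre/post paired-watershed comparison used above can be sketched in a few lines: regress the treated watershed's values on the control's during the calibration (pre) period, then express post-period departures from that relationship as a percent change. The numbers below are invented and the study's actual statistical models are more detailed; only the structure of the comparison is shown.

```python
# Hedged sketch of a pre/post paired-watershed effect estimate.
import numpy as np

def fit(x, y):
    """Least-squares slope and intercept of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return intercept, slope

# Paired observations (e.g., turbidity) for control and treated watersheds.
pre_ctrl  = np.array([12., 18., 25., 30., 22., 15.])
pre_trt   = np.array([14., 20., 28., 34., 25., 17.])
post_ctrl = np.array([13., 19., 27., 31., 21., 16.])
post_trt  = np.array([ 9., 13., 17., 20., 14., 11.])

a, b = fit(pre_ctrl, pre_trt)        # calibration-period relationship
expected = a + b * post_ctrl         # what "no treatment effect" predicts
change = (post_trt.mean() - expected.mean()) / expected.mean()
print(f"Estimated treatment effect: {change:+.0%}")
```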
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waye, Scot
Power electronics that use high-temperature devices pose a challenge for thermal management. With the devices running at higher temperatures and having a smaller footprint, the heat fluxes increase over previous power electronic designs. This project overview presents an approach to examining and designing thermal management strategies through cooling technologies that keep devices within temperature limits, dissipate the heat generated by the devices, and protect electrical interconnects and other components for inverter, converter, and charger applications. The analysis, validation, and demonstration take a multi-scale approach over the device, module, and system levels to reduce size, weight, and cost.
Run Environment and Data Management for Earth System Models
NASA Astrophysics Data System (ADS)
Widmann, H.; Lautenschlager, M.; Fast, I.; Legutke, S.
2009-04-01
The Integrating Model and Data Infrastructure (IMDI) developed and maintained by the Model and Data Group (M&D) comprises the Standard Compile Environment (SCE) and the Standard Run Environment (SRE). The IMDI software has a modular design, which allows a suite of model components to be combined and coupled, and the individual tasks to be executed independently and on various platforms. The modular structure also enables extension to new model combinations and new platforms. The SRE presented here enables the configuration and execution of earth system model experiments, from model integration up to storage and visualization of data. We focus on recently implemented tasks such as synchronous database filling, graphical monitoring, and the automatic generation of metadata in XML form during run time. We also address the capability to run experiments in heterogeneous IT environments with different computing systems for model integration, data processing and storage. These features are demonstrated for model configurations and platforms used in current or upcoming projects, e.g. MILLENNIUM or IPCC AR5.
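The run-time metadata generation can be pictured with a small sketch using Python's standard XML library. The element names below are hypothetical, since the actual IMDI/SRE schema is not given here; only the mechanism of emitting structured run metadata is illustrated.

```python
# Sketch: emit run-time experiment metadata as XML (hypothetical schema).
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

run = ET.Element("experiment", id="mpiesm_millennium_001")
ET.SubElement(run, "model").text = "ECHAM5/MPIOM"
ET.SubElement(run, "platform").text = "blizzard.example.org"
ET.SubElement(run, "started").text = datetime.now(timezone.utc).isoformat()

out = ET.SubElement(run, "output")
ET.SubElement(out, "file", format="netCDF").text = "exp001_echam_200901.nc"

ET.ElementTree(run).write("exp001_metadata.xml",
                          encoding="utf-8", xml_declaration=True)
```

Writing such a record at each run step is what makes synchronous database filling possible: the archive ingests the XML as soon as it appears rather than waiting for the experiment to finish.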
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. Members of the engineering team are meeting in the Launch Control Center to review data and possible troubleshooting plans for the liquid hydrogen tank low-level fuel cut-off sensor. At left is John Muratore, manager of Systems Engineering and Integration for the Space Shuttle Program; Ed Mango, JSC deputy manager of the orbiter project office; and Carol Scott, KSC Integration Manager. The sensor failed a routine prelaunch check during the launch countdown July 13, causing mission managers to scrub Discovery's first launch attempt. The sensor protects the Shuttle's main engines by triggering their shutdown in the event fuel runs unexpectedly low. The sensor is one of four inside the liquid hydrogen section of the External Tank (ET).
2014-05-21
CAPE CANAVERAL, Fla. – Jon Olansen, Morpheus project manager, speaks to members of the media inside a facility near the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. Behind Olansen is the Project Morpheus prototype lander. Project Morpheus tests NASA’s autonomous landing and hazard avoidance technology, or ALHAT, sensors and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Frankie Martin
2014-05-21
CAPE CANAVERAL, Fla. – Chirold Epp, the Autonomous Landing and Hazard Avoidance Technology, or ALHAT, project manager, speaks to members of the media inside a facility near the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. Behind Epp is the Project Morpheus prototype lander. Project Morpheus tests NASA’s ALHAT sensors and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Frankie Martin
Data base management system analysis and performance testing with respect to NASA requirements
NASA Technical Reports Server (NTRS)
Martin, E. A.; Sylto, R. V.; Gough, T. L.; Huston, H. A.; Morone, J. J.
1981-01-01
Several candidate Data Base Management Systems (DBMSs) that could support the NASA End-to-End Data System's Integrated Data Base Management System (IDBMS) Project, later rescoped and renamed the Packet Management System (PMS), were evaluated. The candidate systems, which had to run on the Digital Equipment Corporation VAX 11/780 computer system, were ORACLE, SEED and RIM. ORACLE and RIM are both based on the relational data base model, while SEED employs a CODASYL network approach. A single data base application, which managed stratospheric temperature profiles, was studied. The primary reasons for using this application were an insufficient volume of available PMS-like data, a mandate to use actual rather than simulated data, and the abundance of available temperature profile data.
2014-05-21
CAPE CANAVERAL, Fla. – From left behind the reporter in the white shirt, Chirold Epp, the Autonomous Landing and Hazard Avoidance Technology, or ALHAT, project manager, Jon Olansen, Morpheus project manager, and Greg Gaddis, Morpheus/ALHAT site director, speak to members of the media near the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. Media also viewed Morpheus inside a facility near the landing facility. Project Morpheus tests NASA’s ALHAT and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Frankie Martin
Resource Management for Real-Time Adaptive Agents
NASA Technical Reports Server (NTRS)
Welch, Lonnie; Chelberg, David; Pfarr, Barbara; Fleeman, David; Parrott, David; Tan, Zhen-Yu; Jain, Shikha; Drews, Frank; Bruggeman, Carl; Shuler, Chris
2003-01-01
Increased autonomy and automation in onboard flight systems offer numerous potential benefits, including cost reduction and greater flexibility. The existence of generic mechanisms for automation is critical for handling unanticipated science events and anomalies, where limitations in traditional control software with fixed, predetermined algorithms can mean loss of science data and missed opportunities for observing important terrestrial events. We have developed such a mechanism by adding a Hierarchical Agent-based ReaLTime technology (HART) extension to our Dynamic Resource Management (DRM) middleware. Traditional DRM provides mechanisms to monitor the real-time performance of distributed applications and to move applications among processors to improve real-time performance. In the HART project we have designed and implemented a performance adaptation mechanism to improve real-time performance. To use this mechanism, applications are developed that can run at various levels of quality. The DRM can choose a setting for the quality level of an application dynamically at run-time in order to manage satellite resource usage more effectively. A ground-based prototype of a satellite system that captures and processes images has also been developed as part of this project, to be used as a benchmark for evaluating the resource management framework. A significant enhancement of this generic, mission-independent framework allows scientists to specify the utility, or "scientific benefit," of science observations under various conditions such as cloud cover and compression method. The resource manager then uses these benefit tables to determine in real-time how to set the quality levels for applications to maximize overall system utility as defined by the scientists running the mission. We also show how maintenance functions like health and safety data can be integrated into the utility framework. Once this framework has been certified for missions and successfully flight tested, it can be reused with little development overhead for other missions. In contrast, current space missions like Swift manage similar resource trade-offs entirely within the scientific application code itself, and such code must be re-certified and tested for each mission even if a large portion of the code base is shared. This final report discusses some of the major issues motivating this research effort, provides a literature review of the related work, discusses the resource management framework and ground-based satellite system prototype that has been developed, indicates what work is yet to be performed, and provides a list of publications resulting from this work.
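The benefit-table mechanism can be illustrated with a toy optimizer that picks one quality level per application subject to a CPU budget. The tables and the exhaustive search below are assumptions for illustration, not the actual DRM/HART algorithm.

```python
# Sketch: choose quality levels to maximize total scientific benefit
# under a CPU budget (illustrative tables; brute-force search).
from itertools import product

# benefit_table[app] = list of (cpu_share, benefit) per quality level
benefit_table = {
    "image_capture": [(0.2, 3), (0.4, 6), (0.6, 8)],
    "compression":   [(0.1, 2), (0.3, 5)],
    "health_safety": [(0.1, 4)],   # maintenance task folded into the same framework
}

cpu_budget = 0.9
apps = list(benefit_table)

best = None
for choice in product(*(range(len(benefit_table[a])) for a in apps)):
    cpu = sum(benefit_table[a][q][0] for a, q in zip(apps, choice))
    benefit = sum(benefit_table[a][q][1] for a, q in zip(apps, choice))
    if cpu <= cpu_budget and (best is None or benefit > best[0]):
        best = (benefit, dict(zip(apps, choice)))

print("Chosen quality levels:", best[1], "total benefit:", best[0])
```

Note how the health-and-safety task is simply another row in the tables, which is the point made above about folding maintenance functions into the utility framework; a flight implementation would replace the brute-force search with something bounded in time.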
The Portland Harbor Superfund Site Sustainability Project: Introduction.
Fitzpatrick, Anne G; Apitz, Sabine E; Harrison, David; Ruffle, Betsy; Edwards, Deborah A
2018-01-01
This article introduces the Portland Harbor Superfund Site Sustainability Project (PHSP) special series in this issue. The Portland Harbor Superfund Site is one of the "mega-sediment sites" in the United States, comprising about 10 miles of the Lower Willamette River, running through the heart of Portland, Oregon. The primary aim of the PHSP was to conduct a comprehensive sustainability assessment, integrating environmental, economic, and social considerations of a selection of the remedial alternatives laid out by the US Environmental Protection Agency. A range of tools were developed for this project to quantitatively address environmental, economic, and social costs and benefits based upon diverse stakeholder values. In parallel, a probabilistic risk assessment was carried out to evaluate the risk assumptions at the core of the remedial investigation and feasibility study process. Integr Environ Assess Manag 2018;14:17-21. © 2017 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC).
Integration of EGA secure data access into Galaxy.
Hoogstrate, Youri; Zhang, Chao; Senf, Alexander; Bijlard, Jochem; Hiltemann, Saskia; van Enckevort, David; Repo, Susanna; Heringa, Jaap; Jenster, Guido; J A Fijneman, Remond; Boiten, Jan-Willem; A Meijer, Gerrit; Stubbs, Andrew; Rambla, Jordi; Spalding, Dylan; Abeln, Sanne
2016-01-01
High-throughput molecular profiling techniques routinely generate vast amounts of data for translational medicine studies. Secure, access-controlled systems are needed to manage, store, transfer and distribute these data due to their personally identifiable nature. The European Genome-phenome Archive (EGA) was created to facilitate access to, and management of, long-term archives of bio-molecular data. Each data provider is responsible for ensuring a Data Access Committee is in place to grant access to data stored in the EGA, and the transfer of data during upload and download is encrypted. ELIXIR, a European research infrastructure for life-science data, initiated a project (2016 Human Data Implementation Study) to understand and document the ELIXIR requirements for secure management of controlled-access data. As part of this project, a full ecosystem was designed to connect archived raw experimental molecular profiling data with interpreted data and the computational workflows, using the CTMM Translational Research IT (CTMM-TraIT) infrastructure http://www.ctmm-trait.nl as an example. Here we present the first outcomes of this project: a framework to enable the download of EGA data to a Galaxy server in a secure way. Galaxy provides an intuitive user interface for molecular biologists and bioinformaticians to run and design data analysis workflows. More specifically, we developed a tool, ega_download_streamer, that can download data securely from EGA into a Galaxy server, where the data can subsequently be processed further. This tool allows a user, within the browser, to run an entire analysis containing sensitive data from EGA, and to make this analysis available to other researchers in a reproducible manner, as shown with a proof-of-concept study. The tool ega_download_streamer is available in the Galaxy tool shed: https://toolshed.g2.bx.psu.edu/view/yhoogstrate/ega_download_streamer.
NASA Technical Reports Server (NTRS)
Gill, Roger; Schnase, John L.
2012-01-01
The Invasive Species Forecasting System (ISFS) is an online decision support system that allows users to load point-occurrence field sample data for a plant species of interest and quickly generate habitat suitability maps for geographic regions of interest, such as a national park, monument, forest, or refuge. Target customers for ISFS are natural resource managers and decision makers who need scientifically valid, model-based predictions of the habitat suitability of plant species of management concern. In a joint project involving NASA and the Maryland Department of Natural Resources, ISFS has been used to model the potential distribution of Wavyleaf Basketgrass in Maryland's Chesapeake Bay Watershed. Maximum entropy techniques are used to generate predictive maps from predictor datasets derived from remotely sensed data and climate simulation outputs. The workflow to run a model is implemented as an iRODS microservice using a custom ISFS file driver that clips and re-projects data to geographic regions of interest, then shells out to perform MaxEnt processing on the input data. When the model completes, all output files and maps from the model run are registered in iRODS and made accessible to the user. The ISFS user interface is a web browser that uses the iRODS PHP client to interact with the ISFS/iRODS server. ISFS is designed to reside in a VMware virtual machine running SLES 11 and iRODS 3.0. The ISFS virtual machine is hosted in a VMware vSphere private cloud infrastructure to deliver the online service.
An overview of the DII-HEP OpenStack based CMS data analysis
NASA Astrophysics Data System (ADS)
Osmani, L.; Tarkoma, S.; Eerola, P.; Komu, M.; Kortelainen, M. J.; Kraemer, O.; Lindén, T.; Toor, S.; White, J.
2015-05-01
An OpenStack-based private cloud with the Cluster File System has been built and used for both CMS analysis and Monte Carlo simulation jobs in the Datacenter Indirection Infrastructure for Secure High Energy Physics (DII-HEP) project. On the cloud we run the ARC middleware, which allows CMS applications to run without changes on the job submission side. Our test results indicate that the adopted approach provides a scalable and resilient solution for managing resources without compromising performance and high availability. To manage the virtual machines (VMs) dynamically in an elastic fashion, we are testing the EMI authorization service (Argus) and the Execution Environment Service (Argus-EES), for which an OpenStack plugin has been developed. The Host Identity Protocol (HIP) was designed for mobile networks and provides a secure method for IP multihoming. HIP separates the end-point identifier and locator roles of the IP address, which increases network availability for applications. Our solution leverages HIP for traffic management. This presentation gives an update on the status of the work and the lessons learned in creating an OpenStack-based cloud for HEP.
NASA Astrophysics Data System (ADS)
Wang, J.; Yin, H.; Chung, F.
2008-12-01
While population growth, future land use change, and the desire for better environmental preservation and protection are adding pressure on water resources management in California, California faces the extra challenge of addressing potential climate change impacts on water supply and demand. Climate-change concerns for water facilities planning and flood control include modified precipitation patterns and changes in snow levels and runoff patterns due to increased air temperatures. Although long-term climate projections are largely uncertain, there appears to be strong consistency in predicting the warming trend of future surface temperature and the resulting shift in the seasonal patterns of runoff. Projected changes in precipitation (wetting or drying), which control annual runoff, are far less certain. This paper attempts to separate the effects of the warming trend from the effects of the precipitation trend on water planning, especially in California, where reservoir operations are more sensitive to seasonal patterns of runoff than to total annual runoff. The water resources systems planning model CALSIM2 is used to evaluate climate change impacts on water resource management in California. Rather than directly ingesting estimated streamflows from climate model projections into CALSIM2, a three-step perturbation-ratio method is proposed to introduce climate change impacts into the planning model. First, the monthly perturbation ratio of projected monthly inflow to simulated historical monthly inflow is applied to the observed historical monthly inflow to generate climate change inflows to major dams and reservoirs. Second, to isolate the effects of the warming trend on water resources, an annual inflow adjustment is applied to the inflows generated in step one to preserve the volume of the observed annual inflow. Third, to re-introduce the effects of the precipitation trend on water resources, an additional inflow trend adjustment is applied to the adjusted climate change inflow. Accordingly, three CALSIM2 experiments are implemented: (1) a base run with the observed historical inflow (1921 to 2003); (2) a sensitivity run with the climate change inflow adjusted through the annual inflow adjustment; and (3) a sensitivity run with the climate change inflow adjusted through both the annual inflow adjustment and the inflow trend adjustment. To account for the variability of climate models in projecting future climates, the uncertainty in future emission scenarios, and differences between projection periods, estimated inflows from 6 climate models for 2 emission scenarios (A2 and B1) and two projection periods (2030-2059 and 2070-2099) are included in the CALSIM model experiments.
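The arithmetic of the three-step method can be sketched for a single inflow site as follows. The twelve monthly values are invented and only the structure of the adjustments is illustrated; the paper's actual implementation operates on the full multi-site, multi-decade record.

```python
# Hedged sketch of the three-step perturbation-ratio method (one site,
# one representative year; values are invented).
import numpy as np

obs_hist   = np.array([50, 60, 90, 120, 150, 130, 80, 50, 40, 35, 40, 45.])
sim_hist   = np.array([55, 65, 85, 115, 145, 125, 85, 55, 45, 40, 45, 50.])
sim_future = np.array([70, 80, 95, 100, 110, 100, 70, 45, 40, 38, 50, 60.])

# Step 1: apply monthly perturbation ratios to the observed record.
cc_inflow = obs_hist * (sim_future / sim_hist)

# Step 2: rescale so annual volume matches observation, isolating the
# warming-driven shift in seasonal timing from any precipitation trend.
warming_only = cc_inflow * (obs_hist.sum() / cc_inflow.sum())

# Step 3: re-introduce the projected annual (precipitation) trend.
trend = sim_future.sum() / sim_hist.sum()
warming_plus_trend = warming_only * trend

print("Annual volumes:", obs_hist.sum(),
      round(warming_only.sum(), 1), round(warming_plus_trend.sum(), 1))
```

The three arrays produced here correspond one-to-one with the inflows driving the three CALSIM2 experiments described above.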
ERIC Educational Resources Information Center
Field, Anne
2011-01-01
Strengthening after-school programming for city youngsters has long been an objective of The Wallace Foundation, a national philanthropy based in New York City. In its work over the years, Wallace has found that weak financial management of the nonprofits running many high-quality programs hampers their ability to improve and expand. In 2009,…
Support for the Naval Research Laboratory Environmental Passive Microwave Remote Sensing Program.
1983-04-29
L. H. Gesell, Project Manager. ABSTRACT: This document summarizes the data acquisition, reduction, and ... film camera, and other environmental sensors. CSC gradually assumed the bulk of the responsibility for operating this equipment. This included running ... radiometers, and setting up and operating the strip-film camera and other environmental sensors. Also of significant importance to the missions was
Center for Plasma Edge Simulation (CPES) -- Rutgers University Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parashar, Manish
2014-03-06
The CPES scientific simulations run at scale on leadership-class machines, collaborate at runtime, and produce and exchange large volumes of data, which presents multiple I/O and data management challenges. During the CPES project, the Rutgers team worked with the rest of the CPES team to address these challenges at different levels, specifically (1) at the data transport and communication level through the DART (Decoupled and Asynchronous Remote Data Transfers) framework, and (2) at the data management and services level through the DataSpaces and ActiveSpaces frameworks. These frameworks and their impact are briefly described.
The determination of measures of software reliability
NASA Technical Reports Server (NTRS)
Maxwell, F. D.; Corn, B. C.
1978-01-01
Measurement of software reliability was carried out during the development of data base software for a multi-sensor tracking system. The failure ratio and failure rate were found to be consistent measures. Trend lines could be established from these measurements that provide good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.
Reliability measurement during software development. [for a multisensor tracking system
NASA Technical Reports Server (NTRS)
Hecht, H.; Sturm, W. A.; Trattner, S.
1977-01-01
During the development of data base software for a multi-sensor tracking system, reliability was measured. The failure ratio and failure rate were found to be consistent measures. Trend lines were established from these measurements that provided good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.
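Under the common definitions (failure ratio as failed runs over total runs, failure rate as failures per unit of execution time), the two measures named in the reports above can be computed from a run log as sketched below. The definitions are assumptions on my part, and the log entries are invented; the point is only that both measures fall out of simple bookkeeping over run submissions.

```python
# Hedged sketch: failure ratio and failure rate from a run log.
runs = [
    # (module, failed?, cpu_hours)
    ("tracker_db", True,  0.5),
    ("tracker_db", False, 1.2),
    ("tracker_db", False, 0.8),
    ("ingest",     True,  0.3),
    ("ingest",     False, 0.7),
]

failures = sum(1 for _, failed, _ in runs if failed)
failure_ratio = failures / len(runs)
failure_rate = failures / sum(hours for _, _, hours in runs)

print(f"failure ratio = {failure_ratio:.2f} per run")
print(f"failure rate  = {failure_rate:.2f} per CPU-hour")
```

Tracking these per module, as in the log above, is what makes the per-module trend lines described in the reports possible.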
Chimbari, Moses John
2017-11-01
Ecohealth projects are designed to garner ownership among all stakeholders, such as researchers, communities, local leadership and policy makers. Ideally, designs should ensure that implementation goes smoothly and that findings from studies benefit the stakeholders, particularly by bringing changes to the communities researched. Paradoxically, the process is fraught with challenges associated with implementation. Notwithstanding these challenges, evidence from projects implemented in southern Africa justifies the need to invest in the subject of ecohealth. This paper describes and discusses a principal investigator's experience of leading ecohealth projects in Zimbabwe between 2002 and 2005, in Botswana between 2010 and 2014, and in South Africa (ongoing). The discourse is centred on issues of project management and leadership, transdisciplinarity, students' involvement, data management, community engagement, dissemination of research findings, and the role of institutions in project management and implementation. The paper concludes that the ecohealth approach is valuable and should be encouraged, and makes the following recommendations: (1) principal investigators must have a good understanding of socio-ecological systems and have excellent project management and writing skills; (2) more than one PI should be involved in the day-to-day running of the project, to avoid disruption of project activities in the event that the PI leaves the project before it ends; (3) researchers should be trained in ecohealth principles and methodologies at the time of building the research teams; (4) full proposals should be developed with the active participation of communities and stakeholders in order to develop a shared vision; (5) the involvement of postdoctoral fellows and dedicated researchers alongside postgraduate students should be encouraged, to avoid situations where some objectives are not fully addressed because of the narrow scope of students' work; and (6) citizen science should be encouraged, to empower communities and ensure that certain activities continue after project termination. Copyright © 2016 Elsevier B.V. All rights reserved.
The DEVELOP Program as a Unique Applied Science Internship
NASA Astrophysics Data System (ADS)
Skiles, J. W.; Schmidt, C. L.; Ruiz, M. L.; Cawthorn, J.
2004-12-01
The NASA mission includes "Inspiring the next generation of explorers" and "Understanding and protecting our home planet". DEVELOP students conduct research projects in Earth Systems Science, gaining valuable training and work experience that support accomplishing this mission. This presentation describes the DEVELOP Program, a NASA human capital development initiative, which is student-run and student-led, with NASA scientists serving as mentors. DEVELOP began in 1998 at NASA's Langley Research Center in Virginia and expanded to NASA's Stennis Space Center in Mississippi and Marshall Space Flight Center in Alabama in 2002. NASA's Ames Research Center in California began DEVELOP activity in 2003. DEVELOP is a year-round activity. High school through graduate school students participate in DEVELOP, with backgrounds encompassing a wide variety of academic majors such as engineering, biology, physics, mathematics, computer science, remote sensing, geographic information systems, business, and geography. DEVELOP projects are initiated when county, state, or tribal governments submit a proposal requesting that students work on local projects. When a project is selected, science mentors guide students in the application of NASA applied science and technology to enhance decision support tools for customers. Partnerships are established with customers, professional organizations, and state and federal agencies in order to leverage the resources needed to complete research projects. Student teams are assigned a project and are responsible for creating an inclusive project plan covering the design and approach of the study, the timeline, and the deliverables for the customer. Project results can consist of student papers, both team and individually written, face-to-face meetings and seminars with customers, presentations at national meetings in the form of posters and oral papers, displays at the Western and Southern Governors' Associations, and visualizations produced by the students. Projects have included Homeland Security in Virginia, Energy Management in New Mexico, Water Management in Mississippi, Air Quality Management in Alabama, Invasive Species mapping in Nevada, Public Health risk assessment in California, Disaster Management in Oklahoma, Agricultural Efficiency in South Dakota, Coastal Management in Louisiana, and Carbon Management in Oregon. DEVELOP students gain experience in applied science, computer technology, and project management. Several DEVELOP projects will be demonstrated and discussed during this presentation. DEVELOP is sponsored by the Applications Division of NASA's Science Mission Directorate.
Data management for support of the Oregon Transect Ecosystem Research (OTTER) project
NASA Technical Reports Server (NTRS)
Skiles, J. W.; Angelici, Gary L.
1993-01-01
Management of data collected during projects that involve large numbers of scientists is an often overlooked aspect of the experimental plan. Ecosystem science projects like the Oregon Transect Ecosystem Research (OTTER) Project that involve many investigators from many institutions and that run for multiple years collect and archive large amounts of data. These data range in size from a few kilobytes of information for such measurements as canopy chemistry and meteorological variables, to hundreds of megabytes of information for such items as views from multi-band spectrometers flown on aircraft and scenes from imaging radiometers aboard satellites. Organizing and storing data from the OTTER Project, certifying those data, correcting errors in data sets, validating the data, and distributing those data to other OTTER investigators is a major undertaking. Using the National Aeronautics and Space Administration's (NASA) Pilot Land Data System (PLDS), a support mechanism was established for the OTTER Project which accomplished all of the above. At the onset of the interaction between PLDS and OTTER, it was not certain that PLDS could accomplish these tasks in a manner that would aid researchers in the OTTER Project. This paper documents the data types that were collected under the auspices of the OTTER Project and the procedures implemented to store, catalog, validate, and certify those data. The issues of the compliance of investigators with data-management requirements, data use and certification, and the ease of retrieving data are discussed. We advance the hypothesis that formal data management is necessary in ecological investigations involving multiple investigators using many data-gathering instruments and experimental procedures. The issues and experience gained in this exercise give an indication of the needs for data management systems that must be addressed in the coming decades when other large data-gathering endeavors are undertaken by the ecological science community.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-17
DEPARTMENT OF ENERGY, Federal Energy Regulatory Commission [Project No. 13876-000]: an application under the Federal Power Act (FPA) proposing to study the feasibility of the South Run Pumped Storage project. Applicant Contact: Daniel R. Irvin, Free Flow Power Corporation, 33 Commercial Street...
Developing a virtual engineering management community
NASA Astrophysics Data System (ADS)
Hewitt, Bill; Kidd, Moray; Smith, Robin; Wearne, Stephen
2016-03-01
The paper reviews the lessons of planning and running an Engineering Management practitioner development programme in a partnership between BP and the University of Manchester. This distance-learning programme is for professional engineers in mid-career experienced in the engineering and support activities for delivering safe, compliant and reliable projects and operations worldwide. The programme concentrates on the why and how of leadership and judgement in managing the engineering of large and small projects and operational support. Two intensive residential weeks are combined with a virtual learning environment over one year. Assessed assignments between and after the residential weeks provide opportunities for individual reflective learning for each delegate through applying concepts and the lessons of case studies to their experience, current challenges and expected responsibilities. This successful partnership between a major global company and a university rich in research and teaching required a significant dedication of intellectual and leadership effort by all concerned. The rewards for both parties and most importantly for the engineers themselves are extensive.
Oncology data management in the UK--BODMA's view. British Oncology Data Managers Association.
Riley, D.; Ward, L.; Young, T.
1994-01-01
Over the past 10 years, the original partnership of clinician and statistician for the running of clinical research projects, especially clinical trials, has come to be supplemented by the data manager and trial coordinator. Increasing numbers of such personnel are now being employed, covering a wide diversity of work areas, including clinical research, medical audit and the cancer registries. The British Oncology Data Managers Association (BODMA) was founded in 1987 and is now in a good position to review the current status of data management in the UK. It is proposed that a national network of data managers and trial coordinators within specialist trials centres, oncology departments and district general hospitals, with a good training programme, plus a recognised career structure, is the way to make the best use of this key resource. BODMA is addressing many of these issues and aims to improve and maintain the quality of data management. PMID:8080719
Valente, R; Cambiaso, F; Santori, G; Ghirelli, R; Gianelli, A; Valente, U
2004-04-01
In Italy, health-care telematics is funded and supported at the level of national government or regional institutions. In 1999, the Italian Ministry of Health started to fund the Liguria-Trento Transplant Network (LTTN) project, a health research project with the aim of building an information system for donor management and transplantation activity in a macroregional area. At the time of the LTTN project proposal, no published transplant network information system fulfilled Italian rules on telematic management of electronic documentation concerning transplantation activity. Partners in the LTTN project were two Regional Transplant Coordinating Centres, the Nord Italia Transplant Interregional Coordinating Centre, and the Italian Institute of Health/National Transplant Coordinating Centre. Total Quality Management methods were adopted for the project. Technological and case analysis followed ANSI-HL7, CEN-TC251, and Object-Oriented Software Engineering standards. A low-tech prototype powered by a web access relational database is running on a transplant network including web-based clients located in 17 intensive care units, in the Nord Italia Transplant Interregional Coordinating Centre, and at the Italian Institute of Health/National Transplant Coordinating Centre. The LTTN registry includes pretransplant, surgical, and posttransplant phases regarding liver, kidney, pancreas, and kidney-pancreas transplantation in adult and pediatric recipients. Clinical specifications were prioritized in agreement with the RAND/UCLA appropriateness method. Further implementation will include formal rules for data access and output release, fault tolerance, and a continuous registry evolution plan.
NASA Technical Reports Server (NTRS)
Wright, Michael R.
1999-01-01
With over two dozen missions since the first in 1986, the Hitchhiker project has a reputation for providing quick-reaction, low-cost flight services for Shuttle Small Payloads Project (SSPP) customers. Despite the successes, several potential improvements in customer payload integration and test (I&T) deserve consideration. This paper presents suggestions to Hitchhiker customers on how to help make the I&T process run more smoothly. Included are: customer requirements and interface definition, pre-integration test and evaluation, configuration management, I&T overview and planning, problem mitigation, and organizational communication. In this era of limited flight opportunities and new ISO-based requirements, issues such as these have become more important than ever.
Recent developments in user-job management with Ganga
NASA Astrophysics Data System (ADS)
Currie, R.; Elmsheuser, J.; Fay, R.; Owen, P. H.; Richards, A.; Slater, M.; Sutcliffe, W.; Williams, M.
2015-12-01
The Ganga project was originally developed for use by the LHC experiments and has been used extensively throughout Run 1 in both LHCb and ATLAS. This document describes some of the most recent developments within the Ganga project. There have been improvements in the handling of large-scale computational tasks in the form of a new GangaTasks infrastructure. Improvements in file handling through a new IGangaFile interface make file handling largely transparent to the end user. In addition, the performance and usability of Ganga have both been addressed through the development of a new queues system that allows parallel processing of job-related tasks.
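To make the user-facing model concrete, the following is a minimal sketch of defining and submitting a job, and of using the queues system, inside an interactive Ganga session (where Job, Executable, Local, LocalFile and queues are predefined by Ganga); the executable, input file name and backend choice are illustrative assumptions, not project defaults.

    # Sketch of user-level job handling in a Ganga session (illustrative only).
    j = Job(name='demo')
    j.application = Executable(exe='echo', args=['hello'])
    j.backend = Local()                        # a grid or batch backend in production
    j.inputfiles = [LocalFile('input.txt')]    # IGangaFile-style file object
    j.submit()

    # The queues system runs user-supplied functions on parallel worker threads.
    def report(job):
        print(job.id, job.status)
    queues.add(report, args=(j,))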
NASA Astrophysics Data System (ADS)
Angius, S.; Bisegni, C.; Ciuffetti, P.; Di Pirro, G.; Foggetta, L. G.; Galletti, F.; Gargana, R.; Gioscio, E.; Maselli, D.; Mazzitelli, G.; Michelotti, A.; Orrù, R.; Pistoni, M.; Spagnoli, F.; Spigone, D.; Stecchi, A.; Tonto, T.; Tota, M. A.; Catani, L.; Di Giulio, C.; Salina, G.; Buzzi, P.; Checcucci, B.; Lubrano, P.; Piccini, M.; Fattibene, E.; Michelotto, M.; Cavallaro, S. R.; Diana, B. F.; Enrico, F.; Pulvirenti, S.
2016-01-01
This paper presents the !CHAOS open source project, aimed at developing a prototype of a national private Cloud Computing infrastructure devoted to accelerator control systems and large experiments of High Energy Physics (HEP). The !CHAOS project has been financed by MIUR (Italian Ministry of Research and Education) and aims to develop a new concept of control system and data acquisition framework by providing, with a high level of abstraction, all the services needed for controlling and managing a large scientific, or non-scientific, infrastructure. A beta version of the !CHAOS infrastructure will be released at the end of December 2015 and will run on private Cloud infrastructures based on OpenStack.
2014-12-11
CAPE CANAVERAL, Fla. – NASA Project Morpheus prototype lander is being lifted by crane during preparations for free flight test number 15 at the north end of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. The lander will take off from the ground over a flame trench and use its autonomous landing and hazard avoidance technology, or ALHAT sensors, to survey the hazard field to determine safe landing sites. Project Morpheus tests NASA’s ALHAT and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Jim Grossmann
2014-12-10
CAPE CANAVERAL, Fla. – NASA's Project Morpheus prototype lander is being transported to the north end of the Shuttle Landing Facility for free flight test number 15 at NASA’s Kennedy Space Center in Florida. The lander will take off from the ground over a flame trench and use its autonomous landing and hazard avoidance technology, or ALHAT sensors, to survey the hazard field to determine safe landing sites. Project Morpheus tests NASA’s ALHAT and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Jim Grossmann
2014-12-10
CAPE CANAVERAL, Fla. – Engineers and technicians prepare NASA's Project Morpheus prototype lander for free flight test number 15 on a launch pad at the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. The lander will take off from the ground over a flame trench and use its autonomous landing and hazard avoidance technology, or ALHAT sensors, to survey the hazard field to determine safe landing sites. Project Morpheus tests NASA’s ALHAT and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Jim Grossmann
2014-12-11
CAPE CANAVERAL, Fla. – Engineers and technicians prepare NASA's Project Morpheus prototype lander for free flight test number 15 at the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. The lander will take off from the ground over a flame trench and use its autonomous landing and hazard avoidance technology, or ALHAT sensors, to survey the hazard field to determine safe landing sites. Project Morpheus tests NASA’s ALHAT and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Jim Grossmann
2014-12-11
CAPE CANAVERAL, Fla. – NASA's Project Morpheus prototype lander is prepared for transport to the north end of the Shuttle Landing Facility for free flight test number 15 at NASA’s Kennedy Space Center in Florida. The lander will take off from the ground over a flame trench and use its autonomous landing and hazard avoidance technology, or ALHAT sensors, to survey the hazard field to determine safe landing sites. Project Morpheus tests NASA’s ALHAT and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Jim Grossmann
2014-12-10
CAPE CANAVERAL, Fla. – NASA Project Morpheus prototype lander and support equipment are being transported to the north end of the Shuttle Landing Facility for free flight test number 15 at NASA’s Kennedy Space Center in Florida. The lander will take off from the ground over a flame trench and use its autonomous landing and hazard avoidance technology, or ALHAT sensors, to survey the hazard field to determine safe landing sites. Project Morpheus tests NASA’s ALHAT and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Jim Grossmann
Simulation environment and graphical visualization environment: a COPD use-case
2014-01-01
Background Today, many different tools are developed to execute and visualize physiological models that represent the human physiology. Most of these tools run models written in very specific programming languages which in turn simplify the communication among models. Nevertheless, not all of these tools are able to run models written in different programming languages. In addition, interoperability between such models remains an unresolved issue. Results In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and second the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims at helping and supporting bio-researchers and medical students understand the internal mechanisms of the human body through the use of physiological models. This tool is composed of a graphical visualization environment, which is a web interface through which the user can interact with the models, and a simulation workflow management system composed of a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. This simulation environment has been validated with the integration of three models: two deterministic, i.e. based on linear and differential equations, and one probabilistic, i.e., based on probability theory. These models have been selected based on the disease under study in this project, i.e., chronic obstructive pulmonary disease. Conclusion It has been proved that the simulation environment presented here allows the user to research and study the internal mechanisms of the human physiology by the use of models via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use cases scenarios. PMID:25471327
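As an illustration of the architecture described above, here is a minimal sketch (not the Synergy-COPD code) of a control module that runs heterogeneous model executables as external processes and exchanges parameters through a simple in-memory data warehouse; all names, parameters and executables are invented for illustration.

    # Illustrative sketch: a control module shuttling parameters between models.
    import json, subprocess

    class DataWarehouse:
        """Holds shared parameters flowing between models."""
        def __init__(self):
            self.store = {}
        def put(self, key, value):
            self.store[key] = value
        def get(self, key):
            return self.store[key]

    def run_model(command, params):
        """Run one model executable, passing parameters as JSON on stdin."""
        out = subprocess.run(command, input=json.dumps(params).encode(),
                             capture_output=True, check=True)
        return json.loads(out.stdout)

    dw = DataWarehouse()
    dw.put('FIO2', 0.21)  # hypothetical shared parameter
    # './oxygen_transport_model' stands in for a model written in another language
    result = run_model(['./oxygen_transport_model'], {'FIO2': dw.get('FIO2')})
    dw.put('SaO2', result['SaO2'])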
Translating visions into realities.
Nesje, Arne
2006-08-01
The overall vision and the building concept. The overall vision, with individual buildings that form focal points for the related medical treatment, may seem to increase both investment and operational cost, especially in the period until the total hospital is finished (2014). The slogan "Better services at lower cost" is probably a vision that will prove hard to fulfil. But the patients will probably be the long-term winners, with single rooms with bathrooms, high standards of service, good architecture and a pleasant environment. The challenge will be to get the necessary funding for running the hospital. The planning process and project management. Many interviewees indicate how difficult it is to combine many functions and requirements in one building concept. Different architectural, technical, functional and economic interests will often cause conflict. The project organisation HBMN was organised outside the administration of both STOLAV and HMN. A closer connection and better co-operation with STOLAV might have resulted in more influence from the medical employees. It is probably fair to anticipate that the medical employees would have felt more ownership of the process and thus been more satisfied with the concept and the result. On the other hand, organising the project outside the hospital administration may have contributed to better control and more professional management of the construction project. The management of planning and building (technical programme, environmental programme, aesthetic programme). The need for control on site was probably underestimated. For STOLAV's technical department (TD) the building process has been time-consuming, involving giving support, making controls and preparing the take-over phase. But during this process they have become better trained to run and operate the new centres. The commissioning phase has been a challenging time. There were generally more changes, supplementations and claims than anticipated. The investment costs are nearly on budget, but the concept will have a negative effect on the running costs of the hospital. The budgets for running both the old and the new hospital until phase 2 is finished have not been calculated properly, and this can reduce the level of ambition in the further process. The moving-in phase was somewhat postponed for some of the centres; only the laboratory centre moved in on time. It was realised immediately that the building services had not been tested and adjusted properly. In addition it was acknowledged that there should have been a better training programme for the new organisation before moving in. This situation was extremely challenging for the medical employees. With experience from building phase 1, HBMN decided to change the contract model for building phase 2 to a partnering process. That means higher involvement of suppliers and subcontractors, with a goal of improving both quality and efficiency. Greater involvement of all participants is needed to share the risk but also to reduce costs.
2011-08-01
The approved Statement of Work proposed a project timeline (Table 1: Timeline for approved project). The running feet tested for this project included the 1E90 Sprinter (OttoBock Inc.), Flex-Run (Ossur), Cheetah (Ossur) and Nitro Running Foot (Freedom...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-17
The Enloe Project would include new and upgraded access roads (...5 miles) and would operate automatically in a run-of-river mode, implementing agency-recommended ramping rates downstream of the project; run-of-river operation would minimize effects on geology and soils, water quality, and aquatic resources.
[Administration of the "Healthy School" project].
Bjegović, V; Zivković, M; Marinković, J; Vuković, D; Legetić, B
1996-01-01
The term project management is commonly used to describe the work of a team that is handling a special program. In this type of management, a form of leadership is used which creates an environment enabling fast movement of participants through different work phases towards the common aims [1-4]. The "Healthy School" Project, launched in almost all European countries, has been taking place in Yugoslavia since the end of 1991 [5]. Within the country, the project was designed as a health promotion and education intervention study in primary schools. A network of 13 schools at 11 locations, representing typical economic, cultural and social environments, was established to cover the country. Although the proposed methodological approach from WHO was followed [6], the specific situation in the country (economic crisis, break-up of the Yugoslav Federation, the war and the international blockade) dictated particular modifications. The management of the Healthy School Project in general, and in Yugoslavia particularly, is based upon a project management structure (Scheme 1). The objective of this research was to assess the Healthy School project management in Yugoslavia by measuring causal, intervening and output variables. In the process of assessing management in general, three groups of criteria are commonly used: (a) causal (those that influence the course of developments in the Project), (b) intervening (representing the current condition of the internal state of the Project), and (c) output (those that reflect the Project achievements). (a) For the purpose of this study the causal criteria were measured by analyzing the overall management strategy and the level of agreement in objectives of the Project itself, the Project Coordinators and main participants in the Project. (b) The intervening criteria used in this assessment were: the time spent on different project activities, the estimate of the severity of the problems in different aspects of project management, the level of personal influence on different aspects of Project development and overall work motivation and satisfaction of all participants. (c) The outcomes of the given management attempts were analyzed by the following output variables: the number of different types of meetings held, the number of seminars, mass media presentations and articles, the amount of money raised and the number of questionnaires administered. A triangulation method was used to gather the data: (1) direct observation, (2) four types of questionnaires and (3) project reports and documentation. Four types of specially designed questionnaires were used to examine four groups of participants (Project Coordinators, School Project Managers, Directors and Project Co-operators). The questionnaires differed in the questions concerning examinees' project tasks and types of external communication, while the questions referring to personal characteristics and general features of the project (goals, common jobs, participation in decision making, motivation and satisfaction) were the same for all groups. The average age of the project participants was 45.5 years, ranging from 25 to 60. The oldest group was the group of School Directors, while the youngest were School Co-operators. The project has been run mostly by women, while men were predominantly represented in the group of School Directors. The teaching occupation was represented by 61.8%, the rest being health professionals, mostly of preventive orientation.
The analysis and classification of participants' goals verifies that the personal goals of all participants correspond with the main Project goals. Certain groups also have some additional motives which support their successful and effective movement towards the overall Project goals. The largest problem in all groups appears to be in the field of financing the Project activities (Figure 1). (ABSTRACT TRUNCATED)
Stefan, Teodora Cristina; Elharar, Nicole; Garcia, Guadalupe
2018-05-01
Parkinson disease (PD) is a progressive, debilitating neurodegenerative disease that often requires complex pharmacologic treatment regimens. Prior to this clinic, there was no involvement of a clinical pharmacy specialist (CPS) in the outpatient neurology clinic at the West Palm Beach Veterans Affairs Medical Center. This was a prospective, quality-improvement project to develop a clinical pharmacist-run neurology telephone clinic and evaluate pharmacologic and nonpharmacologic interventions in an effort to improve the quality of care for patients with PD. Additionally, the CPS conducted medication education groups for 24 patients with PD and their caregivers, if applicable, at this medical center with the purpose of promoting patient knowledge and medication awareness. Medication management was performed via telephone rather than face-to-face. Only patients with a concomitant mental health diagnosis for which they were receiving at least one psychotropic medication were included for individual visits, due to the established scope of practice of the CPS being limited to mental health and primary care medications. Data collection included patient and clinic demographics as well as pharmacologic and nonpharmacologic interventions made for patients enrolled from January 6, 2017, through March 31, 2017. A total of 49 pharmacologic and nonpharmacologic interventions were made for 10 patients. We successfully implemented and evaluated a clinical pharmacist-run neurology telephone clinic for patients with PD. Expansion of this clinic to patients with various neurological disorders may improve access to care using an innovative method of medication management expertise by a CPS.
Coordinated scheduling for dynamic real-time systems
NASA Technical Reports Server (NTRS)
Natarajan, Swaminathan; Zhao, Wei
1994-01-01
In this project, we addressed issues in coordinated scheduling for dynamic real-time systems. In particular, we concentrated on the design and implementation of a new distributed real-time system called R-Shell. The design objective of R-Shell is to provide computing support for space programs that have large, complex, fault-tolerant distributed real-time applications. In R-Shell, the approach is based on the concept of scheduling agents, which reside in the application run-time environment and are customized to provide just those resource management functions which are needed by the specific application. With this approach, we avoid the need for a sophisticated OS which provides a variety of generalized functionality, while still not burdening application programmers with heavy responsibility for resource management. In this report, we discuss the R-Shell approach, summarize the achievements of the project, and describe a preliminary prototype of the R-Shell system.
The systematic evolution of a NASA software technology, Appendix C
NASA Technical Reports Server (NTRS)
Deregt, M. P.; Dulfer, J. E.
1972-01-01
A long-range program is described whose ultimate purpose is to make possible the production of software in NASA within predictable schedule and budget constraints and with major characteristics such as size, run-time, and correctness predictable within reasonable tolerances. As part of the program, a pilot NASA computer center will be chosen to apply software development and management techniques systematically and determine a set which is effective. The techniques will be developed by a Technology Group, which will guide the pilot project and be responsible for its success. The application of the technology will involve a sequence of NASA programming tasks graduated from simpler ones at first to complex systems in late phases of the project. The evaluation of the technology will be made by monitoring the operation of the software at the users' installations. In this way a coherent discipline for software design, production, maintenance, and management will be evolved.
GIS embedded hydrological modeling: the SID&GRID project
NASA Astrophysics Data System (ADS)
Borsi, I.; Rossetto, R.; Schifani, C.
2012-04-01
The SID&GRID research project, started April 2010 and funded by Regione Toscana (Italy) under the POR FSE 2007-2013, aims to develop a Decision Support System (DSS) for water resource management and planning based on open source and public domain solutions. In order to quantitatively assess water availability in space and time and to support the planning decision processes, the SID&GRID solution consists of hydrological models (coupling existing and newly developed 3D surface-water, groundwater and unsaturated-zone modeling codes) embedded in a GIS interface, applications and libraries, where all input and output data are managed by means of a DataBase Management System (DBMS). A graphical user interface (GUI) to manage, analyze and run the SID&GRID hydrological models, based on the open source gvSIG GIS framework (Asociación gvSIG, 2011), and a Spatial Data Infrastructure to share and interoperate with distributed geographical data are being developed. Such a GUI is conceived as a "master control panel" able to guide the user from pre-processing spatial and temporal data, to running the hydrological models, to analyzing the outputs. To achieve the above-mentioned goals, the following codes have been selected and are being integrated: 1. Postgresql/PostGIS (PostGIS, 2011) for the Geo Database Management System; 2. gvSIG with Sextante (Olaya, 2011) geo-algorithm library capabilities and GRASS tools (GRASS Development Team, 2011) for the desktop GIS; 3. Geoserver and Geonetwork to share and discover spatial data on the web according to the Open Geospatial Consortium; 4. new tools based on the Sextante GeoAlgorithm framework; 5. the MODFLOW-2005 (Harbaugh, 2005) groundwater modeling code; 6. MODFLOW-LGR (Mehl and Hill 2005) for local grid refinement; 7. VSF (Thoms et al., 2006) for the variably saturated flow component; 8. newly developed routines for overland flow; 9. new algorithms in Jython integrated in gvSIG to compute the net rainfall rate reaching the soil surface, as input for the unsaturated/saturated flow model. At this stage of the research (which will end April 2013), two primary components of the master control panel are being developed: i. a SID&GRID toolbar integrated into the gvSIG map context; ii. a new Sextante set of geo-algorithms to pre- and post-process the spatial data to run the hydrological models. The groundwater part of the code has been fully integrated and tested and 3D visualization tools are being developed. The LGR capability has been extended to the 3D solution of the Richards' equation in order to solve in detail the unsaturated zone where required. To be updated about the project, please follow us at the website: http://ut11.isti.cnr.it/SIDGRID/
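To illustrate the kind of per-cell computation item 9 above refers to, here is a minimal sketch of a net-rainfall calculation; the project's actual algorithms run as Jython/Sextante geo-algorithms inside gvSIG, and the formula, variable names and sample values below are assumptions for illustration only.

    # Illustrative sketch: net rainfall reaching the soil surface per grid cell.
    def net_rainfall(precip_mm, interception_mm, evapotranspiration_mm, runoff_coeff):
        """Net rate reaching the soil for one cell and one time step (mm)."""
        effective = max(precip_mm - interception_mm - evapotranspiration_mm, 0.0)
        return effective * (1.0 - runoff_coeff)  # remainder leaves as overland flow

    cells = [{'P': 12.0, 'I': 1.5, 'ET': 2.0, 'C': 0.3}]   # hypothetical inputs
    recharge = [net_rainfall(c['P'], c['I'], c['ET'], c['C']) for c in cells]
    print(recharge)   # input to the unsaturated/saturated flow model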
Porting and refurbishment of the WSS TNG control software
NASA Astrophysics Data System (ADS)
Caproni, Alessandro; Zacchei, Andrea; Vuerli, Claudio; Pucillo, Mauro
2004-09-01
The Workstation Software System (WSS) is the high-level control software of the Italian Galileo Galilei Telescope, located on La Palma in the Canary Islands, developed in the early 1990s for HP-UX workstations. WSS may be seen as a middle-layer software system that manages the communications between the real-time systems (VME), different workstations and high-level applications, providing a uniform distributed environment. The project to port the control software from the HP workstations to the Linux environment started at the end of 2001. It is aimed at refurbishing the control software by introducing some of the new software technologies and languages available for free in the Linux operating system. The project was realized by gradually substituting each HP workstation with a Linux PC, with the goal of avoiding major changes in the original software running under HP-UX. Three main phases characterized the project: creation of a simulated control room with several Linux PCs running WSS (to check all the functionality); insertion in the simulated control room of some HPs (to check the mixed environment); substitution of the HP workstations in the real control room. From a software point of view, the project introduces some new technologies, like multi-threading, and the possibility to develop high-level WSS applications with almost every programming language that implements the Berkeley sockets. A library to develop Java applications has also been created and tested.
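Since the abstract notes that any language implementing Berkeley sockets can host a high-level WSS application, here is a minimal socket-client sketch; the host name, port and one-line text command are invented placeholders, not the actual WSS wire protocol.

    # Minimal Berkeley-sockets client sketch (illustrative; not the WSS protocol).
    import socket

    with socket.create_connection(('tng-wss-host', 5000)) as s:   # hypothetical endpoint
        s.sendall(b'GET TELESCOPE.STATUS\n')                      # hypothetical command
        reply = s.recv(4096)
        print(reply.decode())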
[Information technology for the management of health care data: the EPIweb project].
Vittorini, Pierpaolo; Necozione, Stefano; di Orio, Ferdinando
2005-01-01
In the US, the Centers for Disease Control and Prevention has fostered the spread of computer science technologies in order to achieve better and more efficient management of health care data. In this context, the present paper proposes a discussion of a web-based information system called EPIweb. This system allows researchers to select the centers for data entry, collect and process health care data, produce technical reports and discuss results. The system aims to be easy to use, fully configurable and particularly suitable for the management of multicenter studies. The paper presents the EPIweb features, proposes a sample system run, and concludes with a discussion of both the advantages and the possible improvements and extensions.
System Enhancements for Mechanical Inspection Processes
NASA Technical Reports Server (NTRS)
Hawkins, Myers IV
2011-01-01
Quality inspection of parts is a major component of any project that requires hardware implementation. Keeping track of all of the inspection jobs is essential to having a smooth-running process. Using HTML, the programming language ColdFusion, and the MySQL database, I created a web-based job management system for the 170 Mechanical Inspection Group that will replace the Microsoft Access-based management system. This will improve the ways inspectors and the people awaiting inspection view and keep track of hardware as it moves through the inspection process. In the end, the management system should be able to insert jobs into a queue, place jobs in and out of a bonded state, pre-release bonded jobs, and close out inspection jobs.
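A minimal sketch of the job queue and bond states described above, with Python's sqlite3 standing in for the system's actual ColdFusion/MySQL stack; the table layout, state names and part number are assumptions for illustration.

    # Illustrative sketch of an inspection job queue with bond states.
    import sqlite3

    db = sqlite3.connect(':memory:')
    db.execute("""CREATE TABLE jobs (
                    id INTEGER PRIMARY KEY,
                    part TEXT,
                    state TEXT CHECK(state IN
                      ('queued', 'bonded', 'pre-released', 'closed')))""")

    def insert_job(part):                # insert a job into the queue
        db.execute("INSERT INTO jobs (part, state) VALUES (?, 'queued')", (part,))

    def set_state(job_id, state):        # bond, pre-release or close out a job
        db.execute("UPDATE jobs SET state = ? WHERE id = ?", (state, job_id))

    insert_job('bracket-170A')           # hypothetical part number
    set_state(1, 'bonded')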
Resource Tracking Model Updates and Trade Studies
NASA Technical Reports Server (NTRS)
Chambliss, Joe; Stambaugh, Imelda; Moore, Michael
2016-01-01
The Resource Tracking Model has been updated to capture system manager and project manager inputs. Both the Trick/General Use Nodal Network Solver Resource Tracking Model (RTM) simulator and the RTM mass balance spreadsheet have been revised to address inputs from system managers and to refine the way mass balance is illustrated. The revisions to the RTM included the addition of a Plasma Pyrolysis Assembly (PPA) to recover hydrogen from Sabatier Reactor methane, which was vented in the prior version of the RTM. The effect of the PPA on the overall balance of resources in an exploration vehicle is illustrated by the increased recycling of vehicle oxygen. Case studies have been run to show the relative effect of performance changes on vehicle resources.
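A back-of-envelope sketch (not the RTM itself) of why adding a PPA increases resource recycle: the Sabatier reactor consumes CO2 + 4H2 -> CH4 + 2H2O, and a PPA nominally recovering hydrogen via 2CH4 -> C2H2 + 3H2 returns about 1.5 mol H2 per mol of methane that would otherwise be vented. The recovery efficiency below is an assumed placeholder.

    # Sketch of the hydrogen recovery fraction a PPA adds to a Sabatier loop.
    def h2_recovered(co2_moles, ppa_efficiency=0.9):
        ch4 = co2_moles                         # 1 mol CH4 per mol CO2 reduced
        h2_in = 4.0 * co2_moles                 # H2 fed to the Sabatier reactor
        h2_back = 1.5 * ch4 * ppa_efficiency    # H2 returned by the PPA
        return h2_back / h2_in                  # fraction of Sabatier H2 recovered

    print(f"{h2_recovered(1.0):.0%} of Sabatier hydrogen recovered")  # ~34%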
Current research on aviation weather (bibliography)
NASA Technical Reports Server (NTRS)
Durham, D. E.; Frost, W.
1978-01-01
This bibliography of 326 readily usable references on basic and applied research programs related to the various areas of aviation meteorology was assembled. A literature search was conducted which surveyed the major abstract publications, such as the International Aerospace Abstracts, the Meteorological and Geoastrophysical Abstracts, and the Scientific and Technical Aerospace Reports. In addition, NASA and DOT computer literature searches were run, and NASA, NOAA, and FAA research project managers were asked to provide write-ups on their ongoing research.
Use of software engineering techniques in the design of the ALEPH data acquisition system
NASA Astrophysics Data System (ADS)
Charity, T.; McClatchey, R.; Harvey, J.
1987-08-01
The SASD methodology is being used to provide a rigorous design framework for various components of the ALEPH data acquisition system. The Entity-Relationship data model is used to describe the layout and configuration of the control and acquisition systems and detector components. State Transition Diagrams are used to specify control applications such as run control and resource management, and Data Flow Diagrams assist in decomposing software tasks and defining interfaces between processes. These techniques encourage rigorous software design leading to enhanced functionality and reliability. Improved documentation and communication ensure continuity over the system life-cycle and simplify project management.
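To show how a State Transition Diagram of the kind mentioned above maps directly to code, here is a minimal run-control state machine sketch; the states and events are invented for illustration, not the actual ALEPH design.

    # Illustrative run-control state machine driven by a transition table.
    TRANSITIONS = {
        ('idle',       'configure'): 'configured',
        ('configured', 'start_run'): 'running',
        ('running',    'pause'):     'paused',
        ('paused',     'resume'):    'running',
        ('running',    'stop_run'):  'configured',
    }

    class RunControl:
        def __init__(self):
            self.state = 'idle'
        def handle(self, event):
            key = (self.state, event)
            if key not in TRANSITIONS:
                raise ValueError(f'{event!r} not allowed in state {self.state!r}')
            self.state = TRANSITIONS[key]

    rc = RunControl()
    for e in ('configure', 'start_run', 'stop_run'):
        rc.handle(e)
    print(rc.state)   # -> 'configured'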
Job Priorities on Peregrine | High-Performance Computing | NREL
Jobs are charged against the project allocation when run with qos=high. Requesting a node reservation for work that requires dedicated resources helps the scheduler more efficiently plan resources for larger jobs. When projects reach their allocation limit, jobs associated with those projects will run at very low priority, which will ensure that these jobs run only when the resources would otherwise go unused.
Developing a Long-term Monitoring Program with Undergraduate Students in Marine Sciences
NASA Astrophysics Data System (ADS)
Anders, T. M.; Boryta, M. D.
2015-12-01
A goal of our growing marine geoscience program at Mt. San Antonio College is to involve our students in all stages of developing and running an undergraduate research project. During the initial planning phase, students develop and test their proposals. Instructor-set parameters were chosen carefully to help guide students toward manageable projects but not to limit their creativity. Projects should focus on long-term monitoring of a coastal area in southern California. During the second phase, incoming students will critique the initial proposals, modify them as necessary and continue to develop the project. We intend for data collection opportunities to grow from geological and oceanographic bases to eventually include other STEM topics in biology, chemistry, math and GIS. Questions we will address include: What makes this a good research project for a community college? What are the costs and time commitments involved? How will the project benefit students and society? Additionally, we will share our initial results, challenges, and unexpected pitfalls and benefits.
Climate Change Impacts on US Agriculture and the Benefits of Greenhouse Gas Mitigation
NASA Astrophysics Data System (ADS)
Monier, E.; Sue Wing, I.; Stern, A.
2014-12-01
As contributors to the US EPA's Climate Impacts and Risk Assessment (CIRA) project, we present empirically based projections of climate change impacts on the yields of five major US crops. Our analysis uses a 15-member ensemble of climate simulations using the MIT Integrated Global System Model (IGSM) linked to the NCAR Community Atmosphere Model (CAM), forced by 3 emissions scenarios (a "business as usual" reference scenario and two stabilization scenarios at 4.5 W/m2 and 3.7 W/m2 by 2100), to quantify the agricultural impacts avoided due to greenhouse gas emission reductions. Our innovation is the coupling of climate model outputs with empirical estimates of the long-run relationship between crop yields and temperature, precipitation and soil moisture, derived from the co-variation between yields and weather across US counties over the last 50 years. Our identifying assumption is that since farmers' planting, management and harvesting decisions are based on land quality and expectations of weather, yields and meteorological variables share a long-run equilibrium relationship. In any given year, weather shocks cause yields to diverge from their expected long-run values, prompting farmers to revise their long-run expectations. We specify a dynamic panel error correction model (ECM) that statistically distinguishes these two processes. The ECM is estimated for maize, wheat, soybeans, sorghum and cotton using longitudinal data on production and harvested area for ~1,100 counties from 1948-2010, in conjunction with spatial fields of 3-hourly temperature, precipitation and soil moisture from the Global Land Data Assimilation System (GLDAS) forcing and output files, binned into annual counts of exposure over the growing season and mapped to county centroids. For scenarios of future warming, the identical method was used to calculate counties' current (1986-2010) and future (2036-65 and 2086-2110) distributions of simulated 3-hourly growing season temperature, precipitation and soil moisture. Finally, we combine these variables with the fitted long-run response to obtain county-level yields under current average climate and projected future climate under our three warming scenarios. We close our presentation with a discussion of the implications for mitigation and adaptation decisions.
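For readers unfamiliar with the model class, here is a minimal two-step error-correction sketch of the general form described above (a long-run equilibrium regression plus short-run adjustment with a lagged error-correction term); the synthetic data and single regressor are placeholders, not the paper's county-panel specification.

    # Minimal two-step ECM sketch with synthetic data (illustrative only).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = np.cumsum(rng.normal(size=200))      # e.g. a weather index
    y = 0.5 * x + rng.normal(size=200)       # e.g. log yield

    # Step 1: long-run equilibrium y ~ x; residuals are the error-correction term.
    long_run = sm.OLS(y, sm.add_constant(x)).fit()
    ect = long_run.resid

    # Step 2: short-run dynamics in differences, with the lagged ECT.
    dy, dx = np.diff(y), np.diff(x)
    X = sm.add_constant(np.column_stack([dx, ect[:-1]]))
    short_run = sm.OLS(dy, X).fit()
    print(short_run.params)   # constant, short-run effect, speed of adjustment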
HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.
2017-10-01
PanDA, the Production and Distributed Analysis Workload Management System, has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available to bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when run on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run in a distributed computing environment powered by PanDA. To run the pipeline we split the input files into chunks which are processed separately on different nodes as separate PALEOMIX inputs, and finally merge the output files; this is very similar to what ATLAS does to process and simulate data. We dramatically decreased the total walltime thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce payload execution time for mammoth DNA samples from weeks to days.
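The split/run/merge pattern described above can be sketched as follows, with Python's multiprocessing standing in for PanDA's job submission and brokering; the chunking scheme, the placeholder "pipeline" function and the sample data are all invented for illustration.

    # Scatter-gather sketch of the split/run/merge workflow (illustrative only).
    from multiprocessing import Pool

    def split(reads, n_chunks):
        """Partition input reads into roughly equal chunks."""
        k = max(len(reads) // n_chunks, 1)
        return [reads[i:i + k] for i in range(0, len(reads), k)]

    def run_pipeline(chunk):
        """Stand-in for running PALEOMIX on one chunk on one worker node."""
        return [r.upper() for r in chunk]    # placeholder "processing"

    if __name__ == '__main__':
        reads = ['acgt', 'ttga', 'ccat', 'gatc']    # placeholder sequence data
        with Pool(processes=2) as pool:
            partial = pool.map(run_pipeline, split(reads, 2))
        merged = [r for part in partial for r in part]    # final merge step
        print(merged)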
Advances in Grid Computing for the Fabric for Frontier Experiments Project at Fermilab
NASA Astrophysics Data System (ADS)
Herner, K.; Alba Hernandez, A. F.; Bhat, S.; Box, D.; Boyd, J.; Di Benedetto, V.; Ding, P.; Dykstra, D.; Fattoruso, M.; Garzoglio, G.; Kirby, M.; Kreymer, A.; Levshina, T.; Mazzacane, A.; Mengel, M.; Mhashilkar, P.; Podstavkov, V.; Retzke, K.; Sharma, N.; Teheran, J.
2017-10-01
The Fabric for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back-end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana, to help experiments manage their large-scale production workflows. Managing these workflows in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and to support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called Distributed Computing Access with Federated Identities (DCAFI) has been put in place that has eliminated our dependence on a Fermilab-specific third-party Certificate Authority service and better accommodates FIFE collaborators without a Fermilab Kerberos account. DCAFI integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and a MyProxy service using a new general purpose open source tool. We will discuss the general FIFE onboarding strategy, progress in expanding FIFE experiments' presence on the Open Science Grid, new tools for job monitoring, the POMS service, and the DCAFI project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
SADE is a software package for rapidly assembling analytic pipelines to manipulate data. The package consists of an engine that manages the data and coordinates the movement of data between the tasks performing a function; a set of core libraries consisting of plugins that perform common tasks; and a framework to extend the system, supporting the development of new plugins. Currently, through configuration files, a pipeline can be defined that maps the routing of data through a series of plugins. Pipelines can be run in a batch mode or can process streaming data; they can be executed from the command line or run through a Windows background service. There currently exist over a hundred plugins and over fifty pipeline configurations, and the software is now being used by about a half-dozen projects.
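As an illustration of the configuration-driven routing just described, here is a tiny pipeline-engine sketch; the plugin names, the config format and the record handling are invented, not SADE's actual interfaces.

    # Illustrative sketch: a config maps data through a series of plugins.
    PLUGINS = {
        'strip': lambda rec: rec.strip(),
        'upper': lambda rec: rec.upper(),
        'tag':   lambda rec: f'[OK] {rec}',
    }

    def run_pipeline(config, records):
        """Route each record through the plugins named in the config."""
        for rec in records:
            for name in config['pipeline']:
                rec = PLUGINS[name](rec)
            yield rec

    config = {'pipeline': ['strip', 'upper', 'tag']}   # hypothetical config file
    for out in run_pipeline(config, ['  alpha ', ' beta']):
        print(out)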
GridPP - Preparing for LHC Run 2 and the Wider Context
NASA Astrophysics Data System (ADS)
Coles, Jeremy
2015-12-01
This paper elaborates upon the operational status and directions within the UK Computing for Particle Physics (GridPP) project as it approaches LHC Run 2. It details the pressures that have been gradually reshaping the deployed hardware and middleware environments at GridPP sites - from the increasing adoption of larger multicore nodes to the move towards alternative batch systems and clouds - as well as changes being driven by funding considerations. The paper highlights work being done with non-LHC communities and describes some of the early outcomes of adopting a generic DIRAC-based job submission and management framework. The paper presents results from an analysis of how GridPP effort is distributed across various deployment and operations tasks and how this may be used to target further improvements in efficiency.
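For context on the DIRAC-based framework mentioned above, here is a minimal job-submission sketch using DIRAC's public Python job API (call names follow the DIRAC documentation and may vary by version); the job name and executable are placeholders.

    # Minimal DIRAC-style job submission sketch (illustrative only).
    from DIRAC.Core.Base.Script import parseCommandLine
    parseCommandLine()                       # initialise the DIRAC client environment
    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    j = Job()
    j.setName('gridpp-demo')                 # placeholder job name
    j.setExecutable('/bin/echo', arguments='hello grid')
    result = Dirac().submitJob(j)
    print(result)                            # e.g. {'OK': True, 'Value': <job id>}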
2014-12-11
CAPE CANAVERAL, Fla. – Engineers and controllers in a mobile control room prepare for flight number 15 of NASA's Project Morpheus prototype lander at the north end of the Shuttle Landing Facility, or SLF, at NASA’s Kennedy Space Center in Florida. The lander will take off from the ground over a flame trench and use its autonomous landing and hazard avoidance technology, or ALHAT sensors, to survey the hazard field to determine safe landing sites. Project Morpheus tests NASA’s ALHAT and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Jim Grossmann
2014-12-11
CAPE CANAVERAL, Fla. – Engineers and technicians prepare the launch pad for NASA's Project Morpheus prototype lander at the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. Morpheus is being prepared for free flight test number 15 at the SLF. The lander will take off from the ground over a flame trench and use its autonomous landing and hazard avoidance technology, or ALHAT sensors, to survey the hazard field to determine safe landing sites. Project Morpheus tests NASA’s ALHAT and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Jim Grossmann
2014-12-10
CAPE CANAVERAL, Fla. – NASA's Project Morpheus prototype lander is being transported from a hangar at the Shuttle Landing Facility, or SLF, for free flight test number 15 at the north end of the SLF at NASA’s Kennedy Space Center in Florida. The lander will take off from the ground over a flame trench and use its autonomous landing and hazard avoidance technology, or ALHAT sensors, to survey the hazard field to determine safe landing sites. Project Morpheus tests NASA’s ALHAT and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Jim Grossmann
2014-12-10
CAPE CANAVERAL, Fla. – NASA's Project Morpheus prototype lander is being lowered by crane onto a launch pad at the north end of the Shuttle Landing Facility in preparation for free flight test number 15 at NASA’s Kennedy Space Center in Florida. The lander will take off from the ground over a flame trench and use its autonomous landing and hazard avoidance technology, or ALHAT sensors, to survey the hazard field to determine safe landing sites. Project Morpheus tests NASA’s ALHAT and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Jim Grossmann
2014-12-11
CAPE CANAVERAL, Fla. – Engineers and technicians prepare NASA's Project Morpheus prototype lander for free flight test number 15 on a launch pad at the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. Morpheus is being lowered by crane onto the launch pad. The lander will take off from the ground over a flame trench and use its autonomous landing and hazard avoidance technology, or ALHAT sensors, to survey the hazard field to determine safe landing sites. Project Morpheus tests NASA’s ALHAT and an engine that runs on liquid oxygen and methane, which are green propellants. These new capabilities could be used in future efforts to deliver cargo to planetary surfaces. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Jim Grossmann
The current status and portability of our sequence handling software.
Staden, R
1986-01-01
I describe the current status of our sequence analysis software. The package contains a comprehensive suite of programs for managing large shotgun sequencing projects, a program containing 61 functions for analysing single sequences and a program for comparing pairs of sequences for similarity. The programs that have been described before have been improved by the addition of new functions and by being made very much easier to use. The major interactive programs have 125 pages of online help available from within them. Several new programs are described, including screen editing of aligned gel readings for shotgun sequencing projects, a method to highlight errors in aligned gel readings, and new methods for searching for putative signals in sequences. We use the programs on a VAX computer, but the whole package has been rewritten to make it easy to transport to other machines. I believe the programs will now run on any machine with a FORTRAN77 compiler and sufficient memory. We are currently porting the programs to an IBM PC XT/AT and another micro running under UNIX. PMID:3511446
Biosensors for EVA: Muscle Oxygen and pH During Walking, Running and Simulated Reduced Gravity
NASA Technical Reports Server (NTRS)
Lee, S. M. C.; Ellerby, G.; Scott, P.; Stroud, L.; Norcross, J.; Pesholov, B.; Zou, F.; Gernhardt, M.; Soller, B.
2009-01-01
During lunar excursions in the EVA suit, real-time measurement of metabolic rate is required to manage consumables and guide activities to ensure safe return to the base. Metabolic rate, or oxygen consumption (VO2), is normally measured from pulmonary parameters but cannot be determined with standard techniques in the oxygen-rich environment of a spacesuit. Our group developed novel near-infrared spectroscopic (NIRS) methods to calculate muscle oxygen saturation (SmO2), hematocrit, and pH, and we recently demonstrated that we can use our NIRS sensor to measure VO2 on the leg during cycling. Our NSBRI-funded project extends this methodology to activities more representative of EVA, such as walking and running, and seeks to better understand the factors that determine the metabolic cost of exercise in both normal and lunar gravity. Our four-year project specifically addresses ExMC risk 4.18, lack of adequate biomedical monitoring capability for Constellation EVA suits, and the EPSP risk of compromised EVA performance and crew health due to inadequate EVA suit systems.
NASA Astrophysics Data System (ADS)
Garcia-Cuerva, Laura; Berglund, Emily Zechman; Rivers, Louie
2018-04-01
Increasing urbanization augments impervious surface area, which results in increased runoff volumes and peak flows. Green Infrastructure (GI) approaches are a decentralized alternative for sustainable urban stormwater management; they provide an array of ecosystem services and foster community building by enhancing neighborhood aesthetics, increasing property value, and providing shared green spaces. While projects involving sustainability concepts and environmental design are favored in privileged communities, marginalized communities have historically been located in areas that suffer from environmental degradation. Underprivileged communities typically do not receive as many social and environmental services as advantaged communities. This research explores GI-based management strategies, evaluated at the watershed scale, to improve hydrological performance by mitigating stormwater runoff volumes and peak flows. GI deployment strategies are developed to address environmental justice issues by prioritizing placement in communities that are underprivileged and locations with high outreach potential. A hydrologic/hydraulic stormwater model is developed using the Storm Water Management Model (SWMM 5.1) to simulate the impacts of alternative management strategies. Management scenarios include the implementation of rainwater harvesting in private households, the decentralized implementation of bioretention cells in private households, the centralized implementation of bioretention cells in municipally owned vacant land, and combinations of those strategies. Realities of implementing GI on private and public lands are taken into account to simulate various levels of coverage and routing for bioretention cell scenarios. The effects of these strategies are measured by the volumetric reduction of runoff and the reduction in peak flow; social benefits are not evaluated. This approach is applied in an underprivileged community within the Walnut Creek Watershed in Raleigh, North Carolina.
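A minimal sketch of how such scenario comparisons can be scripted against SWMM is shown below, using the pyswmm wrapper. The input file names and the outfall node ID are hypothetical placeholders, and a real study would likely post-process SWMM's own report output rather than accumulate a single node's inflow.

```python
# Sketch: run several GI scenarios through SWMM and compare peak flow and
# runoff volume at the watershed outfall. File names and node ID are assumed.
from pyswmm import Simulation, Nodes

SCENARIOS = {
    "baseline": "watershed_baseline.inp",
    "rainwater_harvesting": "watershed_rwh.inp",
    "decentralized_bioretention": "watershed_bio_private.inp",
    "centralized_bioretention": "watershed_bio_vacant.inp",
}

def peak_and_volume(inp_file, outfall_id="OUTFALL1"):
    """Run one SWMM scenario; return peak flow and total runoff volume."""
    peak, volume = 0.0, 0.0
    with Simulation(inp_file) as sim:
        outfall = Nodes(sim)[outfall_id]
        last_time = sim.start_time
        for _ in sim:                          # step through the simulation
            dt = (sim.current_time - last_time).total_seconds()
            volume += outfall.total_inflow * dt  # flow * seconds ~ volume
            peak = max(peak, outfall.total_inflow)
            last_time = sim.current_time
    return peak, volume

for name, inp in SCENARIOS.items():
    p, v = peak_and_volume(inp)
    print(f"{name}: peak flow {p:.2f}, runoff volume {v:.0f}")
```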
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crossno, Patricia J.; Gittinger, Jaxon; Hunt, Warren L.
Slycat™ is a web-based system for performing data analysis and visualization of potentially large quantities of remote, high-dimensional data. Slycat™ specializes in working with ensemble data. An ensemble is a group of related data sets, which typically consists of a set of simulation runs exploring the same problem space. An ensemble can be thought of as a set of samples within a multi-variate domain, where each sample is a vector whose value defines a point in high-dimensional space. To understand and describe the underlying problem being modeled in the simulations, ensemble analysis looks for shared behaviors and common features across the group of runs. Additionally, ensemble analysis tries to quantify differences found in any members that deviate from the rest of the group. The Slycat™ system integrates data management, scalable analysis, and visualization. Results are viewed remotely on a user’s desktop via commodity web clients using a multi-tiered hierarchy of computation and data storage, as shown in Figure 1. Our goal is to operate on data as close to the source as possible, thereby reducing time and storage costs associated with data movement. Consequently, we are working to develop parallel analysis capabilities that operate on High Performance Computing (HPC) platforms, to explore approaches for reducing data size, and to implement strategies for staging computation across the Slycat™ hierarchy. Within Slycat™, data and visual analysis are organized around projects, which are shared by a project team. Project members are explicitly added, each with a designated set of permissions. Although users sign in to access Slycat™, individual accounts are not maintained. Instead, authentication is used to determine project access. Within projects, Slycat™ models capture analysis results and enable data exploration through various visual representations. Although for scientists each simulation run is a model of real-world phenomena given certain conditions, we use the term model to refer to our modeling of the ensemble data, not the physics. Different model types often provide complementary perspectives on data features when analyzing the same data set. Each model visualizes data at several levels of abstraction, allowing the user to range from viewing the ensemble holistically to accessing numeric parameter values for a single run. Bookmarks provide a mechanism for sharing results, enabling interesting model states to be labeled and saved.
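The core ensemble-analysis idea — summarize shared behavior across runs and flag members that deviate from the group — can be illustrated in a few lines of NumPy. This is an illustrative sketch, not Slycat™ code; the 50×8 ensemble is synthetic.

```python
# Sketch: treat each simulation run as a vector of output features,
# summarize the shared behavior of the ensemble, and flag deviating runs.
import numpy as np

rng = np.random.default_rng(0)
ensemble = rng.normal(size=(50, 8))   # 50 runs x 8 output features
ensemble[3] += 4.0                    # plant one deviating member

mean = ensemble.mean(axis=0)          # shared behavior across the group
std = ensemble.std(axis=0, ddof=1)
z = (ensemble - mean) / std           # per-feature deviation of each run
score = np.abs(z).max(axis=1)         # worst-case deviation per run

outliers = np.flatnonzero(score > 3.0)
print("runs deviating from the ensemble:", outliers)  # expected to include 3
```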
NASA Astrophysics Data System (ADS)
Baker, B.; Ferschweiler, K.; Bachelet, D. M.; Sleeter, B. M.
2016-12-01
California's geographic location, topographic complexity and latitudinal climatic gradient give rise to great biological and ecological diversity. However, increased land use pressure, altered seasonal weather patterns, and changes in temperature and precipitation regimes are having pronounced effects on ecosystems and the multitude of services they provide for an increasing population. As a result, natural resource managers are faced with formidable challenges to maintain these critical services. The goals of this project were to better understand how projected 21st century climate and land-use change scenarios may alter ecosystem dynamics, the spatial distribution of various vegetation types and land-use patterns, and to provide a coarse scale "triage map" of where land managers may want to concentrate efforts to reduce ecological stress in order to mitigate the potential impacts of a changing climate. We used the MC2 dynamic global vegetation model and the LUCAS state-and-transition simulation model to simulate the potential effects of future climate and land-use change on ecological processes for the state of California. Historical climate data were obtained from the PRISM dataset and nine CMIP5 climate models were run for the RCP 8.5 scenario. Climate projections were combined with a business-as-usual land-use scenario based on local-scale land use histories. For ease of discussion, results from five simulation runs (historic, hot-dry, hot-wet, warm-dry, and warm-wet) are presented. Results showed large changes in the extent of urban and agricultural lands. In addition, several simulated potential vegetation types persisted in situ under all four future scenarios, although alterations in total area, total ecosystem carbon, and forest vigor (NPP/LAI) were noted. As might be expected, the majority of the forested types that persisted occurred on public lands. However, more than 78% of the simulated subtropical mixed forest and 26% of temperate evergreen needleleaf forest types persisted on private lands under all four future scenarios. Results suggest that building collaborations across management borders could be a valuable tool to guide natural resource management actions into the future.
Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager
Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.
2012-01-01
GENIE is a model-independent suite of programs that can be used to generally distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executor, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
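The queue/distribute/retrieve pattern such a TCP/IP run manager implements can be sketched in a few lines. This is illustrative only, not GENIE's actual protocol; the model command lines are made up. A run executor on another machine would connect, receive a command, execute the model, and send back a result summary.

```python
# Sketch of a TCP/IP run manager: queue model runs, hand each connecting
# worker the next run, and retrieve its result over the same connection.
import socket
from queue import Queue

runs = Queue()
for i in range(4):
    runs.put(f"model.exe case_{i}.in")    # hypothetical model command lines

server = socket.create_server(("0.0.0.0", 5050))
while not runs.empty():
    conn, addr = server.accept()          # a run executor asks for work
    with conn:
        cmd = runs.get()
        conn.sendall(cmd.encode())        # distribute the run
        result = conn.recv(4096)          # retrieve the run's output summary
        print(f"{addr[0]} finished '{cmd}': {result.decode()!r}")
server.close()
```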
2007-06-26
Stennis Space Center engineers are preparing to conduct water tests on an updated version of the scissors duct component of the J-2X engine. Measuring about 2 feet long and about 8 inches in diameter, the duct on the J-2X predecessor, the J-2, connected its fuel turbo pumps to the flight vehicle's upper stage run tanks. According to NASA's J-2X project manager at SSC, Gary Benton, the water tests should establish the limits of the duct's ability to withstand vibration.
SUPERTANK Laboratory Data Collection Project. Volume 1. Main Text
1994-01-01
…gauge electronics housing and sensing wire were pre-drilled with 1/8-in. (3.2-mm) diameter holes spaced every 2 in. (5.08 cm). This support rod… Wave gauges and current meters were sampled at 16 Hz and other instruments were sampled at 10 Hz; shorter runs gave data files that were manageable in… Chapter 1, Introduction to SUPERTANK. Figure 1-1: Wide-area view of LWT channel and control room during SUPERTANK (capacitance wave gauges in foreground).
Open Software Tools Applied to Jordan's National Multi-Agent Water Management Model
NASA Astrophysics Data System (ADS)
Knox, Stephen; Meier, Philipp; Harou, Julien; Yoon, Jim; Selby, Philip; Lachaut, Thibaut; Klassert, Christian; Avisse, Nicolas; Khadem, Majed; Tilmant, Amaury; Gorelick, Steven
2016-04-01
Jordan is the fourth most water scarce country in the world, where demand exceeds supply in a politically and demographically unstable context. The Jordan Water Project (JWP) aims to perform policy evaluation by modelling the hydrology, economics, and governance of Jordan's water resource system. The multidisciplinary nature of the project requires a modelling software system capable of integrating submodels from multiple disciplines into a single decision making process and communicating results to stakeholders. This requires a tool for building an integrated model and a system where diverse data sets can be managed and visualised. The integrated Jordan model is built using Pynsim, an open-source multi-agent simulation framework implemented in Python. Pynsim operates on network structures of nodes and links and supports institutional hierarchies, where an institution represents a grouping of nodes, links or other institutions. At each time step, code within each node, link and institution can be executed independently, allowing for their fully autonomous behaviour. Additionally, engines (sub-models) perform actions over the entire network or on a subset of the network, such as taking a decision on a set of nodes. Pynsim is modular in design, allowing distinct modules to be modified easily without affecting others. Data management and visualisation is performed using Hydra (www.hydraplatform.org), an open software platform allowing users to manage network structure and data. The Hydra data manager connects to Pynsim, providing the necessary input parameters for the integrated model. By providing a high-level portal to the model, Hydra removes a barrier between the users of the model (researchers, stakeholders, planners, etc.) and the model itself, allowing them to manage data, run the model and visualise results all through a single user interface. Pynsim's ability to represent institutional hierarchies and inter-network communication, and its separation of node, link and institutional logic from higher-level processes (engines), suits JWP's requirements. The use of Hydra Platform and Pynsim helps make complex customised models such as the JWP model easier to run and manage with international groups of researchers.
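The simulation pattern the abstract describes — autonomous per-step logic in nodes, with engines acting on institutions (groupings of nodes) — can be sketched as follows. This is an illustrative toy, not Pynsim's actual API; all names and numbers are invented.

```python
# Sketch: nodes run their own logic each time step; an engine (sub-model)
# then acts over an institution, i.e. a grouping of nodes.
class Node:
    def __init__(self, name, demand):
        self.name, self.demand, self.allocation = name, demand, 0.0
    def step(self):
        self.allocation = 0.0            # reset before the engine allocates

class Institution:
    """A grouping of nodes whose allocation is decided jointly."""
    def __init__(self, name, nodes):
        self.name, self.nodes = name, nodes

class AllocationEngine:
    """Engine: splits a fixed supply across one institution's nodes."""
    def __init__(self, supply):
        self.supply = supply
    def run(self, institution):
        total = sum(n.demand for n in institution.nodes)
        for n in institution.nodes:
            n.allocation = self.supply * n.demand / total

nodes = [Node("city", 10.0), Node("farms", 30.0)]
utility = Institution("water_utility", nodes)
engine = AllocationEngine(supply=25.0)

for t in range(3):                       # the time-step loop
    for n in nodes:
        n.step()                         # autonomous node logic
    engine.run(utility)                  # engine acts on the institution
    print(t, {n.name: round(n.allocation, 1) for n in nodes})
```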
Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik; Malisoux, Laurent; Nielsen, Rasmus Oestergaard
2017-11-06
Participation in half-marathons has increased steeply during the past decade. In line with this, a vast number of half-marathon running schedules have surfaced. Unfortunately, the injury incidence proportion for half-marathoners has been found to exceed 30% during 1-year follow-up. The majority of running-related injuries are suggested to develop as overuse injuries, which lead to injury if the cumulative training load over one or more training sessions exceeds the runner's load capacity for adaptive tissue repair. Because load capacity increases as runners adapt to training, running experience and pace abilities can be used as estimates of load capacity. Since no evidence-based knowledge exists on how to plan appropriate half-marathon running schedules that account for the level of running experience and running pace, the aim of ProjectRun21 is to investigate the association between running experience or running pace and the risk of running-related injury. Healthy runners between 18 and 65 years of age who use a Global Positioning System (GPS) watch will be invited to participate in this 14-week prospective cohort study. Runners will be allowed to self-select one of three half-marathon running schedules developed for the study. Running data will be collected objectively by GPS. Injury will be based on the consensus-based time-loss definition by Yamato et al.: "Running-related (training or competition) musculoskeletal pain in the lower limbs that causes a restriction on or stoppage of running (distance, speed, duration, or training) for at least 7 days or 3 consecutive scheduled training sessions, or that requires the runner to consult a physician or other health professional". Running experience and running pace will be included as primary exposures, while the exposure to running is pre-fixed in the running schedules and thereby conditioned by design. Time-to-event models will be used for analytical purposes. ProjectRun21 will examine whether particular subgroups of runners with certain running experiences and running paces sustain more running-related injuries than other subgroups. This will enable sport coaches and physiotherapists, as well as the runners themselves, to evaluate the injury risk of taking up a 14-week running schedule for half-marathon.
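For illustration, a time-to-event analysis of the kind the protocol plans could look like the sketch below, using the lifelines library. The data frame, column names, and values are all hypothetical; the real study defines injury by the Yamato et al. criteria quoted above.

```python
# Sketch: Cox proportional-hazards model relating running experience and
# pace to time-to-injury. Data are invented and far too small for real use.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "weeks_to_injury_or_censoring": [14, 6, 14, 9, 14, 11, 8, 14],
    "injured": [0, 1, 0, 1, 0, 1, 1, 0],            # 1 = injury event
    "years_running_experience": [5, 1, 8, 2, 10, 1, 6, 2],
    "pace_min_per_km": [5.0, 6.5, 4.8, 6.0, 4.5, 6.8, 5.2, 6.1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_to_injury_or_censoring", event_col="injured")
cph.print_summary()   # hazard ratios for experience and pace
```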
NASA Astrophysics Data System (ADS)
Rauser, Florian; Vamborg, Freja
2016-04-01
The interdisciplinary project on High Definition Clouds and Precipitation for advancing climate prediction HD(CP)2 (hdcp2.eu) is an example of the trend in fundamental research in Europe to increasingly focus on large national and international research programs that require strong scientific coordination. The current system has traditionally been host-based: project coordination activities and funding are placed at the host institute of the central lead PI of the project. This approach is simple and has the advantage of strong collaboration between project coordinator and lead PI, but it exhibits a list of strong, inherent disadvantages that are also mentioned in this session's description: no community best-practice development, lack of integration between similar projects, inefficient methodology development and usage, and finally poor career development opportunities for the coordinators. Project coordinators often leave the project before it is finalized, leaving some of the fundamentally important closing processes to the PIs. This systematically prevents the creation of professional science management expertise within academia and leads to an imbalance that keeps the outcomes of large research programs from informing future funding decisions. Project coordinators in academia often do not work in a professional project office environment that could distribute activities and use professional tools and methods across different projects. Instead, every new project manager has to start methodological work anew (communication infrastructure, meetings, reporting), even though the technological needs of large research projects are similar. This decreases the efficiency of the coordination and leads to funding that is effectively misallocated. We propose to challenge this system by creating a permanent, virtual "Centre for Earth System Science Management CESSMA" (cessma.com), changing the approach from host-based to centre-based. This should complement the current system by creating permanent, sustained options for interactions between large research projects in similar fields. In the long run such a centre might improve on the host-based system, because the centre-based solution allows multiple projects to be coordinated in conjunction by experienced science managers, exploiting overlap in meeting organization, reporting, infrastructure, travel and so on. To still maintain close cooperation between project managers and lead PIs, we envision a virtual centre that creates extensive collaborative opportunities by organizing yearly retreats, a shared technical database, et cetera. As "CESSMA" is work in progress (we have applied for funding for 2016-18), we would like to use this opportunity to discuss opportunities, potential problems, experiences and options for this attempt to institutionalise the very reason for this session: improved, coordinated, effective science coordination; and to create a central focal point for public/academia interactions.
Making real options really work.
van Putten, Alexander B; MacMillan, Ian C
2004-12-01
As a way to value growth opportunities, real options have had a difficult time catching on with managers. Many CFOs believe the method ensures the overvaluation of risky projects. This concern is legitimate, but abandoning real options as a valuation model isn't the solution. Companies that rely solely on discounted cash flow (DCF) analysis underestimate the value of their projects and may fail to invest enough in uncertain but highly promising opportunities. CFOs need not--and should not--choose one approach over the other. Far from being a replacement for DCF analysis, real options are an essential complement, and a project's total value should encompass both. DCF captures a base estimate of value; real options take into account the potential for big gains. This is not to say that there aren't problems with real options. As currently applied, they focus almost exclusively on the risks associated with revenues, ignoring the risks associated with a project's costs. It's also true that option valuations almost always ignore assets that an initial investment in a subsequently abandoned project will often leave the company. In this article, the authors present a simple formula for combining DCF and option valuations that addresses these two problems. Using an integrated approach, managers will, in the long run, select better projects than their more timid competitors while keeping risk under control. Thus, they will outperform their rivals in both the product and the capital markets.
Hildebrandt, Helmut; Schmitt, Gwendolyn; Roth, Monika; Stunder, Brigitte
2011-01-01
The regional integrated care model "Gesundes Kinzigtal" pursues the idea of integrated health care with a special focus on increasing the health gain of the served population. Physicians (general practitioners) and psychotherapists, physiotherapists, hospitals, nursing services, non-profit associations, fitness centers, and health insurance companies work closely together with a regional management company and its programs on prevention and care coordination and enhancement. The 10-year project is run by a company founded by the physician network "MQNK" and "OptiMedis AG", a corporation with a public health background specialising in integrated health care. The aim of this project is to enhance prevention and the quality of health care for a whole region in a sustainable way, and to decrease the costs of care. The article describes the special funding model of the project, the engagement of patients, and the different health and prevention programmes. The programmes and projects are developed, implemented, and evaluated by multidisciplinary teams. Copyright © 2011. Published by Elsevier GmbH.
Intelligent Shuttle Management and Routing Algorithm
NASA Astrophysics Data System (ADS)
Thomas, Toshen M.; Subashanthini, S.
2017-11-01
Nowadays, most large universities and campuses run shuttle cabs to cater to the transportation needs of students and faculty. While some shuttle services charge a modest fare for usage, no digital payment system is on board these vehicles, so they cannot go truly cashless. Even more troublesome, at certain times of day some of these cabs run with only a handful of passengers, which can result in unwanted budget losses for the shuttle operator. The main purpose of this paper is to create a system with two types of applications, a web portal and an Android app, to digitize the shuttle cab industry. This system can be used for digital cashless payment, tracking passengers, tracking cabs, and, more importantly, managing the number of shuttle cabs on every route to maximize profit. The project is built upon an ASP.NET website connected to a cloud service, along with an Android app that reads each passenger's ID using an attached barcode reader, records the current GPS coordinates, and sends these data to the cloud for processing using the phone's internet connectivity.
Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan
2015-01-01
Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.
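Underneath a web interface like JMS, each workflow stage is ultimately handed to the cluster's resource manager, with dependencies chaining the stages. The sketch below illustrates that idea using Torque-style qsub submission (JMS's actual integration is richer than this); the stage script names are hypothetical.

```python
# Sketch: submit a three-stage pipeline to a batch resource manager,
# chaining stages with job dependencies (Torque's qsub syntax shown).
import subprocess

def submit(script, after=None):
    """Submit a batch script; optionally run only after a previous job."""
    cmd = ["qsub"]
    if after:
        cmd += ["-W", f"depend=afterok:{after}"]
    cmd.append(script)
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout.strip()             # the scheduler's job ID

stage1 = submit("align_sequences.sh")
stage2 = submit("build_model.sh", after=stage1)   # waits for stage 1
stage3 = submit("score_results.sh", after=stage2)
print("submitted pipeline:", stage1, stage2, stage3)
```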
NASA Astrophysics Data System (ADS)
Niemand, C.; Kuhn, K.; Schwarze, R.
2010-12-01
SHARP is a European INTERREG IVc program. It focuses on the exchange of innovative technologies to protect groundwater resources for future generations, considering climate change and the different geological and geographical conditions. The regions involved are Austria, the United Kingdom, Poland, Italy, Macedonia, Malta, Greece and Germany. They will exchange practical know-how and also determine know-how demands concerning SHARP's key contents: general groundwater management tools, artificial groundwater recharge technologies, groundwater monitoring systems, strategic use of groundwater resources for drinking water, irrigation and industry, techniques to safeguard water quality and quantity, drinking water safety plans, risk management tools and water balance models. SHARP outputs and results will influence regional policy in the frame of sustainable groundwater management, to preserve and improve the quality and quantity of groundwater reservoirs for future generations. The main focus of the Saxon State Office for Environment, Agriculture and Landscape in this project is the enhancement and purposive use of water balance models. Since 1992, its scientists have compared different existing water balance models at different scales and coupled them with groundwater models. For example, in the KLIWEP project (Assessment of Impacts of Climate Change Projections on Water and Matter Balance for the Catchment of River Parthe in Saxony), the coupled model WaSiM-ETH - PCGEOFIM® has been used to study the impact of climate change on water balance and water supplies. The KliWES project (Assessment of the Impacts of Climate Change Projections on Water and Matter Balance for Catchment Areas in Saxony), still running, comprises studies of the fundamental effects of climate change on catchments in Saxony. The project objective is to assess Saxon catchments according to the vulnerability of their water resources to climate change projections, in order to derive region-specific recommendations for management actions. The model comparisons within reference areas showed significant differences in outcome: the values of water balance components calculated with different models can differ by a multiple of their value. The SHARP project was prepared by several previous projects that tested suitable water balance models, and it is now able to assist this knowledge transfer.
Climate@Home: Crowdsourcing Climate Change Research
NASA Astrophysics Data System (ADS)
Xu, C.; Yang, C.; Li, J.; Sun, M.; Bambacus, M.
2011-12-01
Climate change deeply impacts human wellbeing. Significant amounts of resources have been invested in building supercomputers that are capable of running advanced climate models, which help scientists understand climate change mechanisms and predict its trend. Although climate change influences all human beings, the general public is largely excluded from the research. On the other hand, scientists are eagerly seeking communication mediums for effectively enlightening the public on climate change and its consequences. The Climate@Home project is devoted to connecting the two ends with an innovative solution: crowdsourcing climate computing to the general public by harvesting volunteered computing resources from the participants. A distributed web-based computing platform will be built to support climate computing, and the general public can 'plug in' their personal computers to participate in the research. People contribute the spare computing power of their computers to run a computer model, which is used by scientists to predict climate change. Traditionally, only supercomputers could handle such a large processing load. By orchestrating massive numbers of personal computers to perform atomized data processing tasks, investments in new supercomputers, the energy consumed by supercomputers, and the carbon released by supercomputers are reduced. Meanwhile, the platform forms a social network of climate researchers and the general public, which may be leveraged to raise climate awareness among the participants. A portal is to be built as the gateway to the Climate@Home project. Three types of roles and the corresponding functionalities are designed and supported. The end users include citizen participants, climate scientists, and project managers. Citizen participants connect their computing resources to the platform by downloading and installing a computing engine on their personal computers. Computer climate models are defined at the server side. Climate scientists configure model parameters through the portal user interface. After model configuration, scientists launch the computing task. Next, data are atomized and distributed to computing engines running on citizen participants' computers. Scientists receive notifications on the completion of computing tasks and examine modeling results via the visualization modules of the portal. Computing tasks, computing resources, and participants are managed by project managers via portal tools. A portal prototype has been built as a proof of concept. Three forums have been set up for different groups of users to share information on the science, technology, and educational outreach aspects of the project. A Facebook account has been set up to distribute messages via the most popular social networking platform, and new threads are synchronized from the forums to Facebook. A mapping tool displays the geographic locations of the participants and the status of tasks on each client node. A group of users has been invited to test functions such as the forums, blogs, and computing resource monitoring.
Dambach, Peter; Traoré, Issouf; Kaiser, Achim; Sié, Ali; Sauerborn, Rainer; Becker, Norbert
2016-09-29
Recent malaria control and elimination attempts show remarkable success in several parts of sub-Saharan Africa. Vector control via larval source management represents a new and, to date, underrepresented approach in low-income countries to further reduce malaria transmission. Although the positive impact of such campaigns on malaria incidence has been researched, there is a lack of data on the prerequisites for implementing such programs routinely and at large scale. Our objectives are to point out important steps in implementing an anti-malaria larviciding campaign in a resource- and infrastructure-constrained setting and to share the lessons learned from our experience during a three-year intervention study in rural Burkina Faso. We describe the approaches we followed and the challenges that were encountered during the EMIRA project, a three-year study on the impact of environmental larviciding on vector ecology and human health. An inventory of all performed work packages and associated problems and peculiarities was assembled. Key to the successful implementation of the larviciding program within a health district was the support and infrastructure of the government-run local research center. This included the availability of trained scientific personnel for local project management, data collection and analysis by medical personnel, entomologists and demographers, and teams of fieldworkers for the larviciding intervention. A detailed a priori assessment of the environment and vector breeding site ecology was essential to calculate personnel requirements and the need for larvicide and application equipment. In our case of a three-year project, solid funding for the whole duration was an important issue, which restricted the number of possible donors. We found that recruiting qualified field personnel in sufficient numbers was not always easy, and training in application techniques and basic entomological knowledge required several weeks of theoretical and practical instruction. A further crucial point was to establish an effective quality control system that ensured the timely verification of larviciding success and facilitated timely data handling. While the experience of running a larviciding campaign may vary globally, the experience gained and the methods used in the Nouna health district may be employed in similar settings. Our observations highlight important components and strategies that should be taken into account when planning and running a similar larviciding program against malaria in a resource-limited setting. A strong local partnership, meticulous planning with the possibility of ad-hoc adaptation of project components, and a reliable source of funding turned out to be crucial factors in successfully accomplishing such a project.
Tolopko, Andrew N; Sullivan, John P; Erickson, Sean D; Wrobel, David; Chiang, Su L; Rudnicki, Katrina; Rudnicki, Stewart; Nale, Jennifer; Selfors, Laura M; Greenhouse, Dara; Muhlich, Jeremy L; Shamu, Caroline E
2010-05-18
Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities.
Cost/CYP: a bottom line that helps keep CSM projects cost-efficient.
1985-01-01
In contraceptive social marketing (CSM), the objective is social good, but project managers also need to run a tight ship, trimming costs, allocating scarce funds, and monitoring their program's progress. One way CSM managers remain cost-conscious is through the concept of couple-years-of-protection (CYP). Devised two decades ago as an administrative tool to compare the effects of different contraceptive methods, CYP's uses have multiplied to include assessing program output and cost effectiveness. Some of the factors affecting cost/CYP are a project's age, sales volume, management efficiency, and product prices and line; these factors are interconnected. The cost/CYP figures given here do not include outlays for commodities. While the Agency for International Development's commodity costs alter slightly with each new purchase contract, the agency reports that a condom costs about 4 cents (US), an oral contraceptive (OC) cycle about 12 cents, and a spermicidal tablet about 7 cents. CSM projects have relatively high start-up costs. Within a project's first 2 years, expenses must cover such marketing activities as research, packaging, warehousing, and heavy promotion. As a project ages, sales should grow, producing revenues that gradually amortize these initial costs. The Nepal CSM project provides an example of how cost/CYP can improve as a program ages. In 1978, the year sales began, the project's cost/CYP was about $84. For some time the project struggled to get its products to its target market, gradually overcoming several major hurdles. The acquisition of jeeps eased distribution and, by adding another condom brand, sales were increased still more, bringing the cost/CYP down to $8.30 in 1981. With further sales increases and resulting revenues, the cost/CYP dropped to just over $7 in 1983. When the sales volume becomes large enough, CSM projects can achieve economies of scale, which greatly improves cost-efficiency: fixed costs shrink as a proportion of total expenditures. Good project management goes hand-in-hand with increasing sales. Cost/CYP is a powerful tool, but some project strategies alter its meaning. Some projects have lowered net costs by selling products at high prices. This dilutes the social marketing credo of getting low-cost products to those in need. When this occurs, cost/CYP undergoes an identity crisis, for it no longer measures a purely social objective.
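The cost/CYP arithmetic is simple enough to show in a short sketch. The conversion factors below (how many units of a method protect one couple for one year) are illustrative assumptions, not the article's figures, and programs would substitute their own.

```python
# Sketch of the cost-per-CYP calculation. UNITS_PER_CYP values are assumed
# conversion factors for illustration only.
UNITS_PER_CYP = {"condom": 120, "oc_cycle": 15}

def cost_per_cyp(annual_program_cost, sales):
    """sales: units sold per method, e.g. {'condom': 1_200_000}."""
    cyp = sum(qty / UNITS_PER_CYP[m] for m, qty in sales.items())
    return annual_program_cost / cyp

# A young project: high fixed costs, modest sales -> expensive CYP.
print(cost_per_cyp(500_000, {"condom": 600_000, "oc_cycle": 20_000}))   # ~79
# The same project after sales growth amortizes start-up costs.
print(cost_per_cyp(550_000, {"condom": 3_000_000, "oc_cycle": 150_000}))  # ~16
```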
Ecological Impacts of Revegetation and Management Practices of Ski Slopes in Northern Finland
NASA Astrophysics Data System (ADS)
Kangas, Katja; Tolvanen, Anne; Kälkäjä, Tarja; Siikamäki, Pirkko
2009-09-01
Outdoor recreation and nature-based tourism represent an increasingly intensive form of land use that has considerable impacts on native ecosystems. The aim of this paper is to investigate how revegetation and management of ski runs influence soil nutrients, vegetation characteristics, and the possible invasion of nonnative plant species used in revegetation into native ecosystems. A soil and vegetation survey at ski runs and nearby forests, and a factorial experiment simulating ski run construction and management (factors: soil removal, fertilization, and seed sowing) were conducted at Ruka ski resort, in northern Finland, during 2003-2008. According to the survey, management practices had caused considerable changes in the vegetation structure and increased soil nutrient concentrations, pH, and conductivity on the ski runs relative to nearby forests. Seed mixture species sown during the revegetation of ski runs had not spread to adjacent forests. The experimental study showed that the germination of seed mixture species was favored by treatments simulating the management of ski runs, but none of them could eventually establish in the study forest. As nutrient leaching causes both environmental deterioration and changes in vegetation structure, it may eventually pose a greater environmental risk than the spread of seed mixture species alone. Machine grading and fertilization, which have the most drastic effects on soils and vegetation, should, therefore, be minimized when constructing and managing ski runs.
Low dose tomographic fluoroscopy: 4D intervention guidance with running prior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, Barbara; Kuntz, Jan; Brehm, Marcus
Purpose: Today's standard imaging technique in interventional radiology is single- or biplane x-ray fluoroscopy, which delivers 2D projection images as a function of time (2D+T). This state-of-the-art technology, however, suffers from its projective nature and is limited by the superposition of the patient's anatomy. Temporally resolved tomographic volumes (3D+T) would significantly improve the visualization of complex structures. A continuous tomographic data acquisition, if carried out with today's technology, would yield an excessive patient dose. Recently the authors proposed a method that enables tomographic fluoroscopy at the same dose level as projective fluoroscopy, which means that if the scanning time of an intervention guided by projective fluoroscopy is the same as that of an intervention guided by tomographic fluoroscopy, almost the same dose is administered to the patient. The purpose of this work is to extend the authors' previous work and allow for patient motion during the intervention. Methods: The authors propose the running prior technique for adaptation of a prior image. This adaptation is realized by a combination of registration and projection replacement. In a first step the prior is deformed to the current position via affine and deformable registration. Then the information from outdated projections is replaced by newly acquired projections using forward and backprojection steps. The thus adapted volume is the running prior. The proposed method is validated by simulated as well as measured data. To investigate motion during intervention a moving head phantom was simulated. Real in vivo data of a pig are acquired by a prototype CT system consisting of a flat detector and a continuously rotating clinical gantry. Results: With the running prior technique it is possible to correct for motion without additional dose. For an application in intervention guidance both steps of the running prior technique, registration and replacement, are necessary. Reconstructed volumes based on the running prior show high image quality without introducing new artifacts and the interventional materials are displayed at the correct position. Conclusions: The running prior improves the robustness of low dose 3D+T intervention guidance toward intended or unintended patient motion.
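The projection-replacement step can be illustrated with a toy linear-algebra sketch: a single-view, ART-style update that subtracts an outdated projection's contribution and backprojects the newly measured one. The registration stand-in below assumes no motion and the system matrix is random; this is a schematic of the idea, not the authors' implementation.

```python
# Toy sketch of registration + projection replacement on a 1D "volume".
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_angles = 64, 32
A = rng.random((n_angles, n_vox))        # toy system matrix: one row per view

def forward_project(vol, i):             # line integral for view i
    return A[i] @ vol

def backproject(residual, i):            # adjoint for view i, ART-normalized
    row = A[i]
    return row * residual / (row @ row)

def register(prior, _projection):
    return prior                         # stand-in: assume no motion here

def update_running_prior(prior, measured, i):
    moved = register(prior, measured)                 # step 1: registration
    residual = measured - forward_project(moved, i)   # step 2: replacement
    return moved + backproject(residual, i)

truth = rng.random(n_vox)
prior = truth + 0.05 * rng.normal(size=n_vox)         # slightly wrong prior
for i in range(n_angles):                # replace outdated views one by one
    prior = update_running_prior(prior, forward_project(truth, i), i)
print("residual norm:", np.linalg.norm(prior - truth))  # shrinks toward 0
```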
Integration of Titan supercomputer at OLCF with ATLAS Production System
NASA Astrophysics Data System (ADS)
Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at the integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It provides for running standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We discuss the details of the implementation, current experience with running the system, and future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Multi-Mission Automated Task Invocation Subsystem
NASA Technical Reports Server (NTRS)
Cheng, Cecilia S.; Patel, Rajesh R.; Sayfi, Elias M.; Lee, Hyun H.
2009-01-01
Multi-Mission Automated Task Invocation Subsystem (MATIS) is software that establishes a distributed data-processing framework for automated generation of instrument data products from a spacecraft mission. Each mission may set up a set of MATIS servers for processing its data products. MATIS embodies lessons learned in experience with prior instrument- data-product-generation software. MATIS is an event-driven workflow manager that interprets project-specific, user-defined rules for managing processes. It executes programs in response to specific events under specific conditions according to the rules. Because requirements of different missions are too diverse to be satisfied by one program, MATIS accommodates plug-in programs. MATIS is flexible in that users can control such processing parameters as how many pipelines to run and on which computing machines to run them. MATIS has a fail-safe capability. At each step, MATIS captures and retains pertinent information needed to complete the step and start the next step. In the event of a restart, this information is retrieved so that processing can be resumed appropriately. At this writing, it is planned to develop a graphical user interface (GUI) for monitoring and controlling a product generation engine in MATIS. The GUI would enable users to schedule multiple processes and manage the data products produced in the processes. Although MATIS was initially designed for instrument data product generation,
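The rule-interpreting, event-driven core that this description implies can be sketched as follows. The rules, event fields, and commands are invented for illustration, and a real system would launch the matching plug-in program as a subprocess rather than printing it.

```python
# Sketch of an event-driven workflow manager: execute a program in
# response to a specific event when its condition holds.
RULES = [
    # (event name, condition on the event, command template to execute)
    ("raw_file_arrived",
     lambda e: e["instrument"] == "camera_a",
     ["calibrate_camera_a", "{path}"]),
    ("calibrated",
     lambda e: True,
     ["generate_product", "{path}"]),
]

def handle(event_name, event):
    """Run every rule whose event name and condition match this event."""
    for name, condition, argv_template in RULES:
        if name == event_name and condition(event):
            argv = [arg.format(**event) for arg in argv_template]
            print("would run:", " ".join(argv))  # real system: subprocess.run(argv)

handle("raw_file_arrived",
       {"instrument": "camera_a", "path": "/data/raw/img_001.dat"})
```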
MineScan: non-image data monitoring and mining from imaging modalities
NASA Astrophysics Data System (ADS)
Zaidi, Shayan M.; Huff, Dov; Bhalodia, Pankit; Mongkolwat, Pattanasak; Channin, David S.
2003-05-01
This project is intended to capture and interactively display non-image information routinely generated by imaging modalities. This information relates to the device's performance of the individual procedures and is not necessarily available in other information streams such as DICOM headers. While originally intended for use in servicing the modalities, this information can also be presented to radiologists and administrators within the department for both micro- and macro-management purposes. This data can help hospital administrators and radiologists manage available resources and discover clues to indicate what modifications in hospital operations might significantly improve its ability to provide efficient patient care. Data is collected from a departmental CT scanner. The data consists of a running record of exams followed by a list of processing records logged over a 24-hour period. MineScan extracts information from these records and stores it into a database. A statistical program is run once a day to collect relevant metrics. MineScan can be accessed via a Web browser or through an advanced prototype PACS workstation. This information, if provided in real-time, can be used to manage operations in a busy department. Even when provided historically, the data can be used to assess current activity, analyze trends and plan future operations.
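The collect-parse-load-summarize pipeline described above can be sketched briefly. The log format below is hypothetical, since actual modality records are vendor-specific.

```python
# Sketch: parse a day's modality log, load exam records into a database,
# and compute simple daily metrics per protocol.
import sqlite3

LOG_LINES = [  # hypothetical one-record-per-line modality log
    "2003-04-01 08:12|EXAM|CT_HEAD|duration_s=240",
    "2003-04-01 08:31|EXAM|CT_CHEST|duration_s=310",
    "2003-04-01 09:02|EXAM|CT_HEAD|duration_s=225",
]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE exams (ts TEXT, protocol TEXT, duration_s INT)")
for line in LOG_LINES:
    ts, _, protocol, kv = line.split("|")
    db.execute("INSERT INTO exams VALUES (?, ?, ?)",
               (ts, protocol, int(kv.split("=")[1])))

# Daily metric: exam count and mean duration per protocol.
for row in db.execute("SELECT protocol, COUNT(*), AVG(duration_s) "
                      "FROM exams GROUP BY protocol"):
    print(row)   # e.g. ('CT_HEAD', 2, 232.5)
```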
National Water Model assessment for water management needs over the Western United States.
NASA Astrophysics Data System (ADS)
Viterbo, F.; Thorstensen, A.; Cifelli, R.; Hughes, M.; Johnson, L.; Gochis, D.; Wood, A.; Nowak, K.; Dahm, K.
2017-12-01
The NOAA National Water Model (NWM) became operational in August 2016, providing the first-ever real-time, distributed, high-resolution forecasts for the continental United States. Since the model predictions cover the CONUS scale, there is a need to evaluate the NWM in different regions to assess the wide variety and heterogeneity of hydrological processes it includes (e.g., snowmelt, ice freezing, flash flood events). In particular, to address water management needs in the western U.S., a collaborative project between the Bureau of Reclamation, NOAA, and NCAR is ongoing to assess the NWM's performance for reservoir inflow forecasting and water management operations. In this work, the NWM is evaluated using different forecast ranges (short to medium) and retrospective historical runs forced by North American Land Data Assimilation System (NLDAS) analysis, to assess NWM skill over key headwater watersheds in the western U.S. that are of interest to the Bureau of Reclamation. The streamflow results are analyzed and compared with the available observations at gauge sites, evaluating different NWM operational versions together with the existing local River Forecast Center forecasts. The NWM uncertainty is also considered, by evaluating the propagation of precipitation forcing uncertainties into the resulting hydrograph. In addition, the possible advantages of high-resolution distributed output variables (such as soil moisture and evapotranspiration fluxes) are investigated, to determine the utility of such information for water managers in areas that traditionally have not had any forecast information. The results highlight the NWM's ability to provide high-resolution forecast information in space and time. As anticipated, performance is best in regions that are dominated by natural flows and where the model has benefited from parameter calibration efforts. In highly regulated basins, water management operations result in the NWM overestimating peak flows and producing overly rapid recession curves. As a future project goal, reforecasts will be run at target locations, ingesting water management information into the NWM and comparing the new results with the current operational forecasts.
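One standard skill measure for streamflow comparisons of this kind is the Nash-Sutcliffe efficiency; the sketch below shows its computation on illustrative numbers. The abstract does not specify which skill metrics the project used, so this is an assumed example of the genre.

```python
# Sketch: Nash-Sutcliffe efficiency (1.0 = perfect; below 0 means the
# model is worse than simply predicting the mean of the observations).
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / \
                 np.sum((observed - observed.mean()) ** 2)

obs = [12.0, 15.0, 40.0, 33.0, 20.0]   # gauge flows (illustrative)
sim = [10.0, 14.0, 45.0, 30.0, 22.0]   # retrospective model run
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```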
Post-project appraisals in adaptive management of river channel restoration.
Downs, Peter W; Kondolf, G Mathias
2002-04-01
Post-project appraisals (PPAs) can evaluate river restoration schemes in relation to their compliance with design, their short-term performance attainment, and their longer-term geomorphological compatibility with the catchment hydrology and sediment transport processes. PPAs provide the basis for communicating the results of one restoration scheme to another, thereby improving future restoration designs. They also supply essential performance feedback needed for adaptive management, in which management actions are treated as experiments. PPAs allow river restoration success to be defined both in terms of the scheme attaining its performance objectives and in providing a significant learning experience. Different levels of investment in PPA, in terms of pre-project data and follow-up information, bring with them different degrees of understanding and thus different abilities to gauge both types of success. We present four case studies to illustrate how the commitment to PPA has determined the understanding achieved in each case. In Moore's Gulch (California, USA), understanding was severely constrained by the lack of pre-project data and post-implementation monitoring. Pre-project data existed for the Kitswell Brook (Hertfordshire, UK), but the monitoring consisted only of one site visit and thus the understanding achieved is related primarily to design compliance issues. The monitoring undertaken for Deep Run (Maryland, USA) and the River Idle (Nottinghamshire, UK) enabled some understanding of the short-term performance of each scheme. The transferable understanding gained from each case study is used to develop an illustrative five-fold classification of geomorphological PPAs (full, medium-term, short-term, one-shot, and remains) according to their potential as learning experiences. The learning experience is central to adaptive management but rarely articulated in the literature. Here, we gauge the potential via superimposition onto a previous schematic representation of the adaptive management process by Haney and Power (1996). Using PPAs wisely can lead to cutting-edge, complex solutions to river restoration challenges.
Cornish, Flora; Ghosh, Riddhi
2007-01-01
Health promotion interventions with marginalised groups are increasingly expected to demonstrate genuine community participation in their design and delivery. However, ideals of egalitarian democratic participation are far removed from the starting point of the hierarchical and exploitative social relations that typically characterise marginalised communities. What scope is there for health promotion projects to implement ideals of community leadership within the realities of marginalisation and inequality? We examine how the Sonagachi Project, a successful sex-worker-led HIV prevention project in India, has engaged with the unequal social relations in which it is embedded. Our ethnographic study is based on observation of the Project's participatory activities and 39 interviews with a range of its stakeholders (including sex worker employees of the Project, non-sex-worker development professionals, brothel managers, sex workers' clients). The analysis shows that the project is deeply shaped by its relationships with non-sex-worker interest groups. In order to be permitted access to the red light district, it has had to accommodate the interests of local men's clubs and brothel managers. The economic and organisational capacity to run such a project has depended upon the direct input of development professionals and funding agencies. Thus, the 'community' that leads this project is much wider than a local grouping of marginalised sex workers. We argue that, given existing power relations, the engagement with other interest groups was necessary to the project's success. Moreover, as the project has developed, sex workers' interests and leadership have gained increasing prominence. We suggest that existing optimistic expectations of participation inhibit acknowledgement of the troubling work of balancing power relations. Rather than denying such power relations, projects should be expected to plan for them.
Integrating gender into natural resources management projects: USAID lessons learned.
1998-01-01
This article discusses USAID's lessons learned about integrating gender into natural resource management (NRM) projects in Peru, the Philippines, and Kenya. In Peru, USAID integrated women into a solid waste management project by lending money to invest in trash collection supplies. The loans allowed women to collect household waste, transfer it to a landfill, and provide additional sanitary disposal. The women were paid through direct fees from households and through service contracts with municipalities. In Mindanao, the Philippines, women were taught about the health impact of clean water and how to monitor water quality, including the monitoring of E. coli bacteria. Both men and women were taught soil conservation techniques for reducing the amount of silt running into the lake, which interferes with the generation of electricity and affects the health of everyone. The education helped women realize the importance of reducing silt and capitalized on their interest in protecting the health of their families. The women were thus willing to monitor the lake's water quality to determine if the conservation efforts were effective. In Kenya, USAID evaluated its Ecology, Community Organization, and Gender project in the Rift Valley, which helped resettle a landless community and helped with sustainable NRM. The evaluation revealed that women's relative bargaining power was less than men's. Organized capacity building that strengthened women's networks and improved their capacity to push issues onto the community agenda assured women a voice in setting the local NRM agenda.
Ward, R. E.; Purves, T.; Feldman, M.; Schiffman, R. M.; Barry, S.; Christner, M.; Kipa, G.; McCarthy, B. D.; Stiphout, R.
1991-01-01
The Care Windows development project demonstrated the feasibility of an approach designed to add the benefits of an event-driven, graphically-oriented user interface to an existing Medical Information Management System (MIMS) without overstepping economic and logistic constraints. The design solution selected for the Care Windows project incorporates four important design features: (1) the effective de-coupling of servers from requesters, permitting the use of an extensive pre-existing library of MIMS servers; (2) the off-loading of program control functions of the requesters to the workstation processor, reducing the load per transaction on central resources and permitting the use of object-oriented development environments available for microcomputers; (3) the selection of a low-end, GUI-capable workstation consisting of a PC-compatible personal computer running Microsoft Windows 3.0; and (4) the development of a highly layered, modular workstation application, permitting the development of interchangeable modules to ensure portability and adaptability. PMID:1807665
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olive, S.W.; Lamb, B.L.
This paper is an account of the process that evolved during acquisition of the license to operate the Terror Lake hydro-electric power project under the auspices of the Federal Energy Regulatory Commission (FERC). The Terror River is located on Kodiak Island in Alaska. The river is within the Kodiak National Wildlife Refuge; it supports excellent runs of several species of Pacific Salmon which are both commercially important and a prime source of nutrition for the Kodiak brown bear. This paper discusses both the fish and wildlife questions, but concentrates on instream uses and how protection of these uses was decided. In this focus the paper explains the FERC process, gives a history of the Terror Lake Project, and, ultimately, makes recommendations for improved management of controversies within the context of FERC licensing procedures.
NASA Astrophysics Data System (ADS)
Yuan, J.; Kopp, R. E.
2017-12-01
Quantitative risk analysis of regional climate change is crucial for risk management and impact assessment of climate change. Two major challenges in assessing the risks of climate change are that (1) the CMIP5 model runs, which drive the EURO-CORDEX downscaling runs, do not cover the full range of uncertainty of future projections, and (2) climate models may underestimate the probability of tail risks (i.e., extreme events). To overcome these difficulties, this study offers a viable avenue in which a probabilistic climate ensemble is generated using the Surrogate/Model Mixed Ensemble (SMME) method. The probabilistic ensembles for temperature and precipitation are used to assess the range of uncertainty covered by five bias-corrected simulations from the high-resolution (0.11°) EURO-CORDEX database, which were selected by the PESETA (The Projection of Economic impacts of climate change in Sectors of the European Union based on bottom-up Analysis) III project. Results show that the distribution of the SMME ensemble is notably wider than both the distribution of the raw GCM ensemble and the spread of the five EURO-CORDEX runs under RCP8.5. Tail risks are well represented by the SMME ensemble. Both the SMME ensemble and the EURO-CORDEX projections are aggregated to the administrative level and integrated into the impact functions of PESETA III to assess climate risks in Europe. To further evaluate the uncertainties introduced by the downscaling process, we compare the five EURO-CORDEX runs with runs from the corresponding GCMs. Time series of regional means, spatial patterns, and climate indices are examined for the future climate (2080-2099) relative to the present climate (1981-2010). The downscaling process does not appear to be trend-preserving; e.g., the increase in regional mean temperature from EURO-CORDEX is slower than that from the corresponding GCM. The spatial pattern comparison reveals that the differences between each pair of GCM and EURO-CORDEX runs are small in winter. In summer, the temperatures of EURO-CORDEX are generally lower than those of the GCMs, while the drying trends in precipitation of EURO-CORDEX are smaller than those of the GCMs. Climate indices are significantly affected by the bias-correction and downscaling processes. Our study provides valuable information for selecting climate indices in different regions over Europe.
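The core of the spread comparison can be illustrated in a few lines of code. The sketch below is purely illustrative, with synthetic stand-in numbers rather than actual SMME, CMIP5, or EURO-CORDEX output; it computes the 5-95% spread of three ensembles of very different sizes, mirroring the comparison described above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for end-of-century regional mean warming (deg C); in the study
# these would come from the SMME probabilistic ensemble, the raw GCMs, and
# the five EURO-CORDEX downscaled runs.
smme   = rng.normal(4.0, 1.2, size=1000)   # broad probabilistic ensemble
gcms   = rng.normal(4.0, 0.8, size=30)     # raw GCM ensemble
cordex = rng.normal(3.7, 0.5, size=5)      # small downscaled subset

def spread(x, lo=5, hi=95):
    """Width of the central 90% of the ensemble distribution."""
    ql, qh = np.percentile(x, [lo, hi])
    return qh - ql

for name, ens in [("SMME", smme), ("GCMs", gcms), ("EURO-CORDEX", cordex)]:
    print(f"{name:12s} 5-95% spread: {spread(ens):.2f} K")
```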
Expert systems identify fossils and manage large paleontological databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beightol, D.S.; Conrad, M.A.
EXPAL is a computer program permitting the creation and maintenance of comprehensive databases in marine paleontology. It is designed to assist specialists and non-specialists alike. EXPAL includes a powerful expert system based on the morphological descriptors specific to a given group of fossils. The expert system may be used, for example, to describe and automatically identify an unknown specimen. EXPAL was first applied to the Dasycladales (calcareous green algae). Projects are under way for corresponding expert systems and databases on planktonic foraminifers and calpionellids. EXPAL runs on an IBM XT or compatible microcomputer.
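The descriptor-matching core of such an expert system can be sketched compactly. The example below is a hypothetical Python reduction of the idea, with invented taxon names and descriptors; a real system like EXPAL encodes far richer morphological knowledge and rule logic.

```python
# Each taxon is described by the morphological descriptors it requires;
# identification scores a specimen's observed descriptors against each entry.
TAXA = {
    "Taxon A": {"thallus_cylindrical", "laterals_verticillate"},
    "Taxon B": {"thallus_club_shaped", "laterals_simple"},
}

def identify(observed):
    """Return taxa ranked by the fraction of their descriptors observed."""
    scores = {t: len(d & observed) / len(d) for t, d in TAXA.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A specimen showing a cylindrical thallus and verticillate laterals
print(identify({"thallus_cylindrical", "laterals_verticillate"}))
```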
2009-01-30
...tool written in Java to support the automated creation of simulated subnets. It can be run giving it a subnet, the number of hosts to create, the ... network, and can also be used to create subnets with specific profiles.
Subnet Creator command line:
> java -jar SubnetCreator.jar -j [path to client...
... command:
> java -jar jss_client.jar com.mdacorporation.jndms.JSS.Client.JSSBatchClient [file]
5. Software: This is the output file that will store the ...
NASA Astrophysics Data System (ADS)
Mazurov, Alexander; Couturier, Ben; Popov, Dmitry; Farley, Nathanael
2017-10-01
Any time you modify an implementation within a program, or change the compiler version or operating system, you should also do regression testing. You can do regression testing by rerunning existing tests against the changes, to determine whether this breaks anything that worked prior to the change, and by writing new tests where necessary. At LHCb we have a huge codebase which is maintained by many people and can be run within different setups. This situation makes it crucial to guide refactoring with a central profiling system that helps to run tests and find the impact of changes. In our work we present a software architecture and tools for running a profiling system. This system is responsible for systematically running regression tests and for collecting and comparing their results, so that changes between different setups can be observed and reported. The main feature of our solution is that it is based on a microservices architecture. Microservices break a large project into loosely coupled modules, which communicate with each other through simple APIs. This modular architectural style helps us avoid the general pitfalls of monolithic architectures, such as a codebase that is hard to understand and maintain, and ineffective scalability. Our solution also escapes much of the complexity of the microservices deployment process by using software containers and service-management tools. Containers and service managers let us quickly deploy linked modules in development, production, or any other environment. Most of the developed modules are generic, which means that the proposed architecture and tools can be used not only at LHCb but can also be adopted by other experiments and companies.
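At the heart of such a system sits a service that compares the metrics of a new test run against a baseline and reports regressions. The sketch below is a hypothetical simplification in Python; the metric names and the tolerance rule are invented, and the actual LHCb service logic is certainly richer.

```python
def compare_runs(baseline, candidate, tolerance=0.05):
    """Flag metrics that regressed by more than `tolerance` (fractional).
    Assumes lower metric values are better (e.g. time, memory)."""
    regressions = {}
    for metric, base_value in baseline.items():
        new_value = candidate.get(metric)
        if new_value is None:
            regressions[metric] = "missing in candidate"
        elif (new_value - base_value) / base_value > tolerance:
            regressions[metric] = f"{base_value} -> {new_value}"
    return regressions

# e.g. seconds per event and resident memory for two profiling runs
baseline  = {"reco_time": 1.20, "rss_mb": 950}
candidate = {"reco_time": 1.45, "rss_mb": 960}
print(compare_runs(baseline, candidate))   # flags only reco_time
```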
Meals, quarters for 8,200 needed at peak in LNG project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aalund, L.R.
It has everything a real town has except women, children, schools, bars, and old people. It is the huge camp built at Ras Laffan, Qatar, on the shores of the Persian Gulf to lodge and feed over 5,000 workers as they build the first plant in the emirate for liquefying millions of tons of natural gas yearly. Japan's Chiyoda Corp. is the top contractor for the Qatar Liquefied Gas Co. (QatarGas) project, which is owned by a Qatari, French, American, and Japanese consortium. As part of the plant construction contract, Chiyoda built the camp, which Teyseer Services Co., the Qatari affiliate of the French company Sodexho Alliance, now runs and maintains. Sodexho is the world's largest catering/remote-site management organization. It has had all its expertise in those fields put to the test for nearly 4 years supporting this world-scale LNG project, which will be completed this summer. This project is described.
Master Middle Ware: A Tool to Integrate Water Resources and Fish Population Dynamics Models
NASA Astrophysics Data System (ADS)
Yi, S.; Sandoval Solis, S.; Thompson, L. C.; Kilduff, D. P.
2017-12-01
Linking models that investigate separate components of ecosystem processes has the potential to unify messages regarding management decisions by evaluating potential trade-offs in a cohesive framework. This project aimed to improve the ability of riparian resource managers to forecast future water availability conditions and the resulting fish habitat suitability, in order to better inform their management decisions. To accomplish this goal, we developed a middleware tool that is capable of linking and overseeing the operations of two existing models: a water resource planning tool, the Water Evaluation and Planning (WEAP) model, and a habitat-based fish population dynamics model (WEAPhish). First, we designed the Master Middle Ware (MMW) software in Visual Basic for Applications® in one Excel® file that provided a familiar framework for both data input and output. Second, MMW was used to link and jointly operate WEAP and WEAPhish, using Visual Basic for Applications (VBA) macros to implement system-level calls to run the models. To demonstrate the utility of this approach, hydrological, biological, and middleware model components were developed for the Butte Creek basin. This tributary of the Sacramento River, California, is managed for both hydropower and the persistence of a threatened population of spring-run Chinook salmon (Oncorhynchus tschawytscha). While we have demonstrated the use of MMW for a particular watershed and fish population, MMW can be customized for use with different rivers and fish populations, assuming basic data requirements are met. This model integration improves on ad hoc linkages for managing data transfer between software programs by providing a consistent, user-friendly, and familiar interface across different model implementations. Furthermore, the data-viewing capabilities of MMW facilitate the rapid interpretation of model results by hydrologists, fisheries biologists, and resource managers, in order to accelerate learning and management decision making.
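The orchestration pattern, two external models launched in sequence with the first model's outputs passed on as the second model's inputs, can be sketched in a few lines. MMW does this with VBA system calls from Excel; the version below is a hypothetical Python equivalent in which the executable names, flags, and JSON file formats are all invented for illustration.

```python
import json
import subprocess

def run_model(executable, config, output_file):
    """Launch an external model as a system call and return its parsed output."""
    subprocess.run([executable, "--config", config, "--out", output_file],
                   check=True)                 # raise if the model fails
    with open(output_file) as f:
        return json.load(f)

# 1. Run the water-planning model; 2. feed its flows to the fish model.
flows = run_model("./weap_cli", "butte_creek.cfg", "flows.json")
with open("weaphish_input.json", "w") as f:
    json.dump({"daily_flows": flows["daily_flows"]}, f)
habitat = run_model("./weaphish_cli", "weaphish_input.json", "habitat.json")
print(habitat["suitability_summary"])
```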
Quality Assurance Project Plan for Citizen Science Projects
The Quality Assurance Project Plan is necessary for every project that collects or uses environmental data. It documents the project planning process and serves as a blueprint for how your project will run.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Zhenhuan; Boyuka, David; Zou, X
The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induce heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time, before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.
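The essence of layout optimization, rearranging data so that the dominant access pattern touches contiguous bytes, can be illustrated in miniature. The sketch below is not PARLO's algorithm or the ADIOS API; it is a toy Python example showing how tiling a 2-D variable favours spatial subset queries over a plain row-major layout.

```python
import numpy as np

def rechunk(data, chunk):
    """Split a 2-D array into contiguous tiles so that a spatial subset
    query touches a few tiles instead of striding across every row."""
    rows, cols = data.shape
    cr, cc = chunk
    tiles = {}
    for r in range(0, rows, cr):
        for c in range(0, cols, cc):
            tiles[(r, c)] = np.ascontiguousarray(data[r:r + cr, c:c + cc])
    return tiles

field = np.arange(64.0).reshape(8, 8)   # a stand-in simulation variable
tiles = rechunk(field, (4, 4))          # layout favouring 4x4 region reads
print(sorted(tiles))                    # tile origins: (0,0) (0,4) (4,0) (4,4)
```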
Food waste in Central Europe - challenges and solutions
NASA Astrophysics Data System (ADS)
den Boer, Jan; Kobel, Przemysław; Dyjakon, Arkadiusz; Urbańska, Klaudia; Obersteiner, Gudrun; Hrad, Marlies; Schmied, Elisabeth; den Boer, Emilia
2017-11-01
Food waste is an important issue in the global economy. In the EU many activities aimed at this topic are carried out; in Central Europe, however, the field is still largely unexplored. There is a lack of reliable data on food waste quantities in this region, and few preventive actions are taken. To improve this situation, the STREFOWA (Strategies to Reduce and Manage Food Waste in Central Europe) project was initiated. It is an international project (Austria, Czech Republic, Hungary, Italy, Poland), funded by the Interreg Central Europe programme and running from July 2016 to June 2019. Its main purpose is to provide solutions to prevent and manage food waste throughout the entire food supply chain. The results of STREFOWA will have positive economic, social, and environmental impacts.
[Influence of the space layout of a surgical department on use efficiency].
Weiss, G; von Baer, R; Riedl, S
2002-02-01
There is a growing gap between the rapidly increasing diagnostic and therapeutic opportunities and patient demands on one side and continuously declining hospital budgets on the other. This gap forces hospitals to search for rationalization potential and ways to increase their efficiency. It is well known that the operating theatre unit is one of the most important internal cost factors. Many reorganization projects therefore focus on operating theatres. In Germany, several alternative operating room layouts have been developed in order to reduce running and building costs and to reach a high degree of flexibility in everyday use by means of an improved design. This article analyses and compares the classic operating room and four alternative layouts with respect to their suitability for reaching the promised objectives and, especially, for achieving economically efficient business management. Furthermore, preferred layouts for certain types of operations are recommended.
GSDC: A Unique Data Center in Korea for HEP research
NASA Astrophysics Data System (ADS)
Ahn, Sang-Un
2017-04-01
Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI) is a unique data center in South Korea established to promote fundamental research fields by supporting them with expertise in Information and Communication Technology (ICT) and with infrastructure for High Performance Computing (HPC), High Throughput Computing (HTC), and networking. GSDC has supported various research fields in South Korea dealing with large-scale data, e.g. the RENO experiment for neutrino research, the LIGO experiment for gravitational wave detection, genome sequencing projects for bio-medical research, and HEP experiments such as CDF at FNAL, Belle at KEK, and STAR at BNL. In particular, GSDC has run a Tier-1 center for the ALICE experiment at the LHC at CERN since 2013. In this talk, we present an overview of the computing infrastructure that GSDC runs for these research fields and discuss the data center infrastructure management system deployed at GSDC.
NASA Astrophysics Data System (ADS)
Vallot, Dorothée; Applegate, Patrick; Pettersson, Rickard
2013-04-01
Projecting future climate and ice sheet development requires sophisticated models and extensive field observations. Given the present state of our knowledge, it is very difficult to say what will happen with certainty. Despite the ongoing increase in atmospheric greenhouse gas concentrations, the possibility that a new ice sheet might form over Scandinavia in the far distant future cannot be excluded. The growth of a new Scandinavian ice sheet would have important consequences for buried nuclear waste repositories. The Greenland Analogue Project (GAP), initiated by the Swedish Nuclear Fuel and Waste Management Company (SKB), is working to assess the effects of a possible future ice sheet on groundwater flow by studying a constrained domain in western Greenland through field measurements (including deep bedrock drilling in front of the ice sheet) combined with numerical modeling. To address the needs of the GAP project, we interpolated results from an ensemble of ice sheet model runs onto the smaller and more finely resolved modeling domain used in the GAP project's hydrologic modeling. Three runs with fairly different positive degree-day factors were chosen from among those that reproduced the modern ice margin at the borehole position. The interpolated results describe changes in hydrologically relevant variables over two time periods, 115 ka to 80 ka and 20 ka to 1 ka. In the first of these time periods, the ice margin advances over the model domain; in the second, it retreats. The spatially and temporally dependent variables that we treated include the ice thickness, basal melting rate, surface mass balance, basal temperature, basal thermal regime (frozen or thawed), surface temperature, and basal water pressure. The melt flux is also calculated.
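Regridding coarse ice-sheet-model output onto a finer hydrologic domain is, at its core, a scattered-data interpolation task. The sketch below illustrates the operation with scipy.interpolate.griddata on synthetic ice-thickness values; the grids, values, and units are invented, and the actual GAP workflow surely involved additional masking and quality control.

```python
import numpy as np
from scipy.interpolate import griddata

# Coarse ice-sheet-model grid (km) with a synthetic thickness field (m)
xc, yc = np.meshgrid(np.arange(0, 100, 20), np.arange(0, 100, 20))
thickness = 1500.0 - 5.0 * xc + 2.0 * yc

# Finer hydrologic-model domain nested inside the coarse grid
xf, yf = np.meshgrid(np.arange(0, 81, 5), np.arange(0, 81, 5))

points = np.column_stack([xc.ravel(), yc.ravel()])
fine = griddata(points, thickness.ravel(), (xf, yf), method="linear")
print(fine.shape)   # (17, 17): one value per fine-grid cell
```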
The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms
Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie
2009-01-01
The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with the critical information they need to respond quickly and efficiently and to increase public safety and mitigate damage associated with powerful coastal storms. For instance, high resolution local models will predict detailed wave heights, breaking patterns, and current strengths for use in warning systems for harbor-mouth navigation and densely populated coastal regions where beach safety is threatened. The offline applications are intended to equip coastal managers with the information needed to manage and allocate their resources effectively to protect sections of coast that may be most vulnerable to future severe storms.
Developing a medical emergency team running sheet to improve clinical handoff and documentation.
Mardegan, Karen; Heland, Melodie; Whitelock, Tifany; Millar, Robert; Jones, Daryl
2013-12-01
During medical emergency team (MET) and cardiac arrest calls, a scribe usually records events on a running sheet. There is more agreement on what data should be recorded in cardiac arrest calls than for MET calls. In addition, handoff (handover) from ward staff to the arriving MET may be variable. In a quality improvement project, a novel MET running sheet was developed to document events and therapies administered during MET calls. Key characteristics of the form were improved form layout, increased space for event documentation, and prompts to assist handoff to the arriving MET using the Identify, Situation, Background, Assessment, Request (ISBAR) format. Ward nurses commonly involved in MET activation were surveyed to assess their perceptions of the new MET running sheet. Files of 100 consecutive MET calls were reviewed to assess compliance. Of 109 nurses invited to complete the survey, 103 did so (94.5% response rate). Overall, 87 (84.5%) of the 103 respondents agreed or strongly agreed that the new MET running sheet was better than the previous form for documenting MET management, and 58 (57.4%) of 101 respondents agreed or strongly agreed that it assisted handoff. The form was completed in 91 of a sample of 100 consecutive MET calls. Areas of less complete documentation included aspects of the ISBAR handover to the arriving MET and notification of the next of kin and usual clinicians at the completion of the call. The MET running sheet, tailored to the clinical events that occur during episodes of MET review, may assist handoff from ward nurses to the arriving MET and event documentation.
Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Boccali, T.; Giordano, D.; Girone, M.; Neri, M.; Magini, N.; Kuznetsov, V.; Wildish, T.
2015-12-01
During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collision data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, and transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulations of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity” of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, and the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and on computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for how to tune the initial distribution of data in anticipation of how it will be used in Run-2 and beyond.
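A minimal version of a popularity study aggregates per-dataset access counts and maps them to placement decisions. The Python sketch below is a toy illustration, with invented dataset names and a deliberately naive replica rule; it is not the CMS popularity service.

```python
from collections import Counter

# Hypothetical access log: one record per (user, dataset) read
accesses = [
    ("user1", "/ZMuMu/Run1-v1/AOD"),
    ("user2", "/ZMuMu/Run1-v1/AOD"),
    ("user3", "/ZMuMu/Run1-v1/AOD"),
    ("user1", "/MinBias/Sim-v2/AODSIM"),
]

popularity = Counter(ds for _, ds in accesses)

def replicas(n_accesses, base=1, per=2, cap=5):
    """Toy placement rule: one extra replica per `per` accesses, capped."""
    return min(base + n_accesses // per, cap)

for ds, n in popularity.most_common():
    print(f"{ds:28s} accesses={n} -> replicas={replicas(n)}")
```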
Children's Fitness. Managing a Running Program.
ERIC Educational Resources Information Center
Hinkle, J. Scott; Tuckman, Bruce W.
1987-01-01
A running program to increase the cardiovascular fitness levels of fourth-, fifth-, and sixth-grade children is described. Discussed are the running environment, implementation of a running program, feedback, and reinforcement. (MT)
Costa Rica regroups for sales kick-off.
1985-01-01
Costa Rica's contraceptive social marketing project is scheduled to be launched in March 1985. The project is run through a for-profit corporation, Asdecosta, which is owned by the Costa Rican International Planned Parenthood affiliate. Asdecosta was formed as a for-profit entity because Costa Rican law prohibits product sales by nonprofit groups. The US Agency for International Development (AID) will allocate US$1.2 million over a 5-year period, 1983-88. The project manager, Jorge Lopez, is an economist with considerable experience in marketing. The project has lined up a top national distributor, a packaging company, and an advertising agency for its 1st product, a condom manufactured in the US by Ansell. Asdecosta's target market is projected to include 50,000-75,000 couples at its peak operating capacity. An estimated 65% of Costa Rican women have used a contraceptive method at some time. The condom, pill, and IUD are the most popular methods. Eventually, Asdecosta expects to expand its product line to include oral contraceptives. Another goal is to counter the high dropout rate among users of government and other family planning services.
The WorkQueue project - a task queue for the CMS workload management system
NASA Astrophysics Data System (ADS)
Ryu, S.; Wakefield, S.
2012-12-01
We present the development and first experience of a new component (termed WorkQueue) in the CMS workload management system. This component provides a link between a global request system (Request Manager) and agents (WMAgents) which process requests at compute and storage resources (known as sites). These requests typically consist of creation or processing of a data sample (possibly terabytes in size). Unlike the standard concept of a task queue, the WorkQueue does not contain fully resolved work units (known typically as jobs in HEP). This would require the WorkQueue to run computationally heavy algorithms that are better suited to run in the WMAgents. Instead the request specifies an algorithm that the WorkQueue uses to split the request into reasonable size chunks (known as elements). An advantage of performing lazy evaluation of an element is that expanding datasets can be accommodated by having job details resolved as late as possible. The WorkQueue architecture consists of a global WorkQueue which obtains requests from the request system, expands them and forms an element ordering based on the request priority. Each WMAgent contains a local WorkQueue which buffers work close to the agent, this overcomes temporary unavailability of the global WorkQueue and reduces latency for an agent to begin processing. Elements are pulled from the global WorkQueue to the local WorkQueue and into the WMAgent based on the estimate of the amount of work within the element and the resources available to the agent. WorkQueue is based on CouchDB, a document oriented NoSQL database. The WorkQueue uses the features of CouchDB (map/reduce views and bi-directional replication between distributed instances) to provide a scalable distributed system for managing large queues of work. The project described here represents an improvement over the old approach to workload management in CMS which involved individual operators feeding requests into agents. This new approach allows for a system where individual WMAgents are transient and can be added or removed from the system as needed.
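The lazy-splitting idea can be shown in miniature: the global queue stores compact elements produced by a splitting algorithm, and a local queue pulls only as much as the agent can process. The sketch below is a hypothetical Python reduction; the field names, chunk size, and priority rule are invented, and the real WorkQueue persists elements in CouchDB rather than in memory.

```python
from collections import deque

def split_request(request, chunk_size=100):
    """Lazily split a request into elements of roughly `chunk_size` units,
    so job details can be resolved as late as possible."""
    remaining = request["units"]
    while remaining > 0:
        yield {"request": request["name"],
               "units": min(chunk_size, remaining),
               "priority": request["priority"]}
        remaining -= chunk_size

global_queue = deque()
for req in [{"name": "mc_gen", "units": 250, "priority": 2},
            {"name": "rereco", "units": 120, "priority": 1}]:
    global_queue.extend(split_request(req))

# An agent's local queue pulls the top-priority elements it has slots for
# (here, a lower number means higher priority).
free_slots = 3
local_queue = sorted(global_queue, key=lambda e: e["priority"])[:free_slots]
print(local_queue)
```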
Multimedia content analysis and indexing: evaluation of a distributed and scalable architecture
NASA Astrophysics Data System (ADS)
Mandviwala, Hasnain; Blackwell, Scott; Weikart, Chris; Van Thong, Jean-Manuel
2003-11-01
Multimedia search engines facilitate the retrieval of documents from large media content archives now available via intranets and the Internet. Over the past several years, many research projects have focused on algorithms for analyzing and indexing media content efficiently. However, special system architectures are required to process large amounts of content from real-time feeds or existing archives. Possible solutions include dedicated distributed architectures for analyzing content rapidly and for making it searchable. The system architecture we propose implements such an approach: a highly distributed and reconfigurable batch media content analyzer that can process media streams and static media repositories. Our distributed media analysis application handles media acquisition, content processing, and document indexing. This collection of modules is orchestrated by a task flow management component, exploiting data and pipeline parallelism in the application. A scheduler manages load balancing and prioritizes the different tasks. Workers implement application-specific modules that can be deployed on an arbitrary number of nodes running different operating systems. Each application module is exposed as a web service, implemented with industry-standard interoperable middleware components such as Microsoft ASP.NET and Sun J2EE. Our system architecture is the next generation system for the multimedia indexing application demonstrated by www.speechbot.com. It can process large volumes of audio recordings with minimal support and maintenance, while running on low-cost commodity hardware. The system has been evaluated on a server farm running concurrent content analysis processes.
Simulation environment and graphical visualization environment: a COPD use-case.
Huertas-Migueláñez, Mercedes; Mora, Daniel; Cano, Isaac; Maier, Dieter; Gomez-Cabrero, David; Lluch-Ariet, Magí; Miralles, Felip
2014-11-28
Today, many different tools have been developed to execute and visualize physiological models that represent human physiology. Most of these tools run models written in one specific programming language, which in turn simplifies the communication among models. Nevertheless, not all of these tools are able to run models written in different programming languages. In addition, interoperability between such models remains an unresolved issue. In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims at helping and supporting bio-researchers and medical students to understand the internal mechanisms of the human body through the use of physiological models. The tool is composed of a graphical visualization environment, which is a web interface through which the user can interact with the models, and a simulation workflow management system composed of a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. The simulation environment presented here has been shown to allow users to research and study the internal mechanisms of human physiology via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use-case scenarios.
NASA Astrophysics Data System (ADS)
Crossman, J.; Futter, M. N.; Palmer, M.; Whitehead, P. G.; Baulch, H. M.; Woods, D.; Jin, L.; Oni, S. K.; Dillon, P. J.
2016-09-01
Uncertainty surrounding future climate makes it difficult to have confidence that current nutrient management strategies will remain effective. This study used monitoring and modeling to assess current effectiveness (% phosphorus reduction) and resilience (defined as continued effectiveness under a changing climate) of best management practices (BMPs) within five catchments of the Lake Simcoe watershed, Ontario. The Integrated Catchment Phosphorus model (INCA-P) was used, and monitoring data were used to calibrate and validate a series of management scenarios. To assess current BMP effectiveness, models were run over a baseline period 1985-2014 with and without management scenarios. Climate simulations were run (2070-2099), and BMP resilience was calculated as the percent change in effectiveness between the baseline and future period. Results demonstrated that livestock removal from water courses was the most effective BMP, while manure storage adjustments were the least. Effectiveness varied between catchments, influenced by the dominant hydrological and nutrient transport pathways. Resilience of individual BMPs was associated with catchment sensitivity to climate change. BMPs were most resilient in catchments with high soil water storage capacity and small projected changes in frozen-water availability and in soil moisture deficits. Conversely, BMPs were less resilient in catchments with larger changes in spring melt magnitude and in overland flow proportions. Results indicated that BMPs implemented are not always those most suited to catchment flow pathways, and a more site-specific approach would enhance prospects for maintaining P reduction targets. Furthermore, BMP resilience to climate change can be predicted from catchment physical properties and present-day hydrochemical sensitivity to climate forcing.
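The two headline quantities are simple ratios, which makes them easy to state precisely in code. The sketch below shows the arithmetic with invented phosphorus loads; it assumes, as described above, that effectiveness comes from running the calibrated model with and without the BMP scenario in each period.

```python
def effectiveness(load_without_bmp, load_with_bmp):
    """% phosphorus reduction attributable to the BMP."""
    return 100.0 * (load_without_bmp - load_with_bmp) / load_without_bmp

def resilience(eff_baseline, eff_future):
    """% change in effectiveness between baseline and future climate;
    negative values mean the BMP loses effectiveness."""
    return 100.0 * (eff_future - eff_baseline) / eff_baseline

# Illustrative loads (kg P/yr) for one catchment
eff_now    = effectiveness(1000.0, 700.0)   # 30% reduction, 1985-2014
eff_future = effectiveness(1200.0, 900.0)   # 25% reduction, 2070-2099
print(eff_now, eff_future, resilience(eff_now, eff_future))  # 30.0 25.0 -16.7
```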
Qi, Yi; Padiath, Ameena; Zhao, Qun; Yu, Lei
2016-10-01
The Motor Vehicle Emission Simulator (MOVES) quantifies emissions as a function of vehicle modal activities. Hence, the vehicle operating mode distribution is the most vital input for running MOVES at the project level. The preparation of operating mode distributions requires significant effort with respect to data collection and processing. This study develops operating mode distributions for both freeway and arterial facilities under different traffic conditions. For this purpose, we (1) collected and processed geographic information system (GIS) data, (2) developed a model of CO2 emissions and congestion from observations, and (3) implemented the model to evaluate potential emission changes in a hypothetical roadway accident scenario. This study presents a framework by which practitioners can assess emission levels when developing different strategies for traffic management and congestion mitigation. This paper prepares the primary input required for running MOVES, namely the operating mode ID distribution, and develops models for estimating emissions for different types of roadways under different congestion levels. The results of this study will provide transportation planners and environmental analysts with methods for qualitatively assessing the air quality impacts of different transportation operation and demand management strategies.
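Conceptually, building an operating mode distribution means classifying each second of vehicle activity by speed and vehicle-specific power (VSP) and tallying the time spent in each bin. The Python sketch below uses invented, coarse thresholds rather than the official MOVES operating mode IDs.

```python
from collections import Counter

def op_mode(speed_mph, vsp_kw_per_tonne):
    """Map one second of vehicle activity to a coarse operating-mode bin.
    The thresholds here are illustrative, not the official MOVES bins."""
    if speed_mph < 1.0:
        return "idle"
    if vsp_kw_per_tonne < 0.0:
        return "braking/deceleration"
    if speed_mph < 25.0:
        return "low-speed cruise/acceleration"
    if speed_mph < 50.0:
        return "mid-speed cruise/acceleration"
    return "high-speed cruise/acceleration"

# Hypothetical second-by-second (speed, VSP) trace
trace = [(0.0, 0.0), (10.0, 3.2), (30.0, 6.1), (55.0, 9.8), (28.0, -1.5)]
dist = Counter(op_mode(v, p) for v, p in trace)
total = sum(dist.values())
print({mode: n / total for mode, n in dist.items()})  # operating mode distribution
```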
NASA Technical Reports Server (NTRS)
1981-01-01
The modified CG2000 crystal grower construction, installation, and machine check-out were completed. The process development check-out proceeded with several dry runs and one growth run. Several machine calibration and functional problems were discovered and corrected. Several exhaust gas analysis system alternatives were evaluated, and an integrated system was approved and ordered. A contract presentation was made at the Project Integration Meeting at JPL, including cost projections using contract-projected throughput and machine parameters. Several growth runs on a development CG2000 RC grower show that complete neck, crown, and body automated growth can be achieved with only one operator input. Work continued on melt level, melt temperature, and diameter sensor development.
2014-01-17
CAPE CANAVERAL, Fla. – Members of the news media view the Project Morpheus prototype lander inside a hangar near the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. Speaking to the media, from left, are Jon Olansen, Morpheus project manager at Johnson Space Center in Houston, and Greg Gaddis, the Kennedy Morpheus and ALHAT site manager. Morpheus successfully completed its third free flight test Jan. 16. The 57-second test began at 1:15 p.m. EST with the Morpheus lander launching from the ground over a flame trench and ascending about 187 feet, nearly doubling the target ascent velocity from the last test in December 2013. The lander flew forward, covering about 154 feet in 20 seconds before descending and landing within 11 inches of its target on a dedicated pad inside the autonomous landing and hazard avoidance technology, or ALHAT, hazard field. Project Morpheus tests NASA’s ALHAT system and an engine that runs on liquid oxygen and methane, or green propellants, integrated into a fully operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://www.nasa.gov/centers/johnson/exploration/morpheus. Photo credit: NASA/Kim Shiflett
Convergence in France facing Big Data era and Exascale challenges for Climate Sciences
NASA Astrophysics Data System (ADS)
Denvil, Sébastien; Dufresne, Jean-Louis; Salas, David; Meurdesoif, Yann; Valcke, Sophie; Caubel, Arnaud; Foujols, Marie-Alice; Servonnat, Jérôme; Sénési, Stéphane; Derouillat, Julien; Voury, Pascal
2014-05-01
The presentation will introduce a French national project, CONVERGENCE, which has been funded for four years. This project tackles the big data and computational challenges faced by the climate modelling community in an HPC context. Model simulations are central to the study of complex mechanisms and feedbacks in the climate system and provide estimates of future and past climate changes. Recent trends in climate modelling are to add more physical components to the modelled system, to increase the resolution of each individual component, and to make more systematic use of large suites of simulations to address many scientific questions. Climate simulations may therefore differ in their initial state, parameter values, representation of physical processes, spatial resolution, model complexity, and degree of realism or idealisation. In addition, there is a strong need for evaluating, improving and monitoring the performance of climate models using a large ensemble of diagnostics, and for better integration of model outputs and observational data. High performance computing is currently reaching the exascale and has the potential to produce an exponential increase in the size and number of simulations. However, post-processing, analysis, and exploration of the generated data have stalled, and there is a strong need for new tools to cope with the growing size and complexity of the underlying simulations and datasets. Exascale simulations require new scalable software tools to generate, manage and mine those simulations and data, to extract the relevant information and to take the correct decisions. The primary purpose of this project is to develop a platform capable of running large ensembles of simulations with a suite of models, to handle the complex and voluminous datasets generated, to facilitate the evaluation and validation of the models, and to enable the use of higher-resolution models. We propose to gather interdisciplinary skills to design, using a component-based approach, a specific programming environment for scalable scientific simulations and analytics, integrating new and efficient ways of deploying and analysing the applications on High Performance Computing (HPC) systems. CONVERGENCE, gathering HPC and informatics expertise that cuts across the individual partners and the broader HPC community, will allow the national climate community to leverage information technology (IT) innovations to address its specific needs. Our methodology consists in developing an ensemble of generic elements needed to run the French climate models with different grids and different resolutions, ensuring efficient and reliable execution of these models, managing large volumes and numbers of data, and allowing analysis of the results and precise evaluation of the models. These elements include data structure definition and input-output (IO), code coupling and interpolation, as well as runtime and pre/post-processing environments. A common data and metadata structure will allow consistent information to be transferred between the various elements. All these generic elements will be open source and publicly available. The IPSL-CM and CNRM-CM climate models will make use of these elements, which will constitute a national platform for climate modelling. This platform will be used, in its entirety, to optimise and tune the next version of the IPSL-CM model and to develop a global coupled climate model with regional grid refinement.
It will also be used, at least partially, to run ensembles of the CNRM-CM model at relatively high resolution and to run a very-high-resolution prototype of this model. The climate models we develop are already involved in many international projects. For instance, we participate in the CMIP (Coupled Model Intercomparison Project) exercise, which is very demanding but highly visible: its results are widely used and are in particular synthesised in the IPCC (Intergovernmental Panel on Climate Change) assessment reports. The CONVERGENCE project will constitute an invaluable step for the French climate community in preparing for, and better contributing to, the next phase of CMIP.
Event visualisation in ALICE - current status and strategy for Run 3
NASA Astrophysics Data System (ADS)
Niedziela, Jeremi; von Haller, Barthélémy
2017-10-01
A Large Ion Collider Experiment (ALICE) is one of the four big experiments running at the Large Hadron Collider (LHC), which focuses on the study of the Quark-Gluon Plasma (QGP) being produced in heavy-ion collisions. The ALICE Event Visualisation Environment (AliEve) is a tool providing an interactive 3D model of the detector’s geometry and a graphical representation of the data. Together with the online reconstruction module, it provides important quality monitoring of the recorded data. As a consequence it has been used in the ALICE Run Control Centre during all stages of Run 2. Static screenshots from the online visualisation are published on the public website - ALICE LIVE. Dedicated converters have been developed to provide geometry and data for external projects. An example of such project is the Total Event Display (TEV) - a visualisation tool recently developed by the CERN Media Lab based on the Unity game engine. It can be easily deployed on any platform, including web and mobile platforms. Another external project is More Than ALICE - an augmented reality application for visitors, overlaying detector descriptions and event visualisations on the camera’s picture. For the future Run 3 both AliEve and TEV will be adapted to fit the ALICE O2 project. Several changes are required due to the new data formats, especially so-called Compressed Time Frames.
Theory of Constraints for Services: Past, Present, and Future
NASA Astrophysics Data System (ADS)
Ricketts, John A.
Theory of constraints (TOC) is a thinking process and a set of management applications based on principles that run counter to conventional wisdom. TOC is best known in the manufacturing and distribution sectors, where it originated. Awareness is growing in some service sectors, such as health care, and it has been adopted in some high-tech industries, such as computer software. Until recently, however, TOC was barely known in the Professional, Scientific, and Technical Services (PSTS) sector. Professional services include law, accounting, and consulting. Scientific services include research and development. Technical services include development, operation, and support of various technologies. The main reason TOC took longer to reach PSTS is that it is much harder to apply TOC principles when services are highly customized. Nevertheless, with the management applications described in this chapter, TOC has been successfully adapted for PSTS. Those applications cover management of resources, projects, processes, and finances.
Resource Tracking Model Updates and Trade Studies
NASA Technical Reports Server (NTRS)
Chambliss, Joe; Stambaugh, Imelda; Moore, Michael
2016-01-01
The Resource Tracking Model (RTM) has been updated to capture system manager and project manager inputs. Both the Trick/GUNNS RTM simulator and the RTM mass balance spreadsheet have been revised to address inputs from system managers and to refine the way mass balance is illustrated. The revisions to the RTM included the addition of a Plasma Pyrolysis Assembly (PPA) to recover hydrogen from Sabatier reactor methane, which was vented in the prior version of the RTM. The effect of the PPA on the overall balance of resources in an exploration vehicle is illustrated by the increased recycling of vehicle oxygen. Additionally, simulation of EVAs conducted from the exploration module was added. Since the focus of the exploration module is to provide a habitat during deep space operations, the EVA simulation is based on ISS EVA protocols and processes. Case studies have been run to show the relative effect of performance changes on vehicle resources.
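A flavor of this kind of mass-balance bookkeeping can be given with the Sabatier/PPA chemistry alone. The sketch below uses textbook stoichiometry and assumed metabolic rates (illustrative figures, not RTM values) to show why recovering hydrogen from methane improves closure of the oxygen/water loop.

```python
# Illustrative daily bookkeeping for a 4-person crew; rates are assumptions.
CREW = 4
o2_use_kg   = 0.84 * CREW    # metabolic O2 consumed per day (assumed rate)
co2_prod_kg = 1.04 * CREW    # CO2 exhaled per day (assumed rate)

# Sabatier: CO2 + 4 H2 -> CH4 + 2 H2O (mass ratios from molar masses)
h2o_from_sabatier = co2_prod_kg * (36.0 / 44.0)   # kg water recovered
ch4_kg            = co2_prod_kg * (16.0 / 44.0)   # kg methane produced

# A PPA-like step recovers H2 from the CH4 (previously vented), so less
# make-up hydrogen must be resupplied to keep the Sabatier reactor fed.
h2_recovered_kg = ch4_kg * (4.0 / 16.0)

print(f"O2 needed:           {o2_use_kg:.2f} kg/day")
print(f"H2O from Sabatier:   {h2o_from_sabatier:.2f} kg/day")
print(f"H2 recovered by PPA: {h2_recovered_kg:.2f} kg/day")
```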
Oblinger, Carolyn J.
2004-01-01
The Triangle Area Water Supply Monitoring Project was initiated in October 1988 to provide long-term water-quality data for six area water-supply reservoirs and their tributaries. In addition, the project provides data that can be used to determine the effectiveness of large-scale changes in water-resource management practices, to document differences in water quality among water-supply types (large multiuse reservoir, small reservoir, run-of-river), and to supply tributary-loading and in-lake data for water-quality modeling of Falls and Jordan Lakes. By September 2001, the project had progressed in four phases and included as many as 34 sites (in 1991). Most sites were sampled and analyzed by the U.S. Geological Survey. Some sites were already part of the North Carolina Division of Water Quality statewide ambient water-quality monitoring network and were sampled by the Division of Water Quality. The network has provided data on streamflow, physical properties, and concentrations of nutrients, major ions, metals, trace elements, chlorophyll, total organic carbon, suspended sediment, and selected synthetic organic compounds. Project quality-assurance activities include written procedures for sample collection, record management and archiving, collection of field quality-control samples (blank samples and replicate samples), and monitoring of the quality of field supplies. In addition to project quality-assurance activities, the quality of laboratory analyses was assessed through laboratory quality-assurance practices and an independent laboratory quality-control assessment provided by the U.S. Geological Survey Branch of Quality Systems through the Blind Inorganic Sample Project and the Organic Blind Sample Project.
2016-07-01
Milestones: all initial designs for Final Fab Run (Month 29); masks and wafers prepared for Final Fab Run (Month 30); start of Final Fab Run (Month 35); completion of ... Final Fab Run (Month 36); delivery of devices based on designs from other DEFYS performers. Because of momentum from efforts prior to the start of ... report (June 2016), our project is completed, with most tasks completed ahead of schedule. For example, the 3rd Fab Run started 5 months early and was ...
Using Group Research Projects to Stimulate Undergraduate Astronomy Major Learning
NASA Astrophysics Data System (ADS)
McGraw, Allison M.; Hardegree-Ullman, K. K.; Turner, J. D.; Shirley, Y. L.; Walker-LaFollette, A. M.; Robertson, A. N.; Carleton, T. M.; Smart, B. M.; Towner, A. P. M.; Wallace, S. C.; Smith, C. W.; Small, L. C.; Daugherty, M. J.; Guvenen, B. C.; Crawford, B. E.; Austin, C. L.; Schlingman, W. M.
2012-05-01
The University of Arizona Astronomy Club has been working on two large group research projects since 2009. One is a transiting extrasolar planet project that is fully student-led and student-run. We observed the transiting exoplanets TrES-3b and TrES-4b with the 1.55-meter Kuiper Telescope in near-UV and optical filters in order to detect any asymmetries between filters. The second project is a radio astronomy survey utilizing the Arizona Radio Observatory 12 m telescope on Kitt Peak to study molecular gas in cold cores identified by the Planck all-sky survey. This project provides a unique opportunity for a large group of students to get hands-on experience observing with a world-class radio observatory. These projects involve students in every step of the process, including proposal writing to obtain telescope time on various southern Arizona telescopes, observing at these telescopes, data reduction and analysis, managing large data sets, and presenting results at scientific meetings and in journal publications. The primary goal of these projects is to involve students in cutting-edge research early in their undergraduate studies. The projects are designed to be continuous, long-term projects so that new students can easily join. As of January 2012 the extrasolar planet project became an official independent study class. New students learn from the more experienced students on the projects, creating a learner-centered environment.
Phase 1 Development Report for the SESSA Toolkit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knowlton, Robert G.; Melton, Brad J; Anderson, Robert J.
The Site Exploitation System for Situational Awareness (SESSA) toolkit, developed by Sandia National Laboratories (SNL), is a comprehensive decision support system for crime scene data acquisition and Sensitive Site Exploitation (SSE). SESSA is an outgrowth of another SNL-developed decision support system, the Building Restoration Operations Optimization Model (BROOM), a hardware/software solution for data acquisition, data management, and data analysis. SESSA was designed to meet forensic crime scene needs as defined by the DoD's Military Criminal Investigation Organization (MCIO). SESSA is a very comprehensive toolkit with a considerable amount of database information managed through a Microsoft SQL (Structured Query Language) database engine, a Geographical Information System (GIS) engine that provides comprehensive mapping capabilities, and an intuitive Graphical User Interface (GUI). An electronic sketch pad module is included. The system also has the ability to efficiently generate the forms necessary for forensic crime scene investigations (e.g., evidence submittal, laboratory requests, and scene notes). SESSA allows the user to capture photos on site, and can read and generate barcode labels that limit transcription errors. SESSA runs on PC computers running Windows 7, but is optimized for touch-screen tablet computers running Windows for ease of use at crime scenes and on SSE deployments. A prototype system for 3-dimensional (3D) mapping and measurements was also developed to complement the SESSA software. The mapping system employs a visual/depth sensor that captures data to create 3D visualizations of an interior space and to make distance measurements with centimeter-level accuracy. Output of this 3D Model Builder module provides a virtual 3D "walk-through" of a crime scene. The 3D mapping system is much less expensive and easier to use than competitive systems. This document covers the basic installation and operation of the SESSA toolkit in order to give the user enough information to start using it. SESSA is currently a prototype system, and this documentation covers the initial release of the toolkit. Funding for SESSA was provided by the Department of Defense (DoD), Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) Rapid Fielding (RF) organization. The project was managed by the Defense Forensic Science Center (DFSC), formerly known as the U.S. Army Criminal Investigation Laboratory (USACIL). Special thanks to Mr. Garold Warner of DFSC, who served as the Project Manager. Individuals who worked on the design, functional attributes, algorithm development, system architecture, and software programming include Robert Knowlton, Brad Melton, Robert Anderson, and Wendy Amai.
The University and Manpower Educational Services: An Experimental and Demonstration Project.
ERIC Educational Resources Information Center
Williams, J. Earl
The goal of the Manpower Educational Services Project at the University of Houston was, in the short run, to explore using a university's capability and position in the community to contribute to the understanding and functioning of manpower programs in its geographic area. In the long run, it was hoped that a permanent center could be established…
Notes on a storage manager for the Clouds kernel
NASA Technical Reports Server (NTRS)
Pitts, David V.; Spafford, Eugene H.
1986-01-01
The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.
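The crash-survival property described above is classically achieved with a write-ahead log. The following minimal Python sketch illustrates the general idea only; the class and method names are invented for illustration and do not reflect the actual Clouds implementation.

```python
import json, os

class WALStore:
    """Toy key-value store whose updates survive a crash via a write-ahead log."""
    def __init__(self, log_path):
        self.log_path = log_path
        self.data = {}
        self._recover()

    def put(self, key, value):
        # Append the intent to the log and force it to disk first...
        with open(self.log_path, "a") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")
            log.flush()
            os.fsync(log.fileno())
        # ...and only then apply the change to the in-memory state.
        self.data[key] = value

    def _recover(self):
        # After a restart, replay the log to rebuild a consistent state.
        if os.path.exists(self.log_path):
            with open(self.log_path) as log:
                for line in log:
                    record = json.loads(line)
                    self.data[record["key"]] = record["value"]
```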
Characterizing Sub-Daily Flow Regimes: Implications of Hydrologic Resolution on Ecohydrology Studies
Bevelhimer, Mark S.; McManamay, Ryan A.; O'Connor, B.
2014-05-26
Natural variability in flow is a primary factor controlling geomorphic and ecological processes in riverine ecosystems. Within the hydropower industry, there is growing pressure from environmental groups and natural resource managers to change reservoir releases from daily peaking to run-of-river operations, on the assumption that downstream biological communities will improve under a more natural flow regime. In this paper, we discuss the importance of assessing sub-daily flows for understanding the physical and ecological dynamics within river systems. We present a variety of metrics for characterizing sub-daily flow variation and use these metrics to evaluate general trends among streams affected by peaking hydroelectric projects, run-of-river projects, and streams that are largely unaffected by flow-altering activities. Univariate and multivariate techniques were used to assess similarity among different stream types on the basis of these sub-daily metrics. For comparison, similar analyses were performed using analogous metrics calculated with mean daily flow values. Our results confirm that sub-daily flow metrics reveal variation among and within streams that is not captured by daily flow statistics. Using sub-daily flow statistics, we were able to quantify the degree of difference between unaltered and peaking streams and the amount of similarity between unaltered and run-of-river streams. The sub-daily statistics were largely uncorrelated with daily statistics of similar scope. Furthermore, on short temporal scales, sub-daily statistics reveal the relatively constant nature of unaltered stream reaches and the highly variable nature of hydropower-affected streams, whereas daily statistics show just the opposite over longer temporal scales.
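The paper's specific metrics are not listed in this abstract; as one plausible example of a sub-daily flow variation metric, the sketch below computes the Richards-Baker flashiness index on 15-minute flows, contrasting a steady run-of-river-like series with a peaking pattern.

```python
import numpy as np

def rb_flashiness(q):
    """Richards-Baker flashiness index: summed absolute changes between
    successive flows divided by the total flow volume."""
    q = np.asarray(q, dtype=float)
    return np.abs(np.diff(q)).sum() / q.sum()

# One day of 15-minute flows (96 values).
steady = np.full(96, 100.0)                                        # run-of-river-like
peaking = np.concatenate([np.full(48, 20.0), np.full(48, 300.0)])  # daily peaking
print(rb_flashiness(steady), rb_flashiness(peaking))               # 0.0 vs. a positive value
```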
The Practicalities of Crowdsourcing: Lessons from the Tea Bag Index - UK
NASA Astrophysics Data System (ADS)
Duddigan, Sarah; Alexander, Paul; Shaw, Liz; Collins, Chris
2017-04-01
The Tea Bag Index - UK is a collaborative project between the University of Reading and the Royal Horticultural Society (RHS), working with members of the gardening community as citizen scientists. This project aims to quantify how decomposition varies across the country, and whether decomposition is influenced by how gardeners manage their soil, particularly with respect to the application of compost. Launched in 2015 as part of a PhD project, the Tea Bag Index - UK project asks willing volunteers to bury tea bags in their gardens as part of a large-scale, litter-bag-style decomposition rate study. Over 450 sets of tea bags have been dispatched to participants across the length and breadth of the UK. The group was largely recruited via social media, magazine articles and public engagement events, and active discourse was maintained with these citizen scientists using Facebook, Twitter and regular email communication. In order to run a successful crowdsourcing citizen science project there are a number of stages that need to be considered, including (but not limited to): planning; launch and recruitment; communications; and feedback. Throughout a project of this nature an understanding of the motivations of your volunteers is vital. Reflecting on these motivations while publicising the project, and communicating regularly with its participants, is critically important for a successful project.
NASA Astrophysics Data System (ADS)
Görgen, K.; Pfister, L.
2008-12-01
The anticipated climate change will lead to modified hydro-meteorological regimes that influence the discharge behaviour and hydraulics of rivers. This has variable impacts on managed (anthropogenic) and unmanaged (natural) systems, depending on their sensitivity and vulnerability (ecology, economy, infrastructure, transport, energy production, water management, etc.). Decision makers in these contexts need adequate adaptation strategies to minimize adverse effects of climate change; improved knowledge of the potential impacts, including their uncertainties, extends the informed options open to users. The goal of the highly applied study presented here is the development of joint, consistent climate and discharge projections for the international Rhine River catchments (Switzerland, France, Germany, Netherlands) in order to assess future changes of hydro-meteorological regimes in the meso- and macroscale Rhine River catchments and to derive and improve the understanding of such impacts on hydrologic and hydraulic processes. The RheinBlick2050 project is an international effort initiated by the International Commission for the Hydrology of the Rhine Basin (CHR) in close cooperation with the International Commission for the Protection of the Rhine. The core experiment design foresees a data-synthesis, multi-model approach in which transient, bias-corrected regional climate change projections are used as forcing data for existing calibrated hydrological (and hydraulic) models at a daily temporal resolution over mesoscale catchments of the Rhine River. Mainly for validation purposes, hydro-meteorological observations from national weather services are compiled into a new consistent 5 km x 5 km reference dataset from 1961 to 2005. RCM data are used mainly from the ENSEMBLES project and other existing dynamical downscaling model runs to derive probabilistic ensembles and thereby also assess uncertainties on a regional scale. A benchmarking exercise helps to identify the atmospheric forcing data best suited to the subsequent hydrological model runs with the LARSIM and HBV models, and to evaluate those simulations as well. As a result, usable information and quantifiable statements (e.g. extreme value statistics, uncertainty assessment, validation) are to be derived that might form the basis for further planning or policy-relevant decisions. Our analyses are strongly shaped by the requirements of the potential users and stakeholders from government agencies who will make use of the data and results. Here we present first results of the application of the complete data processing and modelling chain towards discharge projections on a subset of input data, albeit still without any bias correction applied to the meteorological forcing data.
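As a hedged illustration of the bias-correction step mentioned above (RheinBlick2050's actual correction methods are not detailed in this abstract), the sketch below applies the simple delta-change approach: observations are scaled by the climate model's projected month-by-month relative change.

```python
import numpy as np

def delta_change(obs_baseline, rcm_baseline, rcm_future):
    """Scale observed monthly values by the RCM's projected relative change,
    computed separately for each calendar month."""
    factors = rcm_future.mean(axis=0) / rcm_baseline.mean(axis=0)  # shape (12,)
    return obs_baseline * factors

# Synthetic monthly precipitation arrays, shape (years, 12 months).
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 40.0, size=(30, 12))  # observed baseline
ctl = rng.gamma(2.0, 35.0, size=(30, 12))  # RCM control run
fut = rng.gamma(2.0, 30.0, size=(30, 12))  # RCM future run
scenario = delta_change(obs, ctl, fut)
print(obs.mean(), scenario.mean())  # the future scenario is drier here
```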
Cloudweaver: Adaptive and Data-Driven Workload Manager for Generic Clouds
NASA Astrophysics Data System (ADS)
Li, Rui; Chen, Lei; Li, Wen-Syan
Cloud computing denotes the latest trend in application development for parallel computing on massive data volumes. It relies on clouds of servers to handle tasks that used to be managed by an individual server. With cloud computing, software vendors can provide business intelligence and data analytic services for internet-scale data sets. Many open source projects, such as Hadoop, offer various software components that are essential for building a cloud infrastructure. Current Hadoop (and many other systems) requires users to configure the cloud infrastructure via programs and APIs, and such configuration is fixed during runtime. In this chapter, we propose a workload manager (WLM), called CloudWeaver, which provides automated configuration of a cloud infrastructure for runtime execution. The workload management is data-driven and can adapt to the dynamic nature of operator throughput during different execution phases. CloudWeaver works for a single job and for a workload consisting of multiple jobs running concurrently, aiming at maximum throughput using a minimum set of processors.
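To make the data-driven idea concrete, here is a toy rebalancing rule, invented for illustration rather than taken from CloudWeaver: workers are reassigned in inverse proportion to each pipeline stage's measured throughput, so bottleneck operators receive more capacity.

```python
def rebalance(throughput, total_workers):
    """Assign workers to stages in inverse proportion to measured throughput."""
    weights = {stage: 1.0 / max(t, 1e-9) for stage, t in throughput.items()}
    norm = sum(weights.values())
    # Rounding is naive here; a real scheduler must also respect the total.
    return {stage: max(1, round(total_workers * w / norm))
            for stage, w in weights.items()}

# Measured tuples/second per operator during the current execution phase.
throughput = {"scan": 900.0, "join": 120.0, "aggregate": 450.0}
print(rebalance(throughput, total_workers=16))  # the slow join gets most workers
```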
CloVR-Comparative: automated, cloud-enabled comparative microbial genome sequence analysis pipeline.
Agrawal, Sonia; Arze, Cesar; Adkins, Ricky S; Crabtree, Jonathan; Riley, David; Vangala, Mahesh; Galens, Kevin; Fraser, Claire M; Tettelin, Hervé; White, Owen; Angiuoli, Samuel V; Mahurkar, Anup; Fricke, W Florian
2017-04-27
The benefit of increasing genomic sequence data to the scientific community depends on easy-to-use, scalable bioinformatics support. CloVR-Comparative combines commonly used bioinformatics tools into an intuitive, automated, and cloud-enabled analysis pipeline for comparative microbial genomics. CloVR-Comparative runs on annotated complete or draft genome sequences that are uploaded by the user or selected via a taxonomic tree-based user interface and downloaded from NCBI. CloVR-Comparative runs reference-free multiple whole-genome alignments to determine unique, shared and core coding sequences (CDSs) and single nucleotide polymorphisms (SNPs). Output includes short summary reports and detailed text-based results files, graphical visualizations (phylogenetic trees, circular figures), and a database file linked to the Sybil comparative genome browser. Data upload and download, pipeline configuration and monitoring, and access to Sybil are managed through the CloVR-Comparative web interface. CloVR-Comparative and Sybil are distributed as part of the CloVR virtual appliance, which runs on local computers or the Amazon EC2 cloud. Representative datasets (e.g. 40 draft and complete Escherichia coli genomes) are processed in <36 h on a local desktop or at a cost of <$20 on EC2. CloVR-Comparative allows anybody with Internet access to run comparative genomics projects, while eliminating the need for on-site computational resources and expertise.
Idaho Habitat/Natural Production Monitoring Part I, 1995 Annual Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall-Griswold, J.A.; Petrosky, C.E.
The Idaho Department of Fish and Game (IDFG) has been monitoring trends in juvenile spring and summer chinook salmon, Oncorhynchus tshawytscha, and steelhead trout, O. mykiss, populations in the Salmon, Clearwater, and lower Snake River drainages for the past 12 years. This work is the result of a program to protect, mitigate, and enhance fish and wildlife affected by the development and operation of hydroelectric power plants on the Columbia River. Project 91-73, Idaho Natural Production Monitoring, consists of two subprojects: General Monitoring and Intensive Monitoring. This report updates and summarizes data through 1995 for the General Parr Monitoring (GPM) database to document status and trends of classes of wild and natural chinook salmon and steelhead trout populations. A total of 281 stream sections were sampled in 1995 to monitor trends in spring and summer chinook salmon Oncorhynchus tshawytscha and steelhead trout O. mykiss parr populations in Idaho. Percent carrying capacity and density estimates were summarized for 1985-1995 by different classes of fish: wild A-run steelhead trout, wild B-run steelhead trout, natural A-run steelhead trout, natural B-run steelhead trout, wild spring and summer chinook salmon, and natural spring and summer chinook salmon. The 1995 data were also summarized by subbasins as defined in Idaho Department of Fish and Game's 1992-1996 Anadromous Fish Management Plan.
NASA Astrophysics Data System (ADS)
Turner, M. A.; Miller, S.; Gregory, A.; Cadol, D. D.; Stone, M. C.; Sheneman, L.
2016-12-01
We present the Coupled RipCAS-DFLOW (CoRD) modeling system created to encapsulate the workflow for analyzing the effects of stream flooding on vegetation succession. CoRD provides an intuitive command-line and web interface to run DFLOW and RipCAS in succession over many years automatically, which is a challenge because, for our application, DFLOW must be run on a supercomputing cluster via the PBS job scheduler. RipCAS is a vegetation succession model, and DFLOW is a 2D open channel flow model. Data adaptors have been developed to seamlessly convert DFLOW outputs into RipCAS inputs, and vice versa. CoRD provides automated statistical analysis and visualization, plus automatic syncing of input and output files and model run metadata to the hydrological data management system HydroShare using its Python REST client. This combination of technologies and data management techniques allows the results to be shared with collaborators and eventually published. Perhaps most importantly, it allows results to be easily reproduced via either the command-line or web user interface. This system is a result of collaboration between software developers and hydrologists participating in the Western Consortium for Watershed Analysis, Visualization, and Exploration (WC-WAVE). Because of the computing-intensive nature of this particular workflow, including automating job submission/monitoring and data adaptors, software engineering expertise is required. However, the hydrologists provide the software developers with a purpose and ensure a useful, intuitive tool is developed. Our hydrologists contribute software, too: RipCAS was developed from scratch by hydrologists on the team as a specialized, open-source version of the Computer Aided Simulation Model for Instream Flow and Riparia (CASiMiR) vegetation model; our hydrologists running DFLOW provided numerous examples and help with the supercomputing system. This project is written in Python, a popular language in the geosciences and a good beginner programming language, and is completely open source. It can be accessed at https://github.com/VirtualWatershed/CoRD with documentation available at http://virtualwatershed.github.io/CoRD. These facts enable continued development and use beyond the involvement of the current authors.
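The two automated steps named above, PBS job submission and HydroShare syncing, can be sketched as follows. This is an illustrative outline, not CoRD's actual code: the script name, file names, and credentials are placeholders, and only the hs_restclient calls shown (HydroShare, HydroShareAuthBasic, createResource) are assumed from that package's documented interface.

```python
import subprocess
from hs_restclient import HydroShare, HydroShareAuthBasic

# Submit a DFLOW run through the PBS scheduler and capture the job id.
result = subprocess.run(["qsub", "dflow_run.pbs"],
                        capture_output=True, text=True, check=True)
print("submitted PBS job", result.stdout.strip())

# Once the run finishes, push outputs and metadata to HydroShare.
auth = HydroShareAuthBasic(username="user", password="password")
hs = HydroShare(auth=auth)
resource_id = hs.createResource("GenericResource",
                                "CoRD model run: vegetation succession outputs",
                                resource_file="ripcas_vegetation_map.asc")
print("created HydroShare resource", resource_id)
```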
NASA Astrophysics Data System (ADS)
Peel, M. C.; Srikanthan, R.; McMahon, T. A.; Karoly, D. J.
2015-04-01
Two key sources of uncertainty in projections of future runoff for climate change impact assessments are uncertainty between global climate models (GCMs) and within a GCM. Within-GCM uncertainty is the variability in GCM output that occurs when running a scenario multiple times, with each run having slightly different, but equally plausible, initial conditions. The limited number of runs available for each GCM and scenario combination within the Coupled Model Intercomparison Project phase 3 (CMIP3) and phase 5 (CMIP5) data sets limits the assessment of within-GCM uncertainty. In this second of two companion papers, the primary aim is to present a proof-of-concept approximation of within-GCM uncertainty for monthly precipitation and temperature projections and to assess the impact of within-GCM uncertainty on modelled runoff for climate change impact assessments. A secondary aim is to assess the impact of between-GCM uncertainty on modelled runoff. Here we approximate within-GCM uncertainty by developing non-stationary stochastic replicates of GCM monthly precipitation and temperature data. These replicates are input to an off-line hydrologic model to assess the impact of within-GCM uncertainty on projected annual runoff and reservoir yield. We adopt stochastic replicates of available GCM runs to approximate within-GCM uncertainty because large ensembles, hundreds of runs, for a given GCM and scenario are unavailable, other than the Climateprediction.net data set for the Hadley Centre GCM. To date, within-GCM uncertainty has received little attention in the hydrologic climate change impact literature and this analysis provides an approximation of the uncertainty in projected runoff, and reservoir yield, due to within- and between-GCM uncertainty of precipitation and temperature projections. In the companion paper, McMahon et al. (2015) sought to reduce between-GCM uncertainty by removing poorly performing GCMs, resulting in a selection of five better performing GCMs from CMIP3 for use in this paper. Here we present within- and between-GCM uncertainty results in mean annual precipitation (MAP), mean annual temperature (MAT), mean annual runoff (MAR), the standard deviation of annual precipitation (SDP), standard deviation of runoff (SDR) and reservoir yield for five CMIP3 GCMs at 17 worldwide catchments. Based on 100 stochastic replicates of each GCM run at each catchment, within-GCM uncertainty was assessed in relative form as the standard deviation expressed as a percentage of the mean of the 100 replicate values of each variable. The average relative within-GCM uncertainties from the 17 catchments and 5 GCMs for 2015-2044 (A1B) were MAP 4.2%, SDP 14.2%, MAT 0.7%, MAR 10.1% and SDR 17.6%. The Gould-Dincer Gamma (G-DG) procedure was applied to each annual runoff time series for hypothetical reservoir capacities of 1 × MAR and 3 × MAR, and the average uncertainties in reservoir yield due to within-GCM uncertainty from the 17 catchments and 5 GCMs were 25.1% (1 × MAR) and 11.9% (3 × MAR). Our approximation of within-GCM uncertainty is expected to be an underestimate due to not replicating the GCM trend. However, our results indicate that within-GCM uncertainty is important when interpreting climate change impact assessments. Approximately 95% of values of MAP, SDP, MAT, MAR, SDR and reservoir yield from 1 × MAR or 3 × MAR capacity reservoirs are expected to fall within twice their respective relative uncertainty (standard deviation/mean).
Within-GCM uncertainty has significant implications for interpreting climate change impact assessments that report future changes within our range of uncertainty for a given variable - these projected changes may be due solely to within-GCM uncertainty. Since within-GCM variability is amplified from precipitation to runoff and then to reservoir yield, climate change impact assessments that do not take into account within-GCM uncertainty risk providing water resources management decision makers with a sense of certainty that is unjustified.
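The paper's relative-uncertainty measure, the standard deviation expressed as a percentage of the mean across 100 stochastic replicates, is simple to reproduce; the sketch below applies it to synthetic mean annual runoff replicates (the numbers are invented for illustration).

```python
import numpy as np

def relative_uncertainty(replicates):
    """Standard deviation as a percentage of the mean across replicates."""
    replicates = np.asarray(replicates, dtype=float)
    return 100.0 * replicates.std(ddof=1) / replicates.mean()

# 100 stochastic replicates of mean annual runoff (mm) for one GCM/catchment.
rng = np.random.default_rng(42)
mar_replicates = rng.normal(loc=500.0, scale=50.0, size=100)
print(f"within-GCM uncertainty in MAR: {relative_uncertainty(mar_replicates):.1f}%")
```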
Run-of-river power plants in Alpine regions: whither optimal capacity?
NASA Astrophysics Data System (ADS)
Lazzaro, Gianluca; Botter, Gianluca
2015-04-01
Hydropower is the major renewable electricity generation technology worldwide. The future expansion of this technology mostly relies on the development of small run-of-river projects, in which a fraction of the running flows is diverted from the river to a turbine for energy production. Even though small hydro inflicts a smaller impact on aquatic ecosystems and local communities compared to large dams, it cannot prevent stresses on plant, animal, and human well-being. This is especially true in mountain regions where the plant outflow is located several kilometers downstream of the intake, thereby inducing the depletion of river reaches of considerable length. Moreover, the negative cumulative effects of run-of-river systems operating along the same river threaten the ability of stream networks to supply ecological corridors for plants, invertebrates or fishes, and support biodiversity. Research in this area is severely lacking. Therefore, the prediction of the long-term impacts associated with the expansion of run-of-river projects induced by global-scale incentive policies remains highly uncertain. This contribution aims at providing objective tools to address the preliminary choice of the capacity of a run-of-river hydropower plant when the economic value of the plant and the alteration of the flow regime are simultaneously accounted for. This is done using the concepts of Pareto-optimality and Pareto-dominance, which are powerful tools suited to multi-objective optimization in the presence of conflicting goals, such as the maximization of profitability and the minimization of the hydrologic disturbance induced by the plant in the river reach between the intake and the outflow. The application to a set of case studies belonging to the Piave River basin (Italy) suggests that optimal solutions are strongly dependent on the natural flow regime at the plant intake. While in some cases (namely, reduced streamflow variability) the optimal trade-off between economic profitability and hydrologic disturbance is well identified, in other cases (enhanced streamflow variability) multiple options and/or ranges of optimal capacities may be devised. Such alternatives offer water managers an objective basis for identifying the optimal allocation of resources and policy actions. Small hydro technology is likely to gain a higher social value in the next decades if the environmental and hydrologic footprint associated with the energy exploitation of surface water takes higher priority in civil infrastructure planning.
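Pareto dominance itself is easy to state in code. The sketch below, with invented candidate capacities and objective values, keeps exactly those capacities for which no alternative is at least as profitable and at least as gentle on the flow regime, with one strict improvement.

```python
def pareto_front(options):
    """options: (capacity, profit, disturbance) tuples; profit is maximized,
    disturbance minimized. Return the non-dominated trade-offs."""
    front = []
    for cap, profit, dist in options:
        dominated = any(p >= profit and d <= dist and (p > profit or d < dist)
                        for _, p, d in options)
        if not dominated:
            front.append((cap, profit, dist))
    return front

# Capacity (m^3/s), normalized profit, normalized hydrologic disturbance.
candidates = [(0.5, 1.0, 0.10), (1.0, 1.6, 0.25), (2.0, 1.9, 0.55),
              (2.5, 1.8, 0.70), (3.0, 2.0, 0.90)]
print(pareto_front(candidates))  # (2.5, 1.8, 0.70) is dominated and dropped
```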
Domain Specific Language Support for Exascale. Final Project Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baden, Scott
The project developed a domain-specific translator that enables legacy MPI source code to tolerate communication delays, which are increasing over time due to technological factors. The translator performs source-to-source translation that incorporates semantic information into the translation process. The output of the translator is a C program that runs as a data-driven program and uses an existing run time to overlap communication automatically.
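The transformation's effect can be mimicked by hand. The Python/mpi4py sketch below is an analogue of the translated C output, not the translator's actual product: a blocking exchange is replaced with nonblocking calls so that independent computation overlaps the communication.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
peer = (rank + 1) % size

send_buf = np.full(1000, rank, dtype="d")
recv_buf = np.empty(1000, dtype="d")

# Post the communication early...
requests = [comm.Isend(send_buf, dest=peer),
            comm.Irecv(recv_buf, source=peer)]
# ...do work that does not depend on recv_buf while messages are in flight...
local_sum = np.sin(send_buf).sum()
# ...and wait only at the point where the received data is actually needed.
MPI.Request.Waitall(requests)
print(rank, local_sum, recv_buf[0])
```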
A Customizable Dashboarding System for Watershed Model Interpretation
NASA Astrophysics Data System (ADS)
Easton, Z. M.; Collick, A.; Wagena, M. B.; Sommerlot, A.; Fuka, D.
2017-12-01
Stakeholders, including policymakers, agricultural water managers, and small farm managers, can benefit from the outputs of commonly run watershed models. However, the information that each stakeholder needs is different. While policymakers are often interested in the broader effects that small farm management may have on a watershed during extreme events or over long periods, farmers are often interested in field-specific effects at daily or seasonal scales. To provide stakeholders with the ability to analyze and interpret data from large-scale watershed models, we have developed a framework that supports custom exploration of the large datasets produced. For the volume of data produced by these models, SQL-based data queries are not efficient; thus, we employ a "Not Only SQL" (NoSQL) approach, which allows data to scale in both quantity and query volume. We demonstrate a stakeholder-customizable Dashboarding system that allows stakeholders to create custom 'dashboards' to summarize model output specific to their needs. Dashboarding is a dynamic, purpose-based visual interface for displaying one-to-many database linkages, so that information can be presented for a single time period or dynamically monitored over time, and it allows a user to quickly define focus areas of interest for their analysis. We utilize a single watershed model that is run four times daily with a combined set of climate projections, which are then indexed and added to an Elasticsearch datastore. Elasticsearch is a NoSQL search engine built on top of Apache Lucene, a free and open-source information retrieval software library. Aligned with the Elasticsearch project is the open-source visualization and analysis system Kibana, which we utilize for custom stakeholder dashboarding. The dashboards create a visualization of the stakeholder-selected analysis and can be extended to recommend robust strategies to support decision-making.
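As a hedged sketch of the indexing step (the index name and field names are invented; the call uses the standard elasticsearch Python client), one daily model output record might be pushed to Elasticsearch like this, after which Kibana dashboards can query and visualize it.

```python
from datetime import datetime
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# One record of daily watershed-model output for one subbasin and scenario.
doc = {
    "timestamp": datetime(2017, 6, 1).isoformat(),
    "subbasin_id": 42,
    "streamflow_cms": 3.7,
    "sediment_load_t": 0.8,
    "climate_scenario": "rcp85",
}
es.index(index="watershed-model-output", document=doc)
```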
Space Communications Emulation Facility
NASA Technical Reports Server (NTRS)
Hill, Chante A.
2004-01-01
Establishing space communication between ground facilities and satellites is a painstaking task that requires many precise calculations dealing with relay time, atmospheric conditions, and satellite positions, to name a few. The Space Communications Emulation Facility (SCEF) team here at NASA is developing a facility that will approximately emulate the conditions in space that impact space communication. The emulation facility is comprised of a 32-node distributed cluster of computers, each node representing a satellite or ground station. The objective of the satellites is to observe the topography of the Earth (water, vegetation, land, and ice) and relay this information back to the ground stations. Software originally designed by the University of Kansas, labeled the Emulation Manager, controls the interaction of the satellites and ground stations, as well as the recording of data. The Emulation Manager is installed on a Linux operating system and employs both Java and C++ code. The emulation scenarios are written in Extensible Markup Language (XML). XML documents are designed to store, carry, and exchange data. With XML, data can be exchanged between incompatible systems, which makes it ideal for this project because Linux, Mac and Windows operating systems are all used. XML documents cannot, however, display data the way HTML documents can. Therefore, the SCEF team uses XML Schema Definition (XSD) files, or schemas, to describe the structure of an XML document. Schemas are very important because they have the capability to validate the correctness of data, define restrictions on data, define data formats, and convert data between different data types, among other things. At this time, in order for the Emulation Manager to open and run an XML emulation scenario file, the user must first establish a link between the schema file and the directory under which the XML scenario files are saved. This procedure takes place on the command line of the Linux operating system. Once this link has been established, the Emulation Manager validates all the XML files in that directory against the schema file before the actual scenario is run. Using sophisticated commercial software called the Satellite Tool Kit (STK), installed on the Linux box, the Emulation Manager is able to display the data and graphics generated by the execution of an XML emulation scenario file. The Emulation Manager software is written in Java. Since the SCEF project is in the developmental stage, the source code for this software is being modified to better fit the requirements of the SCEF project. Some parameters for the emulation are hard-coded, set at fixed values. Members of the SCEF team are altering the code to allow the user to choose the values of these hard-coded parameters by inserting a toolbar onto the preexisting GUI.
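The validate-before-run step described above is straightforward to illustrate. This sketch uses Python's lxml library rather than the Emulation Manager's Java code, and the schema and directory paths are placeholders.

```python
from pathlib import Path
from lxml import etree

# Load the schema once, then check every scenario file against it.
schema = etree.XMLSchema(etree.parse("scenario.xsd"))

for xml_file in Path("scenarios").glob("*.xml"):
    doc = etree.parse(str(xml_file))
    if schema.validate(doc):
        print(f"{xml_file.name}: valid, ready to run")
    else:
        print(f"{xml_file.name}: INVALID - {schema.error_log.last_error}")
```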
A report upon the Grand Coulee Fish Maintenance Project 1939-1947
Fish, F.F.; Hanavan, Mitchell G.
1948-01-01
The construction of Grand Coulee Dam, on the upper Columbia River, involved the loss of 1,140 lineal miles of spawning and rearing stream for the production of anadromous fishes. The fact that the annual value of these fish runs to the nation was estimated at $250,000 justified reasonable expenditures to assure their perpetuation. It was found economically infeasible to safely collect and pass adult fish upstream and fingerling fish downstream at the dam because of the tremendous flow of the river and the 320-foot vertical difference in elevation between forebay and tailrace. The Grand Coulee Fish-Maintenance Project, undertaken by the United States Fish and Wildlife Service in 1939, consisted in relocating the anadromous runs of the upper Columbia River to four major tributaries entering below the Grand Coulee damsite. These streams were believed capable of supporting several times their existing, badly depleted, runs. The plan was predicated upon the assumption that the relocated runs, in conformity with their "homing tendency", would return to the lower tributaries rather than attempt to reach their ancestral spawning grounds above Grand Coulee Dam. This interim report covers the history and accomplishments of the Grand Coulee Fish-Maintenance Project through the initial period of relocating the runs as well as the first four years of the permanent program. Results obtained to date indicate conclusive success in diverting the upper Columbia fish runs into the accessible lower tributaries. The results also indicate, less conclusively, that - in spite of many existing handicaps - the upper Columbia salmon and steelhead runs may be rehabilitated through the integrated program of natural and artificial propagation incorporated in the Grand Coulee Fish-Maintenance Project.
Shafer, S.L.; Atkins, J.; Bancroft, B.A.; Bartlein, P.J.; Lawler, J.J.; Smith, B.; Wilsey, C.B.
2012-01-01
The responses of species and ecosystems to future climate changes will present challenges for conservation and natural resource managers attempting to maintain both species populations and essential habitat. This report describes projected future changes in climate and vegetation for three study areas surrounding the military installations of Fort Benning, Georgia, Fort Hood, Texas, and Fort Irwin, California. Projected climate changes are described for the time period 2070–2099 (30-year mean) as compared to 1961–1990 (30-year mean) for each study area using data simulated by the coupled atmosphere-ocean general circulation models CCSM3, CGCM3.1(T47), and UKMO-HadCM3, run under the B1, A1B, and A2 future greenhouse gas emissions scenarios. These climate data are used to simulate potential changes in important components of the vegetation for each study area using LPJ, a dynamic global vegetation model, and LPJ-GUESS, a dynamic vegetation model optimized for regional studies. The simulated vegetation results are compared with observed vegetation data for the study areas. Potential effects of the simulated future climate and vegetation changes for species and habitats of management concern are discussed in each study area, with a particular focus on federally listed threatened and endangered species.
1985-01-01
The US Agency for International Development (AID) has discontinued its contraceptive social marketing project in Ecuador after 2 1/2 years without a sale. USAID had awarded a 3-year US$1.2 million grant to the program's contractor, the John Snow Public Health Group Inc. The project was run by Ecuador's national family planning association. This is only the 3rd time USAID has terminated a social marketing program since entering this field in 1973. Impediments to the program's operation included product price hikes and supply shortages as a result of the inflation and currency devaluation in Ecuador in recent years. Government opposition to the sales of donated contraceptive supplies further set back the program. The name chosen for the condom distributed by the program, Liber, had to be changed since a company importing sanitary napkins was using the name Liberty and objected. The program's peculiar organizational structure is also considered to have played a role in the program's failure. Rather than having a single authority responsible for the program, a 2-headed organizational design was used. Program funds were controlled by the contractor, but the family planning organization managed day-to-day operations. Unified management has enabled programs in other countries to survive problems such as inflation, brand registration, and product and price approvals.
Biermann, Martin
2014-04-01
Clinical trials aiming for regulatory approval of a therapeutic agent must be conducted according to Good Clinical Practice (GCP). Clinical Data Management Systems (CDMS) are specialized software solutions geared toward GCP trials. They are, however, less suited for data management in small non-GCP research projects. For use in researcher-initiated non-GCP studies, we developed a client-server database application based on the public domain CakePHP framework. The underlying MySQL database uses a simple data model based on only five data tables. The graphical user interface can be run in any web browser inside the hospital network. Data are validated upon entry. Data contained in external database systems can be imported interactively. Data are automatically anonymized on import, with the key lists identifying the subjects logged to a restricted part of the database. Data analysis is performed by separate statistics and analysis software connecting to the database via a generic Open Database Connectivity (ODBC) interface. Since its first pilot implementation in 2011, the solution has been applied to seven different clinical research projects covering different clinical problems in different organ systems such as cancer of the thyroid and the prostate glands. This paper shows how the adoption of a generic web application framework is a feasible, flexible, low-cost, and user-friendly way of managing multidimensional research data in researcher-initiated non-GCP clinical projects. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
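The ODBC-based analysis path described above might look like the following from the Python side; the DSN, credentials, table, and column names are placeholders invented for the sketch, and any ODBC-capable statistics package could play the same role.

```python
import pyodbc

# Connect through a preconfigured ODBC data source name (DSN).
conn = pyodbc.connect("DSN=studydb;UID=analyst;PWD=secret")
cursor = conn.cursor()

# Pull anonymized measurements for one research project.
cursor.execute(
    "SELECT subject_code, visit_date, psa_ng_ml "
    "FROM measurements WHERE project_id = ?", 7)
for row in cursor.fetchall():
    print(row.subject_code, row.visit_date, row.psa_ng_ml)
conn.close()
```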
3MdB: the Mexican Million Models database
NASA Astrophysics Data System (ADS)
Morisset, C.; Delgado-Inglada, G.
2014-10-01
The 3MdB is an original effort to construct a large multipurpose database of photoionization models. It is a more modern version of a previous attempt based on Cloudy3D and IDL tools, and it is accessed via MySQL requests. The models are obtained using the well-known and widely used Cloudy photoionization code (Ferland et al., 2013). The database is intended to host grids of models, with different references to identify each project and to facilitate the extraction of the desired data. We present here a description of the way the database is managed and some of the projects that use 3MdB. Anybody can ask for a grid to be run and stored in 3MdB, to increase the visibility of the grid and its potential side applications.
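Since the database is reached through MySQL requests, a typical extraction could look like the sketch below. The host, credentials, table, and column names here are placeholders patterned on the description above, not the live 3MdB schema.

```python
import pymysql

# Connect to the (placeholder) 3MdB MySQL server.
db = pymysql.connect(host="3mdb.example.org", user="guest",
                     password="guest", database="3MdB")
with db.cursor() as cur:
    # Pull two predicted emission-line intensities for one project's grid.
    cur.execute(
        "SELECT OIII_5007, NII_6584 FROM models WHERE ref = %s LIMIT 5",
        ("PNe_2014",))
    for row in cur.fetchall():
        print(row)
db.close()
```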
Morpheus Alhat Integrated and Laser Test
2014-03-21
CAPE CANAVERAL, Fla. – Engineers run an automated landing and hazard avoidance technology, or ALHAT, and laser test on the Project Morpheus prototype lander at a new launch site at the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. The launch pad was moved to a different location at the landing facility to support the next phase of flight testing. The seventh free flight test of Morpheus occurred on March 11. The 83-second test began at 3:41 p.m. EDT with the Morpheus lander launching from the ground over a flame trench and ascending to 580 feet. Morpheus then flew its fastest downrange trek at 30 mph, travelling farther than before, 837 feet. The lander performed a 42-foot divert to emulate a hazard avoidance maneuver before descending and touching down on Landing Site 2, at the northern landing pad inside the ALHAT hazard field. Morpheus landed within one foot of its intended target. Project Morpheus integrates NASA's ALHAT and an engine that runs on liquid oxygen and methane, or green propellants, into a fully operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus' ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA's Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Kim Shiflett
Principles and application of LIMS in mouse clinics.
Maier, Holger; Schütt, Christine; Steinkamp, Ralph; Hurt, Anja; Schneltzer, Elida; Gormanns, Philipp; Lengger, Christoph; Griffiths, Mark; Melvin, David; Agrawal, Neha; Alcantara, Rafael; Evans, Arthur; Gannon, David; Holroyd, Simon; Kipp, Christian; Raj, Navis Pretheeba; Richardson, David; LeBlanc, Sophie; Vasseur, Laurent; Masuya, Hiroshi; Kobayashi, Kimio; Suzuki, Tomohiro; Tanaka, Nobuhiko; Wakana, Shigeharu; Walling, Alison; Clary, David; Gallegos, Juan; Fuchs, Helmut; de Angelis, Martin Hrabě; Gailus-Durner, Valerie
2015-10-01
Large-scale systemic mouse phenotyping, as performed by mouse clinics for more than a decade, requires thousands of mice from a multitude of different mutant lines to be bred, individually tracked and subjected to phenotyping procedures according to a standardised schedule. All these efforts are typically organised in overlapping projects, running in parallel. In terms of logistics, data capture, data analysis, result visualisation and reporting, new challenges have emerged from such projects. These challenges could hardly be met with traditional methods such as pen & paper colony management, spreadsheet-based data management and manual data analysis. Hence, different Laboratory Information Management Systems (LIMS) have been developed in mouse clinics to facilitate or even enable mouse and data management in the described order of magnitude. This review shows that general principles of LIMS can be empirically deduced from LIMS used by different mouse clinics, although these have evolved differently. Supported by LIMS descriptions and lessons learned from seven mouse clinics, this review also shows that the unique LIMS environment in a particular facility strongly influences strategic LIMS decisions and LIMS development. As a major conclusion, this review states that there is no universal LIMS for the mouse research domain that fits all requirements. Still, empirically deduced general LIMS principles can serve as a master decision support template, which is provided as a hands-on tool for mouse research facilities looking for a LIMS.
GeoDataspaces: Simplifying Data Management Tasks with Globus
NASA Astrophysics Data System (ADS)
Malik, T.; Chard, K.; Tchoua, R. B.; Foster, I.
2014-12-01
Data and its management are central to the modern scientific enterprise. Typically, geoscientists rely on observations and model output data from several disparate sources (file systems, RDBMS, spreadsheets, remote data sources). Integrated data management solutions that provide intuitive semantics and uniform interfaces, irrespective of the kind of data source, are however lacking. Consequently, geoscientists are left to conduct low-level, time-consuming data management tasks individually and repeatedly for each data source, often resulting in handling errors. In this talk we will describe how the EarthCube GeoDataspace project is improving this situation for seismologists, hydrologists, and space scientists by simplifying some of the existing data management tasks that arise when developing computational models. We will demonstrate a GeoDataspace bootstrapped with "geounits", which are self-contained metadata packages that provide a complete description of all data elements associated with a model run, including input/output and parameter files, the model executable, and any associated libraries. Geounits link raw and derived data, as well as associating provenance information describing how data were derived. We will discuss challenges in establishing geounits and describe machine learning and human annotation approaches that can be used for extracting and associating ad hoc and unstructured scientific metadata hidden in binary formats with data resources and models. We will show how geounits can improve search and discoverability of data associated with model runs. To support this model, we will describe efforts toward creating a scalable metadata catalog that helps to maintain, search and discover geounits within the Globus network of accessible endpoints. This talk will focus on the issue of creating comprehensive personal inventories of data assets for computational geoscientists, and describe a publishing mechanism which can feed into national, international, or thematic discovery portals.
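One plausible in-code rendering of a geounit, with field names invented for illustration rather than taken from the project, is a small self-describing record tying together everything that belongs to a single model run:

```python
from dataclasses import dataclass, field

@dataclass
class GeoUnit:
    """Self-contained metadata package for one model run (illustrative)."""
    model_executable: str
    input_files: list = field(default_factory=list)
    output_files: list = field(default_factory=list)
    parameter_files: list = field(default_factory=list)
    libraries: list = field(default_factory=list)
    provenance: dict = field(default_factory=dict)  # how outputs were derived

run = GeoUnit(
    model_executable="bin/seismic_model",
    input_files=["velocity_model.nc"],
    output_files=["waveforms.h5"],
    parameter_files=["run.cfg"],
    provenance={"command": "bin/seismic_model run.cfg",
                "derived_from": ["velocity_model.nc"]},
)
print(run)
```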
77 FR 62378 - Supervisory and Company-Run Stress Test Requirements for Covered Companies
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-12
... to Both Supervisory and Company-Run Stress Tests The Board designed the final rule in a manner to... to conduct supervisory stress test; and project a company's losses, pre-provision net revenue...-Run Stress Test Requirements; Final Rules. Federal Register / Vol. 77, No. 198 / Friday, October...
SFB754 - data management in large interdisciplinary collaborative research projects: what matters?
NASA Astrophysics Data System (ADS)
Mehrtens, Hela; Springer, Pina; Schirnick, Carsten; Schelten, Christiane K.
2016-04-01
Data management for the SFB 754 is an integral part of the joint data management team at GEOMAR Helmholtz Centre for Ocean Research Kiel, a cooperation of the Cluster of Excellence "Future Ocean", the SFB 754 and other current and former nationally and EU-funded projects. The coalition successfully established one common data management infrastructure for marine sciences in Kiel. It aims to help researchers better document the data lifecycle from acquisition to publication and share their results already during the project phase. The infrastructure is continuously improved by integrating standard tools and developing extensions in close cooperation with scientists, data centres and other research institutions. Open and frequent discussion of data management topics during SFB 754 meetings and seminars and efficient cooperation with its coordination office allowed the gradual establishment of better data management practices. Furthermore, a data policy was agreed on to ensure proper usage of data sets, even unpublished ones; it schedules data upload and dissemination and enforces long-term public availability of the research outcome. Acceptance of the infrastructure is also backed by easy usage of the web-based platform for data set documentation and exchange among all research disciplines of the SFB 754. Members of the data management team act as data curators and assist in data publication in World Data Centres (e.g. PANGAEA). Cooperation with world data centres makes the research data globally searchable and accessible, while links to the data producers ensure citability and provide points of contact for the scientific community. A complete record of SFB 754 publications is maintained within the institutional repository for full-text print publications by the GEOMAR library. This repository is strongly linked with the data management information system, providing dynamic and up-to-date overviews of the various ties between publications and available data sets, expeditions and projects. Such views are also frequently used for the website and reports by the SFB 754 scientific community. The concept of a joint approach initiated by large-scale projects and participating institutions in order to establish a single data management infrastructure has proven to be very successful. We have experienced a snowball-like propagation among marine researchers at GEOMAR and Kiel University; they continue to engage the data management services well known from collaboration with the SFB 754. But we also observe an ongoing demand for training of new junior (and senior) scientists and a continuous need for adaptation to new methods and techniques. Only standardized and consistent data management warrants completeness and integrity of published research data related to their peer-reviewed journal publications in the long run. Based on our daily experience, this is best achieved by skilled and experienced staff in a dedicated data management team which persists beyond the funding period of research projects. It can effectively carry on and have impact through continuous personal contact, consultation and training of researchers on-site. (This poster is linked to the presentation by Dr. Christiane K. Schelten)
Opportunistic Computing with Lobster: Lessons Learned from Scaling up to 25k Non-Dedicated Cores
NASA Astrophysics Data System (ADS)
Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Yannakopoulos, Anna; Tovar, Benjamin; Donnelly, Patrick; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas
2017-10-01
We previously described Lobster, a workflow management tool for exploiting volatile opportunistic computing resources for computation in HEP. We will discuss the various challenges that have been encountered while scaling up the simultaneous CPU core utilization and the software improvements required to overcome these challenges. Categories: Workflows can now be divided into categories based on their required system resources. This allows the batch queueing system to optimize the assignment of tasks to nodes with the appropriate capabilities. Within each category, limits can be specified for the number of running jobs to regulate the utilization of communication bandwidth. System resource specifications for a task category can now be modified while a project is running, avoiding the need to restart the project if resource requirements differ from the initial estimates. Lobster now implements time limits on each task category to voluntarily terminate tasks. This allows partially completed work to be recovered. Workflow dependency specification: One workflow often requires data from other workflows as input. Rather than waiting for earlier workflows to be completed before beginning later ones, Lobster now allows dependent tasks to begin as soon as sufficient input data has accumulated. Resource monitoring: Lobster utilizes a new capability in Work Queue to monitor the system resources each task requires in order to identify bottlenecks and optimally assign tasks. The capability of the Lobster opportunistic workflow management system for HEP computation has been significantly increased. We have demonstrated efficient utilization of 25,000 non-dedicated cores and achieved a data input rate of 30 Gb/s and an output rate of 500 GB/h. This has required new capabilities in task categorization, workflow dependency specification, and resource monitoring.
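The per-category resource specifications described above can be pictured as a simple mapping; the keys and values below are invented for the sketch and are not Lobster's actual configuration schema.

```python
# Illustrative task-category resource specification (not Lobster's real schema).
categories = {
    "simulation": {"cores": 4, "memory_mb": 8000, "disk_mb": 20000,
                   "runtime_limit_s": 4 * 3600,  # voluntary termination limit
                   "max_running": 5000},         # bandwidth-regulating cap
    "merge":      {"cores": 1, "memory_mb": 2000, "disk_mb": 10000,
                   "runtime_limit_s": 3600,
                   "max_running": 500},
}

def can_dispatch(category, running_counts):
    """Dispatch a task only while its category is under its running-job cap."""
    return running_counts.get(category, 0) < categories[category]["max_running"]

print(can_dispatch("merge", {"merge": 499}))  # True
```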
Elliott, Joshua; Deryng, Delphine; Müller, Christoph; Frieler, Katja; Konzmann, Markus; Gerten, Dieter; Glotter, Michael; Flörke, Martina; Wada, Yoshihide; Best, Neil; Eisner, Stephanie; Fekete, Balázs M; Folberth, Christian; Foster, Ian; Gosling, Simon N; Haddeland, Ingjerd; Khabarov, Nikolay; Ludwig, Fulco; Masaki, Yoshimitsu; Olin, Stefan; Rosenzweig, Cynthia; Ruane, Alex C; Satoh, Yusuke; Schmid, Erwin; Stacke, Tobias; Tang, Qiuhong; Wisser, Dominik
2014-03-04
We compare ensembles of water supply and demand projections from 10 global hydrological models and six global gridded crop models. These are produced as part of the Inter-Sectoral Impacts Model Intercomparison Project, with coordination from the Agricultural Model Intercomparison and Improvement Project, and driven by outputs of general circulation models run under representative concentration pathway 8.5 as part of the Fifth Coupled Model Intercomparison Project. Models project that direct climate impacts to maize, soybean, wheat, and rice involve losses of 400-1,400 Pcal (8-24% of present-day total) when CO2 fertilization effects are accounted for or 1,400-2,600 Pcal (24-43%) otherwise. Freshwater limitations in some irrigated regions (western United States; China; and West, South, and Central Asia) could necessitate the reversion of 20-60 Mha of cropland from irrigated to rainfed management by end-of-century, and a further loss of 600-2,900 Pcal of food production. In other regions (northern/eastern United States, parts of South America, much of Europe, and South East Asia) surplus water supply could in principle support a net increase in irrigation, although substantial investments in irrigation infrastructure would be required.
Healthy city projects in developing countries: the first evaluation.
Harpham, T; Burton, S; Blue, I
2001-06-01
The 'healthy city' concept has only recently been adopted in developing countries. From 1995 to 1999, the World Health Organization (WHO), Geneva, supported healthy city projects (HCPs) in Cox's Bazar (Bangladesh), Dar es Salaam (Tanzania), Fayoum (Egypt), Managua (Nicaragua) and Quetta (Pakistan). The authors evaluated four of these projects, representing the first major evaluation of HCPs in developing countries. Methods used were stakeholder analysis, workshops, document analysis and interviews with 102 managers/implementers and 103 intended beneficiaries. Municipal health plan development (one of the main components of the healthy city strategy) in these cities was limited, which is a similar finding to evaluations of HCPs in Europe. The main activities selected by the projects were awareness raising and environmental improvements, particularly solid waste disposal. Two of the cities effectively used the 'settings' approach of the healthy city concept, whereby places such as markets and schools are targeted. The evaluation found that stakeholder involvement varied in relation to: (i) the level of knowledge of the project; (ii) the project office location; (iii) the project management structure; and (iv) the type of activities (ranging from low stakeholder involvement in capital-intensive infrastructure projects, to high in some settings-type activities). There was evidence to suggest that understanding of environment-health links was increased across stakeholders. There was limited political commitment to the healthy city projects, perhaps due to the fact that most of the municipalities had not requested the projects. Consequently, the projects had little influence on written/expressed municipal policies. Some of the projects mobilized considerable resources, and most projects achieved effective intersectoral collaboration. WHO support enabled the project coordinators to network at national and international levels, and the capacity of these individuals (although not necessarily their institutions) was increased by the project. The average annual running cost of the projects was approximately 132,000 US dollars per city, which is close to the cost of the only other HCP for which a cost analysis has been undertaken, Bangkok (115,000 US dollars per year). Recommendations for these and other HCPs are provided.
Pegasus Workflow Management System: Helping Applications From Earth and Space
NASA Astrophysics Data System (ADS)
Mehta, G.; Deelman, E.; Vahi, K.; Silva, F.
2010-12-01
Pegasus WMS is a Workflow Management System that can manage large-scale scientific workflows across Grid, local and Cloud resources simultaneously. Pegasus WMS provides a means for representing the workflow of an application in an abstract XML form, agnostic of the resources available to run it and the location of data and executables. It then compiles these workflows into concrete plans by querying catalogs and farming computations across local and distributed computing resources, as well as emerging commercial and community cloud environments, in an easy and reliable manner. Pegasus WMS optimizes the execution as well as data movement by leveraging existing Grid and cloud technologies via a flexible pluggable interface, and provides advanced features like reusing existing data, automatic cleanup of generated data, and recursive workflows with deferred planning. It also captures all the provenance of the workflow from the planning stage to the execution of the generated data, helping scientists to accurately measure performance metrics of their workflow as well as data reproducibility issues. Pegasus WMS was initially developed as part of the GriPhyN project to support large-scale high-energy physics and astrophysics experiments. Direct funding from the NSF enabled support for a wide variety of applications from diverse domains including earthquake simulation, bacterial RNA studies, helioseismology and ocean modeling. Earthquake Simulation: Pegasus WMS was recently used in a large-scale production run in 2009 by the Southern California Earthquake Center to run 192 million loosely coupled tasks and about 2000 tightly coupled MPI-style tasks on national cyberinfrastructure for generating a probabilistic seismic hazard map of the Southern California region. SCEC ran 223 workflows over a period of eight weeks, using on average 4,420 cores, with a peak of 14,540 cores. A total of 192 million files were produced, totaling about 165 TB, out of which 11 TB of data was saved. Astrophysics: The Laser Interferometer Gravitational-Wave Observatory (LIGO) uses Pegasus WMS to search for binary inspiral gravitational waves. A month of LIGO data requires many thousands of jobs, running for days on hundreds of CPUs on the LIGO Data Grid (LDG) and Open Science Grid (OSG). Ocean Temperature Forecast: Researchers at the Jet Propulsion Laboratory are exploring Pegasus WMS to run ocean forecast ensembles of the California coastal region. These models produce a number of daily forecasts for water temperature, salinity, and other measures. Helioseismology: The Solar Dynamics Observatory (SDO) is NASA's most important solar physics mission of this coming decade. Pegasus WMS is being used to analyze the data from SDO, which will be predominantly used to learn about solar magnetic activity and to probe the internal structure and dynamics of the Sun with helioseismology. Bacterial RNA studies: SIPHT is an application in bacterial genomics, which predicts sRNA (small non-coding RNA)-encoding genes in bacteria. This project currently provides a web-based interface using Pegasus WMS at the backend to facilitate large-scale execution of the workflows on varied resources and provide better notifications of task/workflow completion.
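Pegasus's abstract, resource-agnostic workflow description can be sketched with its DAX3-era Python API (newer releases ship a different Pegasus.api module); the job names and files below are invented for illustration.

```python
from Pegasus.DAX3 import ADAG, Job, File, Link

dax = ADAG("hazard-map")

# An abstract job: no site, path, or scheduler is mentioned anywhere.
raw, prepared = File("raw_catalog.dat"), File("prepared_catalog.dat")
preprocess = Job(name="preprocess")
preprocess.addArguments("-i", raw, "-o", prepared)
preprocess.uses(raw, link=Link.INPUT)
preprocess.uses(prepared, link=Link.OUTPUT)
dax.addJob(preprocess)

simulate = Job(name="simulate")
simulate.addArguments("-i", prepared)
simulate.uses(prepared, link=Link.INPUT)
dax.addJob(simulate)

# Declare the data dependency; the planner later maps this abstract workflow
# onto whatever grid, cluster, or cloud resources are available.
dax.depends(parent=preprocess, child=simulate)

with open("hazard-map.dax", "w") as f:
    dax.writeXML(f)
```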
Projecting shifts in thermal habitat for 686 species on the North American continental shelf.
Morley, James W; Selden, Rebecca L; Latour, Robert J; Frölicher, Thomas L; Seagraves, Richard J; Pinsky, Malin L
2018-01-01
Recent shifts in the geographic distribution of marine species have been linked to shifts in preferred thermal habitats. These shifts in distribution have already posed challenges for living marine resource management, and there is a strong need for projections of how species might be impacted by future changes in ocean temperatures during the 21st century. We modeled thermal habitat for 686 marine species in the Atlantic and Pacific oceans using long-term ecological survey data from the North American continental shelves. These habitat models were coupled to output from sixteen general circulation models that were run under high (RCP 8.5) and low (RCP 2.6) future greenhouse gas emission scenarios over the 21st century to produce 32 possible future outcomes for each species. The models generally agreed on the magnitude and direction of future shifts for some species (448 or 429 under RCP 8.5 and RCP 2.6, respectively), but strongly disagreed for others (116 or 120, respectively). This allowed us to identify species with more or less robust predictions. Future shifts in species distributions were generally poleward and followed the coastline, but also varied among regions and species. Species from the U.S. and Canadian west coast, including the Gulf of Alaska, had the highest projected magnitude of shifts in distribution, and many species shifted more than 1000 km under the high greenhouse gas emissions scenario. Following a strong mitigation scenario consistent with the Paris Agreement would likely produce substantially smaller shifts and less disruption to marine management efforts. Our projections offer an important tool for identifying species, fisheries, and management efforts that are particularly vulnerable to climate change impacts.
NASA Astrophysics Data System (ADS)
Yu, Jianjun; Berry, Pam
2017-04-01
Drought and heat stress have already altered the composition, structure, and biogeography of forests globally, and projected droughts are becoming more severe and widespread. This challenges sustainable forest management to cope with future climate while maintaining forest ecosystem functions and services. Many studies have investigated climate change impacts on forest ecosystems, but fewer have considered climate extremes such as drought. In this study, we implement a dynamic ecosystem model based on a version of LPJ-GUESS parameterized with European tree species and apply it to Great Britain at a fine spatial resolution of 5 × 5 km. The model runs for a 1961-2011 baseline and projects to the late 21st century using 100 climate scenarios generated by the MaRIUS project to address climate model uncertainty. We will show the potential impacts of climate change on forest ecosystems and vegetation transitions in Great Britain by comparing modelled conditions in the 2030s and the 2080s relative to the baseline. In particular, by analyzing modelled tree mortality, we will show tree dieback patterns in response to drought for various species and assess their drought vulnerability across Great Britain. We also use species distribution modelling to project the suitable climate space for selected tree species under the same climate scenarios. Based on these two modelling approaches and their results, we will discuss the implications for forest management adaptation strategies, especially under extreme drought conditions. The knowledge and lessons gained for Great Britain are expected to be transferable to many other regions.
Spromberg, Julann A; Baldwin, David H; Damm, Steven E; McIntyre, Jenifer K; Huff, Michael; Sloan, Catherine A; Anulacion, Bernadita F; Davis, Jay W; Scholz, Nathaniel L
2016-04-01
Adult coho salmon Oncorhynchus kisutch return each autumn to freshwater spawning habitats throughout western North America. The migration coincides with increasing seasonal rainfall, which in turn increases storm water run-off, particularly in urban watersheds with extensive impervious land cover. Previous field assessments in urban stream networks have shown that adult coho are dying prematurely at high rates (>50%). Despite significant management concerns for the long-term conservation of threatened wild coho populations, a causal role for toxic run-off in the mortality syndrome has not been demonstrated. We exposed otherwise healthy coho spawners to: (i) artificial storm water containing mixtures of metals and petroleum hydrocarbons, at or above concentrations previously measured in urban run-off; (ii) undiluted storm water collected from a high traffic volume urban arterial road (i.e. highway run-off); and (iii) highway run-off that was first pre-treated via bioinfiltration through experimental soil columns to remove pollutants. We find that mixtures of metals and petroleum hydrocarbons - conventional toxic constituents in urban storm water - are not sufficient to cause the spawner mortality syndrome. By contrast, untreated highway run-off collected during nine distinct storm events was universally lethal to adult coho relative to unexposed controls. Lastly, the mortality syndrome was prevented when highway run-off was pretreated by soil infiltration, a conventional green storm water infrastructure technology. Our results are the first direct evidence that: (i) toxic run-off is killing adult coho in urban watersheds, and (ii) inexpensive mitigation measures can improve water quality and promote salmon survival. Synthesis and applications: Coho salmon, an iconic species with exceptional economic and cultural significance, are an ecological sentinel for the harmful effects of untreated urban run-off. Wild coho populations cannot withstand the high rates of mortality that are now regularly occurring in urban spawning habitats. Green storm water infrastructure or similar pollution prevention methods should be incorporated to the maximal extent practicable, at the watershed scale, for all future development and redevelopment projects, particularly those involving transportation infrastructure.
Design of the protoDUNE raw data management infrastructure
Fuess, S.; Illingworth, R.; Mengel, M.; ...
2017-10-01
The Deep Underground Neutrino Experiment (DUNE) will employ a set of Liquid Argon Time Projection Chambers (LArTPC) with a total mass of 40 kt as the main components of its Far Detector. In order to validate this technology and characterize the detector performance at full scale, an ambitious experimental program (called "protoDUNE") has been initiated which includes a test of the large-scale prototypes for the single-phase and dual-phase LArTPC technologies, which will run in a beam at CERN. The total raw data volume that is slated to be collected during the scheduled 3-month beam run is estimated to be in excess of 2.5 PB for each detector. This data volume will require that the protoDUNE experiment carefully design the DAQ, data handling, and data quality monitoring systems to be capable of dealing with the challenges inherent in peta-scale data management while simultaneously fulfilling the requirements of disseminating the data to a worldwide collaboration and to DUNE-associated computing sites. In this paper, we present our approach to solving these problems by leveraging the design, expertise, and components created for the LHC and Intensity Frontier experiments into a unified architecture that is capable of meeting the needs of protoDUNE.
Service-Learning in the Environmental Sciences for Teaching Sustainability Science
NASA Astrophysics Data System (ADS)
Truebe, S.; Strong, A. L.
2016-12-01
Understanding and developing effective strategies for the use of community-engaged learning (service-learning) approaches in the environmental geosciences is an important research need in curricular and pedagogical innovation for sustainability. In 2015, we designed and implemented a new community-engaged learning practicum course through the Earth Systems Program in the School of Earth, Energy and Environmental Sciences at Stanford University focused on regional open space management and land stewardship. Undergraduate and graduate students partnered with three different regional land trust and environmental stewardship organizations to conduct quarter-long research projects ranging from remote sensing studies of historical land use, to fire ecology, to ranchland management, to volunteer retention strategies. Throughout the course, students reflected on the decision-making processes and stewardship actions of the organizations. Two iterations of the course were run in Winter and Fall 2015. Using coded and analyzed pre- and post-course student surveys from the two course iterations, we evaluate undergraduate and graduate student learning outcomes and changes in perceptions and understanding of sustainability science. We find that engagement with community partners to conduct research projects on a wide variety of aspects of open space management, land management, and environmental stewardship (1) increased an understanding of trade-offs inherent in sustainability and resource management and (2) altered student perceptions of the role of scientific information and research in environmental management and decision-making. Furthermore, students initially conceived of open space as purely ecological/biophysical, but by the end of the course, (3) their understanding was of open space as a coupled human/ecological system. This shift is crucial for student development as sustainability scientists.
NASA Astrophysics Data System (ADS)
Martin, C.; Dye, M. J.; Daniels, M. D.; Keiser, K.; Maskey, M.; Graves, S. J.; Kerkez, B.; Chandrasekar, V.; Vernon, F.
2015-12-01
The Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS) project tackles the challenges of collecting and disseminating geophysical observational data in real-time, especially for researchers with limited IT budgets and expertise. The CHORDS Portal is a component that allows research teams to easily configure and operate a cloud-based service which can receive data from dispersed instruments, manage a rolling archive of the observations, and serve these data to any client on the Internet. The research group (user) creates a CHORDS portal simply by running a prepackaged "CHORDS appliance" on Amazon Web Services. The user has complete ownership and management of the portal. Computing expenses are typically very small. RESTful protocols are employed for delivering and fetching data from the portal, which means that any system capable of sending an HTTP GET message is capable of accessing the portal. A simple API is defined, making it straightforward for non-experts to integrate a diverse collection of field instruments. Languages with network access libraries, such as Python, sh, Matlab, R, IDL, Ruby and JavaScript (and most others) can retrieve structured data from the portal with just a few lines of code. The user's private portal provides a browser-based system for configuring, managing and monitoring the health of the integrated real-time system. This talk will highlight the design goals, architecture and agile development of the CHORDS Portal. A running portal, with operational data feeds from across the country, will be presented.
GRA prospectus: optimizing design and management of protected areas
Bernknopf, Richard; Halsing, David
2001-01-01
Protected areas comprise one major type of global conservation effort, taking the form of parks, easements, or conservation concessions. Though protected areas are increasing in number and size throughout tropical ecosystems, there is no systematic method for optimally targeting specific local areas for protection, designing the protected area, and monitoring it, or for guiding follow-up actions to manage it or its surroundings over the long run. Without such a system, conservation projects often cost more than necessary and/or risk protecting ecosystems and biodiversity less efficiently than desired. Correcting these failures requires tools and strategies for improving the placement, design, and long-term management of protected areas. The objective of this project is to develop a set of spatially based analytical tools to improve the selection, design, and management of protected areas. In this project, several conservation concessions will be compared using an economic optimization technique. The forest land use portfolio model is an integrated assessment that measures investment in different land uses in a forest. The case studies of individual tropical ecosystems are developed as forest (land) use and preservation portfolios in a geographic information system (GIS). Conservation concessions involve a private organization purchasing development and resource access rights in a certain area and retiring them. Forests are put into conservation, and those people who would otherwise have benefited from extracting resources or selling the right to do so are compensated. Concessions are legal agreements wherein the exact amount and nature of the compensation result from a negotiated agreement between an agent of the conservation community and the local community. Funds are placed in a trust fund, and annual payments are made to local communities and regional/national governments. The payments are made pending third-party verification that the forest expanse and quality have been maintained.
NASA Astrophysics Data System (ADS)
Seamon, E.; Gessler, P. E.; Flathers, E.; Walden, V. P.
2014-12-01
As climate change and weather variability raise issues for agricultural production, agricultural sustainability has become an increasingly important component of farmland management (Fisher, 2005; Akinci, 2013). Yet with changes in soil quality, agricultural practices, weather, topography, land use, and hydrology, accurately modeling agricultural outcomes has proven difficult (Gassman et al., 2007; Williams et al., 1995). This study examined agricultural sustainability and soil health over a heterogeneous multi-watershed area within the Inland Pacific Northwest of the United States (IPNW), as part of a five-year, USDA-funded effort to explore the sustainability of cereal production systems (Regional Approaches to Climate Change for Pacific Northwest Agriculture - award #2011-68002-30191). In particular, crop growth and soil erosion were simulated across a spectrum of variables and time periods using the CropSyst crop growth model (Stockle et al., 2002) and the Water Erosion Prediction Project model (WEPP; Flanagan and Livingston, 1995), respectively. A preliminary range of historical scenarios was run using a high-resolution, 4 km gridded dataset of surface meteorological variables from 1979-2010 (Abatzoglou, 2012). In addition, Coupled Model Intercomparison Project (CMIP5) global climate model (GCM) outputs were used as input to run future crop growth and erosion scenarios (Abatzoglou and Brown, 2011). To facilitate our integrated data analysis efforts, an agricultural sustainability web service architecture (THREDDS/Java/Python based) is under development to allow for programmatic uploading, sharing, and processing of input data; running of model simulations; and downloading and visualization of output results. The results of this study will assist in better understanding agricultural sustainability and erosion relationships in the IPNW, and will provide a tangible server-based tool for use by researchers and farmers, for both small-scale field examination and more regionalized scenarios.
NASA Astrophysics Data System (ADS)
Kong, D.; Donnellan, A.; Pierce, M. E.
2012-12-01
QuakeSim is an online computational framework focused on using remotely sensed geodetic imaging data to model and understand earthquakes. With the rise of online social networking over the last decade, many tools and concepts have been developed that are useful to research groups. In particular, QuakeSim is interested in the ability for researchers to post, share, and annotate files generated by modeling tools in order to facilitate collaboration. To accomplish this, features were added to the preexisting QuakeSim site that include single sign-on, automated saving of output from modeling tools, and a personal user space to manage sharing permissions on these saved files. These features use OpenID and Lightweight Directory Access Protocol (LDAP) technologies to manage files across several different servers, including a web server running Drupal and other servers hosting the computational tools themselves.
Integrated Building Management System (IBMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anita Lewis
This project provides a combination of software and services that more easily and cost-effectively help to achieve optimized building performance and energy efficiency. Featuring an open-platform, cloud-hosted application suite and an intuitive user experience, this solution simplifies a traditionally very complex process by collecting data from disparate building systems and creating a single, integrated view of building and system performance. The Fault Detection and Diagnostics algorithms developed within the IBMS have been designed and tested as an integrated component of the control algorithms running the equipment being monitored. The algorithms identify the normal control behaviors of the equipment without interfering with the equipment control sequences. The algorithms also work without interfering with any cooperative control sequences operating between different pieces of equipment or building systems. In this manner the FDD algorithms create an integrated building management system.
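As a rough illustration of the passive, non-interfering style of fault detection described above (this is not the IBMS algorithm itself; the sensor values, variable names, and threshold are invented), a residual check against a baseline learned during normal operation might look like:

```python
import statistics

def detect_faults(readings, baseline, k=3.0):
    """Flag samples deviating more than k standard deviations from a baseline
    learned during normal operation. Purely observational, so it cannot
    interfere with the equipment control sequences."""
    mean = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline)
    return [i for i, r in enumerate(readings) if abs(r - mean) > k * sigma]

# Hypothetical supply-air temperature samples (deg C)
baseline = [12.9, 13.1, 13.0, 12.8, 13.2, 13.0]
live = [13.0, 12.9, 17.4, 13.1]        # the 17.4 reading should be flagged
print(detect_faults(live, baseline))   # -> [2]
```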
NASA Astrophysics Data System (ADS)
Shope, James B.; Storlazzi, Curt D.; Hoeke, Ron K.
2017-10-01
Atoll islands are dynamic features that respond to seasonal alterations in wave conditions and sea level. It is unclear how shoreline wave run-up and erosion patterns along these low elevation islands will respond to projected sea-level rise (SLR) and changes in wave climate over the next century, hindering communities' preparation for the future. To elucidate how these processes may respond to climate change, extreme boreal winter and summer wave conditions under future SLR and wave climate scenarios were simulated at two atolls, Wake and Midway, using a shallow-water hydrodynamic model. Nearshore wave conditions were used to compute the potential longshore sediment flux along island shorelines via the CERC empirical formula, and wave-driven erosion was calculated as the divergence of the longshore drift; run-up and the locations where the run-up exceeded the berm elevation were also determined. SLR is projected to predominantly drive future island morphological change and flooding. Seaward shorelines (i.e., ocean fronted shorelines directly facing incident wave energy) were projected to experience greater erosion and flooding with SLR and in hypothetical scenarios where deep water wave directions were altered, as informed by previous climate change forced Pacific wave modeling efforts. These changes caused nearshore waves to become more shore-normal, increasing wave attack along previously protected shorelines. With SLR, leeward shorelines (i.e., an ocean facing shoreline but sheltered from incident wave energy) became more accretive on windward islands and marginally more erosive along leeward islands. These shorelines became more accretionary and subject to more flooding as nearshore waves became more shore-normal. Lagoon shorelines demonstrated the greatest SLR-driven increase in erosion and run-up. They exhibited the greatest relative change with increasing wave heights, where both erosion and run-up magnitudes increased. Wider reef flat-fronted seaward shorelines became more accretive as all oceanographic forcing parameters increased in magnitude and exhibited large run-up increases following increasing wave heights. Island end shorelines became subject to increased flooding, erosion at Wake, and accretion at Midway with SLR. Under future conditions, windward and leeward islands are projected to become thinner as ocean facing and lagoonal shorelines erode, with leeward islands becoming more elongate. Island shorelines will change dramatically over the next century as SLR and altered wave climates drive new erosional regimes. It is vital to the sustainability of island communities that the relative magnitudes of these effects are addressed when planning for projected future climates.
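For reference, the CERC relation mentioned above is commonly written in a simplified proportional form (the constant K here lumps the sediment and fluid density factors of the full formula):

```latex
% Q: potential longshore sediment flux; H_b: breaking wave height;
% \alpha_b: breaking wave angle relative to the shoreline; y: alongshore coordinate.
Q \propto K \, H_b^{5/2} \sin(2\alpha_b),
\qquad \text{wave-driven erosion} \sim \frac{\partial Q}{\partial y}
```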
Web-based multimedia courseware for emergency cardiac patient management simulations.
Ambrosiadou, V; Compton, T; Panchal, T; Polovina, S
2000-01-01
This is a multidisciplinary, inter-departmental project between the departments of computer science; electronic, communications and electrical engineering; and nursing and paramedic sciences. The objective is to develop a web-based multimedia front end to existing simulations of cardiac emergency scenarios. It will be used first in the teaching of nurses. The University of Hertfordshire is the only university in Britain using simulations of cardiac emergency scenarios for nurse and paramedic science education, and this project will therefore add the multimedia dimension to distributed courses over the web and assess the improvement in the educational process. The use of network and multimedia technologies provides interactive learning, immediate feedback to students' responses, individually tailored instruction, objective testing, and entertaining delivery. The end product of this project will serve as interactive material to enhance experiential learning for nursing students using the simulations of cardiac emergency scenarios. The emergency treatment simulations have been developed using VisSim and may be compiled as C code. The objective of the project is to provide a web-based, user-friendly multimedia interface in order to demonstrate the way in which patients may be managed in critical situations by applying advanced technological equipment and drug administration. The user will then be able to better appreciate the concepts involved by running the VisSim simulations. The evaluation group for the proposed software will be the Department of Nursing and Paramedic Sciences; about 200 nurses use the simulations every year for training purposes as part of their course requirements.
Operational Oceanograhy System for Oil Spill Risk Management at Santander Bay (Spain)
NASA Astrophysics Data System (ADS)
Castanedo Bárcena, S.; Nuñez, P.; Perez-Diaz, B.; Abascal, A.; Cardenas, M.; Medina, R.
2016-02-01
Estuaries and bays are sheltered areas that usually host a wide range of industries and interests (e.g. aquaculture, fishing, recreation, habitat protection). Oil spill risk assessment in these environments is fundamental given the reduced response time associated with this very local scale. This work presents a system comprising two modules: (1) an Operational Oceanography System (OOS) based on nested high-resolution models, which provides short-term (within 48 hours) oil spill trajectory forecasting, and (2) an oil spill risk assessment system (OSRAS) that estimates risk as the combination of hazard and vulnerability. Hazard is defined as the probability of the coast being polluted by an oil spill and is calculated on the basis of a library of pre-run cases. The OOS comprises: (1) daily boundary conditions (sea level, ocean currents, salinity and temperature) and meteorological forcing, obtained from the European network MYOCEAN and from the Spanish met office, AEMET, respectively; (2) the COAWST modelling system as the engine of the OOS (at this stage of the project only ROMS is active); (3) an oil spill transport and fate model, TESEO; and (4) a web service that manages the operational system and allows the user to run hypothetical as well as real oil spill trajectories using the daily forecasts of wind and high-resolution ocean variables produced by COAWST. Regarding the OSRAS, the main contributions of this work are: (1) the use of an extensive meteorological and oceanographic database provided by state-of-the-art ocean and atmospheric models, (2) the use of clustering techniques to establish representative met-ocean scenarios (i.e. combinations of sea state, meteorological conditions, tide and river flow), (3) dynamic downscaling of the met-ocean scenarios with the COAWST modelling system, and (4) management of hundreds of runs performed with the state-of-the-art oil spill transport model TESEO.
Unit 1, downstream from Laurel Run Johnstown Local Flood ...
Unit 1, downstream from Laurel Run - Johnstown Local Flood Protection Project, Beginning on Conemaugh River approx 3.8 miles downstream from confluence of Little Conemaugh & Stony Creek Rivers at Johnstown, Johnstown, Cambria County, PA
FixO3 project results, legacy and module migration to EMSO
NASA Astrophysics Data System (ADS)
Lampitt, Richard
2017-04-01
The fixed point open ocean observatory network (FixO3) project is an international project aimed at integrating into a single network all fixed-point open ocean observatories operated by European organisations, and at harmonising and coordinating technological, procedural and data management practices across the stations. The project has been running for four years since September 2013, with 29 partners across Europe and a budget of 7M Euros, and is now entering its final phase. In contrast to several past programmes, the opportunity has arisen to ensure that many of the project achievements can migrate into the newly formed European Multidisciplinary Seafloor and water column Observatory (EMSO) research infrastructure. The final phase of the project will focus on developing a strategy to transfer the results efficiently so as to maintain their relevance and maximise their use. In this presentation, we will highlight the significant achievements of FixO3 over the past three years, focussing on the modules which will be transferred to EMSO in the coming 9 months. These include: 1. Handbook of best practices for operating fixed point observatories 2. Metadata catalogue 3. Earth Virtual Observatory (EarthVO) for data visualisation and comparison 4. Open Ocean Observatory Yellow Pages (O3YP) 5. Training material for hardware, data and data products
NASA Technical Reports Server (NTRS)
Dhaliwal, Swarn S.
1997-01-01
An investigation was undertaken to build the software foundation for the WHERE (Web-based Hyper-text Environment for Requirements Engineering) project. The TCM (Toolkit for Conceptual Modeling) was chosen as the foundation software for the WHERE project, which aims to provide an environment for facilitating collaboration among geographically distributed people involved in the requirements engineering process. The TCM is a collection of diagram and table editors implemented in the C++ programming language. The C++ implementation of the TCM was translated into Java in order to allow the editors to be used for building various functionality of the WHERE project; the WHERE project intends to use the Web as its communication backbone. One of the limitations of the translated software (TcmJava), which militated against its use in the WHERE project, was its persistent data management mechanism, inherited from the original TCM, which was designed for standalone applications. Before TcmJava editors could be used as part of the multi-user, geographically distributed applications of the WHERE project, a persistent storage mechanism had to be built which would allow data communication over the Internet, using the capabilities of the Web. An approach involving features of Java, CORBA (Common Object Request Broker Architecture), the Web, a middleware (Java Relational Binding (JRB)), and a database server was used to build the persistent data management infrastructure for the WHERE project. The developed infrastructure allows a TcmJava editor to be downloaded and run from a network host by using a JDK 1.1 (Java Development Kit) compatible Web browser. The editor establishes a connection with a server by using ORB (Object Request Broker) software and stores/retrieves data in/from the server. The server consists of one or more CORBA objects, depending upon whether the data is to be made persistent on a single server or multiple servers. The CORBA object providing the persistent data service is implemented using the Java programming language. It uses the JRB to store/retrieve data in/from a relational database server. The persistent data management system provides transaction and user management facilities which allow multi-user, distributed access to the stored data in a secure manner.
NASA Astrophysics Data System (ADS)
Nijssen, B.; Chiao, T. H.; Lettenmaier, D. P.; Vano, J. A.
2016-12-01
Hydrologic models of varying complexity and structure are commonly used to evaluate the impact of climate change on future hydrology. While the uncertainties in future climate projections are well documented, uncertainties in streamflow projections associated with hydrologic model structure and parameter estimation have received less attention. In this study, we implemented and calibrated three hydrologic models (the Distributed Hydrology Soil Vegetation Model (DHSVM), the Precipitation-Runoff Modeling System (PRMS), and the Variable Infiltration Capacity model (VIC)) for the Bull Run watershed in northern Oregon using consistent data sources and best-practice calibration protocols. The project was part of a Piloting Utility Modeling Applications (PUMA) project with the Portland Water Bureau (PWB) under the umbrella of the Water Utility Climate Alliance (WUCA). Ultimately, PWB would use the model evaluation to select a model for in-house climate change analysis of the Bull Run watershed. This presentation focuses on the experimental design of the comparison project, the project findings, and the collaboration between the teams at the University of Washington and PWB. After calibration, the three models showed similar capability to reproduce seasonal and inter-annual variations in streamflow, but differed in their ability to capture extreme events. Furthermore, the annual and seasonal hydrologic sensitivities to changes in climate forcings differed among models, potentially attributable to different model representations of snow and vegetation processes.
2018-03-22
generators by not running them as often and reducing wet-stacking. Force Projection: If the IPDs of the microgrid replace, but don't add to, the number... decrease generator run time, reduce fuel consumption, enable silent operation, and provide power redundancy for military applications. Important... it requires some failsafe features - run out of water, drive out of the sun. - Integration was a challenge; series of valves to run this experiment
Deelman, E.; Callaghan, S.; Field, E.; Francoeur, H.; Graves, R.; Gupta, N.; Gupta, V.; Jordan, T.H.; Kesselman, C.; Maechling, P.; Mehringer, J.; Mehta, G.; Okaya, D.; Vahi, K.; Zhao, L.
2006-01-01
This paper discusses the process of building an environment where large-scale, complex scientific analyses can be scheduled onto a heterogeneous collection of computational and storage resources. The example application is the Southern California Earthquake Center (SCEC) CyberShake project, an analysis designed to compute probabilistic seismic hazard curves for sites in the Los Angeles area. We explain which software tools were used to build the system and describe their functionality and interactions. We show the results of running the CyberShake analysis, which included over 250,000 jobs, using resources available through SCEC and the TeraGrid. © 2006 IEEE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kowalkowski, Jim; Lyon, Adam; Paterno, Marc
Over the past few years, container technology has become increasingly promising as a means to seamlessly make our software available across a wider range of platforms. In December 2015, we decided to put together a set of Docker images that serve as a demonstration of this container technology for managing a run-time environment for art-related software projects, and also serve as a set of test cases for evaluation of performance. Docker[1] containers provide a way to "wrap up a piece of software in a complete filesystem that contains everything it needs to run". In combination with Shifter[2], such containers provide a way to run software developed and deployed on "typical" HEP platforms (such as SLF 6, in common use at Fermilab and on OSG platforms) on HPC facilities at NERSC. Docker containers provide a means of delivering software that can be run on a variety of hosts without needing to be compiled specially for each OS to be supported. This could substantially reduce the effort required to create and validate a new release, since one build could be suitable for use on grid machines (both FermiGrid and OSG) as well as on any machine capable of running the Docker container. In addition, Docker containers may provide a quick and easy way for users to install and use a software release in a standardized environment. This report contains the results and status of this demonstration and evaluation.
Integration of Panda Workload Management System with supercomputers
NASA Astrophysics Data System (ADS)
De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.
2016-09-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing across over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, upcoming LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
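The "light-weight MPI wrapper" idea, running many independent single-threaded payloads under one batch allocation, can be sketched in a few lines of Python with mpi4py (the payload script and file naming are hypothetical; this is an illustration, not the PanDA pilot code):

```python
import subprocess
from mpi4py import MPI

# Each MPI rank launches one independent single-threaded payload, so N serial
# jobs fill N cores of a supercomputer worker node under a single allocation.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

result = subprocess.run(
    ["./run_payload.sh", f"input_{rank:04d}.tar.gz"],  # hypothetical payload
    capture_output=True,
)

# Gather exit codes on rank 0 so the pilot can report per-task status.
statuses = comm.gather(result.returncode, root=0)
if rank == 0:
    print("task exit codes:", statuses)
```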
NASA Astrophysics Data System (ADS)
Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.
2016-10-01
The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing across over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments, and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
2nd Generation QUATARA Flight Computer Project
NASA Technical Reports Server (NTRS)
Falker, Jay; Keys, Andrew; Fraticelli, Jose Molina; Capo-Iugo, Pedro; Peeples, Steven
2015-01-01
Single-core flight computer boards have been designed, developed, and tested (DD&T) to be flown in small satellites for the last few years. In this project, a prototype flight computer will be designed as a distributed multi-core system containing four microprocessors running code in parallel. This flight computer will be capable of performing multiple computationally intensive tasks, such as processing digital and/or analog data, controlling actuator systems, managing cameras, operating robotic manipulators, and transmitting/receiving to/from a ground station. In addition, this flight computer will be designed to be fault tolerant, both by creating a robust physical hardware connection and by using a software voting scheme to judge each processor's performance. This voting scheme will leverage the work done for the Space Launch System (SLS) flight software. The prototype flight computer will be constructed with Commercial Off-The-Shelf (COTS) components, which are estimated to survive for two years in low-Earth orbit.
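A software voting scheme of the kind mentioned above can be illustrated with a simple majority vote over redundant results (purely a sketch; the actual QUATARA scheme is not detailed in the abstract):

```python
def majority_vote(a, b, c):
    """Return the value that at least two of three redundant processors
    agree on, or signal total disagreement. Illustrative only."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: all three results differ")

# Example: the third processor's result is outvoted as faulty.
print(majority_vote(42, 42, 17))  # -> 42
```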
Kennedy Center Director Opens NASA 2017 Robotic Mining Competition
2017-05-23
NASA’s Eighth Annual Robotic Mining Competition (RMC) officially kicked off at NASA’s Kennedy Space Center in Florida on Tuesday, May 23, with Kennedy Director, Bob Cabana, presiding at the annual event’s opening ceremony. Forty-five teams of college undergraduate and graduate students prepped the unique mining robots they designed and built, then conducted practice runs in their quest against the clock to collect and move the most simulated Martian soil. The actual competition is scheduled for Wednesday through Friday. Managed by, and held annually at Kennedy Space Center, RMC is a NASA Human Exploration and Operations Mission Directorate project designed to engage and retain students in science, technology, engineering and math (STEM) fields by expanding opportunities for student research and design. The project provides a competitive environment to foster innovative ideas and solutions with potential use on NASA’s deep space exploration missions, including to Mars.
Hosting and publishing astronomical data in SQL databases
NASA Astrophysics Data System (ADS)
Galkin, Anastasia; Klar, Jochen; Riebe, Kristin; Matokevic, Gal; Enke, Harry
2017-04-01
In astronomy, terabytes and petabytes of data are produced by ground instruments, satellite missions, and simulations. At the Leibniz Institute for Astrophysics Potsdam (AIP) we host and publish terabytes of cosmological simulation and observational data. The public archive at AIP has now reached a size of 60 TB and continues to grow, and has helped to produce numerous scientific papers. The Daiquiri web framework offers a dedicated web interface for each of the hosted scientific databases. Scientists all around the world run SQL queries, which can include specific astrophysical functions, and get their desired data in reasonable time. Daiquiri supports the scientific projects by offering a number of administration tools, such as database and user management, contact messages to the staff, and support for the organization of meetings and workshops. The web pages can be customized, and the WordPress integration supports the participating scientists in maintaining the documentation and the projects' news sections.
Twitter classification model: the ABC of two million fitness tweets.
Vickey, Theodore A; Ginis, Kathleen Martin; Dabrowski, Maciej
2013-09-01
The purpose of this project was to design and test data collection and management tools that can be used to study the use of mobile fitness applications and social networking within the context of physical activity. This project was conducted over a 6-month period and involved collecting publicly shared Twitter data from five mobile fitness apps (Nike+, RunKeeper, MyFitnessPal, Endomondo, and dailymile). During that time, over 2.8 million tweets were collected, processed, and categorized using an online tweet collection application and a customized JavaScript. Using grounded theory, a classification model was developed to categorize and understand the types of information being shared by application users. Our data show that by tracking mobile fitness app hashtags, a wealth of information can be gathered, including but not limited to daily use patterns, exercise frequency, location-based workouts, and overall workout sentiment.
Plancton: an opportunistic distributed computing project based on Docker containers
NASA Astrophysics Data System (ADS)
Concas, Matteo; Berzano, Dario; Bagnasco, Stefano; Lusso, Stefano; Masera, Massimo; Puccio, Maximiliano; Vallero, Sara
2017-10-01
The computing power of most modern commodity computers is far from being fully exploited by standard usage patterns. In this work we describe the development and setup of a virtual computing cluster based on Docker containers used as worker nodes. The facility is based on Plancton: a lightweight fire-and-forget background service. Plancton spawns and controls a local pool of Docker containers on a host with free resources by constantly monitoring its CPU utilisation. It is designed to release the opportunistically allocated resources whenever another demanding task is run by the host user, according to configurable policies; this is attained by killing a number of running containers. One of the advantages of a thin virtualization layer such as Linux containers is that they can be started almost instantly upon request. We will show how the fast start-up and disposal of containers enables us to implement an opportunistic cluster based on Plancton daemons without a central control node, where the spawned Docker containers behave as job pilots. Finally, we will show how Plancton was configured to run up to 10,000 concurrent opportunistic jobs on the ALICE High-Level Trigger facility, giving a considerable advantage in terms of management compared to virtual machines.
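The opportunistic spawn-and-kill policy can be sketched with the Docker SDK for Python and psutil (the image name, thresholds, and polling intervals below are invented; Plancton's real policies are configurable):

```python
import time
import docker   # Docker SDK for Python
import psutil   # CPU utilisation sampling

# Keep starting worker containers while the host CPU is mostly idle,
# and kill one whenever the host becomes busy.
client = docker.from_env()
workers = []

while True:
    load = psutil.cpu_percent(interval=5)   # sample CPU load over 5 seconds
    if load < 30 and len(workers) < 8:      # host idle: add a worker container
        workers.append(client.containers.run("worker-pilot:latest", detach=True))
    elif load > 80 and workers:             # host busy: release resources
        workers.pop().kill()
    time.sleep(10)
```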
Evolutionary pattern of improved 1-mile running performance.
Foster, Carl; de Koning, Jos J; Thiel, Christian
2014-07-01
The official world records (WR) for the 1-mile run for men (3:43.13) and for women (4:12.58) have improved 12.2% and 32.3%, respectively, since the first WRs recognized by the International Association of Athletics Federations. Previous observations have suggested that the pacing pattern for successive laps is characteristically faster-slower-slowest-faster. However, modeling studies have suggested that uneven energy-output distribution, particularly a high velocity at the end of the race, is essentially wasted kinetic energy that could have been used to finish sooner. Here the authors report that further analysis of the pacing pattern in 32 men's WR races shows a progressive reduction in the within-lap variation of pace, suggesting that improving the WR in the 1-mile run is as much about how energetic resources are managed as about the capacity of the athletes performing the race. In the women's WR races, the pattern of lap times has changed little, probably secondary to a lack of depth in the women's fields. Contemporary WR performances have been achieved with a coefficient of variation of lap times on the order of 1.5-3.0%. Reasonable projection suggests that the WR is overdue for improvement and may require lap times with a coefficient of variation of ~1%.
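As a quick illustration of the metric, the coefficient of variation of lap times is simply the standard deviation divided by the mean (the splits below are hypothetical, chosen only to sum to roughly a 3:43 mile):

```python
import statistics

# Hypothetical lap splits (seconds) for a 4-lap mile race.
laps = [55.6, 56.1, 56.4, 55.0]
cv = statistics.pstdev(laps) / statistics.mean(laps) * 100
print(f"coefficient of variation: {cv:.1f}%")  # ~1.0%, matching the projection
```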
The Prodiguer Messaging Platform
NASA Astrophysics Data System (ADS)
Greenslade, Mark; Denvil, Sebastien; Raciazek, Jerome; Carenton, Nicolas; Levavasseur, Guillame
2014-05-01
CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output (data and metadata) are just some of the complexities that CONVERGENCE aims to resolve. The Institut Pierre Simon Laplace (IPSL) is responsible for running climate simulations upon a set of heterogeneous HPC environments within France. With heterogeneity comes added complexity in terms of simulation instrumentation and control. Obtaining a global perspective upon the state of all simulations running upon all HPC environments has hitherto been problematic. In this presentation we detail how, within the context of CONVERGENCE, the implementation of the Prodiguer messaging platform resolves this complexity and permits the development of real-time applications such as: 1. a simulation monitoring dashboard; 2. a simulation metrics visualizer; 3. an automated simulation runtime notifier; 4. an automated output data and metadata publishing pipeline. The Prodiguer messaging platform leverages a widely used open-source message broker called RabbitMQ. RabbitMQ itself implements the Advanced Message Queuing Protocol (AMQP). Hence it will be demonstrated that the Prodiguer messaging platform is built upon both open-source software and open standards.
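A producer publishing a simulation-status event to RabbitMQ takes only a few lines with the pika client; in this sketch the broker host, exchange name, routing key, and message fields are illustrative, not the actual Prodiguer schema:

```python
import json
import pika  # Python client for RabbitMQ (AMQP)

# Connect to a broker (host name is a placeholder).
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="broker.example.org"))
channel = connection.channel()
channel.exchange_declare(exchange="simulation-status", exchange_type="topic")

# Publish a small JSON status event that a dashboard could consume.
event = {"simulation": "ipsl-run-001", "state": "running", "hpc": "curie"}
channel.basic_publish(exchange="simulation-status",
                      routing_key="simulation.state.running",
                      body=json.dumps(event))
connection.close()
```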
Health Monitor for Multitasking, Safety-Critical, Real-Time Software
NASA Technical Reports Server (NTRS)
Zoerner, Roger
2011-01-01
The Health Manager can detect bad health before a failure occurs by periodically monitoring the application software, looking for code corruption errors and sanity-checking each critical data value prior to use. A processor's memory can fail and corrupt the software, or the software can accidentally write to the wrong address and overwrite the executing software. This innovation continuously calculates a checksum of the software load to detect corrupted code, allowing a system to detect a failure before it happens. It monitors each software task (thread) so that if any task reports "bad health," or does not report to the Health Manager, the system is declared bad. The Health Manager reports overall system health to the outside world by outputting a square wave signal. If the square wave stops, this indicates that system health is bad, or that the system is hung and cannot report. Either way, "bad health" can be detected, whether caused by an error, corrupted data, or a hung processor. A separate Health Monitor Task is started and run periodically in a loop that starts and stops pending on a semaphore. Each monitored task registers with the Health Manager, which maintains a count for the task. The registering task must indicate whether it will run more or less often than the Health Manager. If the task runs more often than the Health Manager, the monitored task calls a health function that increments the count and verifies it did not go over max-count. When the periodic Health Manager runs, it verifies that the count did not go over the max-count and zeroes it. If the task runs less often than the Health Manager, the periodic Health Manager will increment the count, the monitored task zeroes the count, and both the Health Manager and the monitored task verify that the count did not go over the max-count.
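A minimal sketch of the count-based monitoring described above follows (Python rather than flight code; the class and method names and max-count policy are invented, and only the task-faster-than-manager case is shown; the slower case mirrors it with the increment/zero roles swapped):

```python
import threading

class HealthManager:
    """Illustrative count-based task monitor."""
    def __init__(self):
        self._lock = threading.Lock()
        self._counts = {}   # task name -> heartbeat count
        self._max = {}      # task name -> maximum allowed count per period
        self.healthy = True

    def register(self, task, max_count):
        with self._lock:
            self._counts[task] = 0
            self._max[task] = max_count

    def heartbeat(self, task):
        # Called by a monitored task that runs MORE often than the manager.
        with self._lock:
            self._counts[task] += 1
            if self._counts[task] > self._max[task]:
                self.healthy = False    # task ran too often: declare bad health

    def periodic_check(self):
        # Called on the manager's own period; verifies and zeroes each count.
        # A count still at zero means the task never reported: also bad health.
        with self._lock:
            for task, count in self._counts.items():
                if count == 0 or count > self._max[task]:
                    self.healthy = False
                self._counts[task] = 0
```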
Cirrus Parcel Model Comparison Project. Phase 1
NASA Technical Reports Server (NTRS)
Lin, Ruei-Fong; Starr, David O'C.; DeMott, Paul J.; Cotton, Richard; Jensen, Eric; Sassen, Kenneth
2000-01-01
The Cirrus Parcel Model Comparison (CPMC) is a project of the GEWEX Cloud System Study Working Group on Cirrus Cloud Systems (GCSS WG2). The primary goal of this project is to identify cirrus model sensitivities to the state of our knowledge of nucleation and microphysics. Furthermore, the common ground of the findings may provide guidelines for models with simpler cirrus microphysics modules. We focus on the nucleation regimes of the warm (parcel starting at -40 C and 340 hPa) and cold (-60 C and 170 hPa) cases studied in the GCSS WG2 Idealized Cirrus Model Comparison Project. Nucleation and ice crystal growth were forced through an externally imposed rate of lift and consequent adiabatic cooling. The background haze particles are assumed to be lognormally distributed H2SO4 particles. Only the homogeneous nucleation mode is allowed to form ice crystals in the HN-ONLY runs; all nucleation modes are switched on in the ALL-MODE runs. Participants were asked to run the HN-lambda-fixed runs by setting lambda = 2 (lambda is further discussed in section 2) or tailoring the nucleation rate calculation in agreement with lambda = 2 (exp 1). The depth of parcel lift (800 m) was set to ensure that parcels underwent complete transition through the nucleation regime to a stage of approximate equilibrium between ice mass growth and vapor supplied by the specified updrafts.
SHIWA Services for Workflow Creation and Sharing in Hydrometeorology
NASA Astrophysics Data System (ADS)
Terstyanszky, Gabor; Kiss, Tamas; Kacsuk, Peter; Sipos, Gergely
2014-05-01
Researchers want to run scientific experiments on Distributed Computing Infrastructures (DCIs) to access large pools of resources and services. Running these experiments requires specific expertise that they may not have. Workflows can hide resources and services behind a virtualisation layer, providing a user interface that researchers can use. There are many scientific workflow systems, but they are not interoperable, and learning a workflow system and creating workflows may require significant effort. Given this effort, it is not reasonable to expect researchers to learn a new workflow system just to run workflows developed in another one. Overcoming this requires workflow interoperability solutions that allow workflow sharing. The FP7 'Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs' (SHIWA) project developed the Coarse-Grained Interoperability (CGI) concept. It enables recycling and sharing workflows of different workflow systems and executing them on different DCIs. SHIWA developed the SHIWA Simulation Platform (SSP) to implement the CGI concept, integrating three major components: the SHIWA Science Gateway, the workflow engines supported by the CGI concept, and the DCI resources where workflows are executed. The science gateway contains a portal, a submission service, a workflow repository and a proxy server to support the whole workflow life-cycle. The SHIWA Portal allows workflow creation, configuration, execution and monitoring through a graphical user interface, using the WS-PGRADE workflow system as the host workflow system. The SHIWA Repository stores the formal description of workflows and workflow engines, plus the executables and data needed to execute them, and offers a wide range of browse and search operations. To support non-native workflow execution, the SHIWA Submission Service imports the workflow and workflow engine from the SHIWA Repository. This service either invokes locally or remotely pre-deployed workflow engines, or submits workflow engines together with the workflow to local or remote resources to execute workflows. The SHIWA Proxy Server manages the certificates needed to execute workflows on different DCIs. Currently SSP supports sharing of ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflows. Further workflow systems can be added to the simulation platform as required by research communities. The FP7 'Building a European Research Community through Interoperable Workflows and Data' (ER-flow) project disseminates the achievements of the SHIWA project to build workflow user communities across Europe. ER-flow provides application support to research communities within the project (Astrophysics, Computational Chemistry, Heliophysics and Life Sciences) and beyond it (Hydrometeorology and Seismology) to develop, share and run workflows through the simulation platform. The simulation platform supports four usage scenarios: creating and publishing workflows in the repository, searching for and selecting workflows in the repository, executing non-native workflows, and creating and running meta-workflows. The presentation will outline the CGI concept, the SHIWA Simulation Platform, the ER-flow usage scenarios, and how the Hydrometeorology research community runs simulations on SSP.
Gershengorn, Hayley B; Kocher, Robert; Factor, Phillip
2014-03-01
The success of quality-improvement projects relies heavily on both project design and the metrics chosen to assess change. In Part II of this three-part American Thoracic Society Seminars series, we begin by describing methods for determining which data to collect, tools for data presentation, and strategies for data dissemination. As Avedis Donabedian detailed a half century ago, defining metrics in healthcare can be challenging; algorithmic determination of the best type of metric (outcome, process, or structure) can help intensive care unit (ICU) managers begin this process. Choosing appropriate graphical data displays (e.g., run charts) can prompt discussions about and promote quality improvement. Similarly, dashboards/scorecards are useful in presenting performance improvement data either publicly or privately in a visually appealing manner. To have compelling data to show, ICU managers must plan quality-improvement projects well. The second portion of this review details four quality-improvement tools: checklists, Six Sigma methodology, lean thinking, and Kaizen. Checklists have become commonplace in many ICUs to improve care quality; thinking about how to maximize their effectiveness is now of prime importance. Six Sigma methodology, lean thinking, and Kaizen are techniques that use multidisciplinary teams to organize thinking about process improvement, formalize change strategies, actualize initiatives, and measure progress. None originated within healthcare, but each has been used in the hospital environment with success. To conclude this part of the series, we demonstrate how to use these tools through an example of improving the timely administration of antibiotics to patients with sepsis.
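As an illustration of the run charts mentioned above, the sketch below plots a unit-level metric against time with a median centerline, the usual starting point for detecting shifts and trends. The monthly door-to-antibiotic times are invented for the example.

```python
import statistics
import matplotlib.pyplot as plt

# Hypothetical monthly median door-to-antibiotic times (minutes).
months = list(range(1, 13))
minutes = [92, 88, 95, 81, 76, 74, 70, 66, 71, 63, 60, 58]

# A run chart is the raw series plus a median centerline.
median = statistics.median(minutes)
plt.plot(months, minutes, marker="o", label="door-to-antibiotic time")
plt.axhline(median, linestyle="--", label=f"median = {median} min")
plt.xlabel("Month")
plt.ylabel("Minutes")
plt.title("Run chart: time to antibiotics in sepsis (illustrative data)")
plt.legend()
plt.show()
```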
MESA: Message-Based System Analysis Using Runtime Verification
NASA Technical Reports Server (NTRS)
Shafiei, Nastaran; Tkachuk, Oksana; Mehlitz, Peter
2017-01-01
In this paper, we present a novel approach and framework for runtime verification of large, safety-critical messaging systems. This work was motivated by verifying the System Wide Information Management (SWIM) project of the Federal Aviation Administration (FAA). SWIM provides live air traffic, site and weather data streams for the whole National Airspace System (NAS), which can easily amount to several hundred messages per second. Such safety-critical systems cannot be instrumented; therefore, verification and monitoring have to happen using a nonintrusive approach, by connecting to a variety of network interfaces. Due to the large number of potential properties to check, the verification framework needs to support efficient formulation of properties with a suitable Domain Specific Language (DSL). Our approach is to utilize a distributed system that is geared towards connectivity and scalability and to interface it at the message queue level to a powerful verification engine. We implemented our approach in the tool called MESA: Message-Based System Analysis, which leverages the open source projects RACE (Runtime for Airspace Concept Evaluation) and TraceContract. RACE is a platform for instantiating and running highly concurrent and distributed systems, and it enables connectivity to SWIM and scalability. TraceContract is a runtime verification tool that allows for checking traces against properties specified in a powerful DSL. We applied our approach to verify a SWIM service against several requirements. We found errors such as duplicate and out-of-order messages.
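The duplicate and out-of-order message properties mentioned in the abstract are the kind of trace checks such a monitor performs. TraceContract itself is a Scala DSL; the sketch below is a much-simplified Python analogue, with invented message fields, shown only to make the idea concrete.

```python
from dataclasses import dataclass

@dataclass
class Message:
    msg_id: int   # hypothetical unique message identifier
    seq: int      # hypothetical per-stream sequence number

class DuplicateAndOrderMonitor:
    """Flags duplicate IDs and out-of-order sequence numbers in a stream."""
    def __init__(self):
        self.seen_ids = set()
        self.last_seq = -1
        self.violations = []

    def check(self, msg: Message):
        if msg.msg_id in self.seen_ids:
            self.violations.append(f"duplicate message {msg.msg_id}")
        self.seen_ids.add(msg.msg_id)
        if msg.seq < self.last_seq:
            self.violations.append(f"out-of-order message {msg.msg_id}")
        self.last_seq = max(self.last_seq, msg.seq)

monitor = DuplicateAndOrderMonitor()
for m in [Message(1, 1), Message(2, 2), Message(2, 3), Message(4, 1)]:
    monitor.check(m)
print(monitor.violations)  # flags duplicate id 2 and out-of-order id 4
```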
Semantic-Web Architecture for Electronic Discharge Summary Based on OWL 2.0 Standard.
Tahmasebian, Shahram; Langarizadeh, Mostafa; Ghazisaeidi, Marjan; Safdari, Reza
2016-06-01
A patient's electronic medical record contains all information related to treatment processes during hospitalization. One of the most important documents in this record is the record summary, which presents a summary of the whole treatment process and is used for subsequent treatments and other issues pertaining to treatment. With a suitable architecture for this document, it can also be used in other fields such as data mining or case-based decision making. In this study, a model for the patient's medical record summary is first suggested using a semantic web-based architecture. Then, based on a service-oriented architecture and using the Java programming language, a software solution was designed and run to generate medical record summaries with this structure; finally, new uses of this structure are explained. The study offers a structure for medical record summaries, along with corrective points within the semantic web, and provides software running in Java together with dedicated ontologies. After discussing the project with experts in medical/health data management and medical informatics as well as clinical experts, it became clear that the suggested design for the medical record summary, apart from covering many issues currently faced in medical records, also has many advantages, including its use in research projects and case-based decision making.
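To make the OWL-based approach concrete, the sketch below declares a tiny discharge-summary ontology fragment with rdflib. The namespace, class and property names are invented for illustration and do not reflect the ontology actually built in the study.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/discharge#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# A discharge-summary class with a diagnosis property (illustrative only).
g.add((EX.DischargeSummary, RDF.type, OWL.Class))
g.add((EX.Diagnosis, RDF.type, OWL.Class))
g.add((EX.hasDiagnosis, RDF.type, OWL.ObjectProperty))
g.add((EX.hasDiagnosis, RDFS.domain, EX.DischargeSummary))
g.add((EX.hasDiagnosis, RDFS.range, EX.Diagnosis))

print(g.serialize(format="turtle"))
```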
Unit 3, STA. 158+40 RB, Hinckson Run culvert-detail ...
Unit 3, STA. 158+40 RB, Hinckson Run culvert-detail - Johnstown Local Flood Protection Project, Beginning on Conemaugh River approx 3.8 miles downstream from confluence of Little Conemaugh & Stony Creek Rivers at Johnstown, Johnstown, Cambria County, PA
78 FR 76903 - Lockhart Power Company, Inc.; Notice of Availability of Draft Environmental Assessment
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-19
... [abbreviation-list fragment: Riverdale, Riverdale Development Venture, LLC; ROR, run-of-river; ROW, right-of-way] ... Lockhart Power would operate the project using a combination of run-of-river (ROR) and peaking modes...
The CDF Run II disk inventory manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hubbard, Paul; Lammel, Stephan
2001-11-02
The Collider Detector at Fermilab (CDF) experiment records and analyses proton-antiproton interactions at a center-of-mass energy of 2 TeV. Run II of the Fermilab Tevatron started in April of this year, and the run is expected to last over two years. One of CDF's main data handling strategies for Run II is to hide all tape access from the user and to facilitate sharing of data, and thus of disk space. A disk inventory manager was designed and developed over the past years to keep track of the data on disk, to coordinate user access to the data, and to stage data back from tape to disk as needed. The CDF Run II disk inventory manager consists of a server process, user and administrator command-line interfaces, and a library with the routines of the client API. Data are managed in filesets, which are groups of one or more files. The system keeps track of user access to the filesets and attempts to keep frequently accessed data on disk. Data that are not on disk are automatically staged back from tape as needed. For CDF the main staging method is based on the mt-tools package, as tapes are written according to the ANSI standard.
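The "keep frequently accessed filesets on disk, stage the rest back from tape" policy described above behaves like a least-recently-used cache. The toy sketch below captures that behaviour; it is loosely modelled on the description, not CDF's actual interface, and all names are invented.

```python
from collections import OrderedDict

class DiskInventoryManager:
    """Toy fileset cache: recently used filesets stay on disk,
    missing ones are staged back from tape, old ones are evicted."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.on_disk = OrderedDict()  # fileset name -> list of files

    def access(self, fileset, stage_from_tape):
        if fileset in self.on_disk:
            self.on_disk.move_to_end(fileset)  # frequently used stays
        else:
            if len(self.on_disk) >= self.capacity:
                evicted, _ = self.on_disk.popitem(last=False)
                print(f"freeing disk space: {evicted}")
            self.on_disk[fileset] = stage_from_tape(fileset)
        return self.on_disk[fileset]

mgr = DiskInventoryManager(capacity=2)
fake_tape = lambda name: [f"{name}/file0.root"]   # stand-in for tape staging
mgr.access("runII-jets", fake_tape)
mgr.access("runII-muons", fake_tape)
mgr.access("runII-jets", fake_tape)       # cache hit, freshness updated
mgr.access("runII-electrons", fake_tape)  # evicts runII-muons
```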
NASA Astrophysics Data System (ADS)
Steiger, Damian S.; Haener, Thomas; Troyer, Matthias
Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.
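ProjectQ's published Python-embedded interface looks like the following minimal example, which builds a Bell pair, lets the compiler engines translate it for the backend (by default ProjectQ's own simulator), and reads out the measurement.

```python
from projectq import MainEngine
from projectq.ops import All, CNOT, H, Measure

# Default MainEngine: standard compiler engines plus the simulator backend.
eng = MainEngine()
qubits = eng.allocate_qureg(2)

H | qubits[0]                 # put the first qubit in superposition
CNOT | (qubits[0], qubits[1]) # entangle the pair
All(Measure) | qubits         # measure both qubits

eng.flush()                   # trigger compilation and execution
print("measured:", [int(q) for q in qubits])  # 00 or 11, each ~50%
```

Swapping the backend (e.g., for a hardware interface or a resource counter) changes the compilation target without changing the algorithm code, which is the point of the compiler design described above.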
JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.
Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J
2010-04-01
The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.
Street-running LRT may not affect a neighbour's sleep
NASA Astrophysics Data System (ADS)
Sarkar, S. K.; Wang, J.-N.
2003-10-01
A comprehensive dynamic finite difference model and analysis was conducted simulating LRT running at a speed of 24 km/h on a city street. The analysis predicted ground-borne vibration (GBV) to remain at or below the FTA criterion of an RMS velocity of 72 VdB (0.004 in/s) at the nearest residence. In the model, site-specific stratigraphy and dynamic soil and rock properties determined from in situ testing were used. The dynamic input load from an LRT vehicle running at 24 km/h was computed from actual measured data from Portland, Oregon's West Side LRT project, which used a low-floor vehicle similar to the one proposed for the NJ Transit project. During initial trial runs of the LRT system, vibration and noise measurements were taken at three street locations while the vehicles were running at about the 20-24 km/h operating speed. The measurements confirmed the predictions and satisfied FTA criteria for noise and vibration for frequent events. This paper presents the analytical model, GBV predictions, site measurement data and a comparison with the FTA criterion.
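For context, the two forms of the FTA criterion quoted above are consistent, assuming the customary FTA reference velocity of 10^-6 in/s RMS for the VdB scale:

\[
L_v \;=\; 20\log_{10}\!\frac{v}{v_{\mathrm{ref}}}
\;=\; 20\log_{10}\!\frac{0.004\ \mathrm{in/s}}{10^{-6}\ \mathrm{in/s}}
\;=\; 20\log_{10}(4000) \;\approx\; 72\ \mathrm{VdB}.
\]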
Structural development and web service based sensitivity analysis of the Biome-BGC MuSo model
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Balogh, János; Churkina, Galina; Haszpra, László; Horváth, Ferenc; Ittzés, Péter; Ittzés, Dóra; Ma, Shaoxiu; Nagy, Zoltán; Pintér, Krisztina; Barcza, Zoltán
2014-05-01
Studying greenhouse gas exchange, mainly the carbon dioxide sink and source character of ecosystems, is still a highly relevant research topic in biogeochemistry. During the past few years research has focused on managed ecosystems, because human intervention plays an important role in shaping the land surface through agricultural management, land use change, and other practices. In spite of considerable developments, current biogeochemical models still have difficulty adequately quantifying the greenhouse gas exchange processes of managed ecosystems. It is therefore an important task to develop and test process-based biogeochemical models. Biome-BGC is a widely used, popular biogeochemical model that simulates the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems. Biome-BGC was originally developed by the Numerical Terradynamic Simulation Group (NTSG) of the University of Montana (http://www.ntsg.umt.edu/project/biome-bgc), and several other researchers have used and modified it in the past. Our research group extended Biome-BGC version 4.1.1 to substantially improve the model's ability to simulate the carbon and water cycle in real managed ecosystems. The modifications included structural improvements of the model (e.g., implementation of a multilayer soil module, drought-related plant senescence, and improved model phenology). Besides these improvements, management modules and annually varying options were introduced and implemented (mowing, grazing, planting, harvest, ploughing, application of fertilizers, forest thinning). Dynamic (annually varying) whole-plant mortality was also enabled in the model to support more realistic simulation of forest stand development and natural disturbances. In the most recent model version, separate pools have been defined for fruit. The model version that contains all former and new developments is referred to as Biome-BGC MuSo (Biome-BGC with multi-soil layer). Within the frame of the BioVeL project (http://www.biovel.eu), an open source and domain-independent scientific workflow management system (http://www.taverna.org.uk) is used to support 'in silico' experimentation and easy applicability of different models, including Biome-BGC MuSo. Workflows can be built upon functionally linked sets of web services, such as retrieval of meteorological datasets and other parameters; preparation of single-run or spatial-run model simulations; desktop-grid-based Monte Carlo experiments with parallel processing; and model sensitivity analysis. The newly developed, Monte Carlo experiment based sensitivity analysis is described in this study, and results are presented on differences in the sensitivity of the original and the developed Biome-BGC model.
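A Monte Carlo sensitivity analysis of the kind described has a simple skeleton: sample parameters, run the model for each sample, then relate each parameter to the output. The sketch below uses a stand-in surrogate function and invented parameter names; a real experiment would launch Biome-BGC MuSo through the workflow services instead.

```python
import random

def model_surrogate(params):
    """Stand-in for a Biome-BGC MuSo run; returns a fake annual flux.
    Parameter names and the functional form are invented for illustration."""
    return 2.0 * params["lai_max"] - 0.5 * params["soil_depth"] + random.gauss(0, 0.1)

# Monte Carlo experiment: sample parameters uniformly, run the "model".
random.seed(42)
samples = []
for _ in range(1000):
    p = {"lai_max": random.uniform(2, 6), "soil_depth": random.uniform(0.5, 2.0)}
    samples.append((p, model_surrogate(p)))

def correlation(xs, ys):
    """Pearson correlation, used here as a crude sensitivity index."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

for name in ("lai_max", "soil_depth"):
    r = correlation([p[name] for p, _ in samples], [o for _, o in samples])
    print(f"sensitivity of output to {name}: r = {r:.2f}")
```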
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pentz, David L.; Stoll, Ralph H.; Greeves, John T.
2012-07-01
PRISM (Prioritization Risk Integration Simulation Model) is a computer model developed to support the Department of Energy's Office of Environmental Management (DOE-EM) in its mission to clean up the environmental legacy of the Nation's nuclear weapons materials production complex. PRISM provides a comprehensive, fully integrated planning tool that can tie together DOE-EM's projects. It is designed to help DOE managers develop sound, risk-informed business practices and defend program decisions, and it provides a better ability to understand and manage programmatic risks. The underlying concept of PRISM is that DOE-EM 'owns' a portfolio of environmental legacy obligations (ELOs), and that its mission is to transform the ELOs from their current conditions to acceptable conditions in the most effective way possible. There are many types of ELOs: contaminated soils and groundwater plumes, disused facilities awaiting D and D, and various types of wastes awaiting processing or disposal. For a given suite of planned activities, PRISM simulates the outcomes as they play out over time, allowing for all key identified uncertainties and risk factors. Each contaminated building, land area and waste stream is tracked from cradle to grave, and all of the linkages affecting different waste streams are captured. The progression of activities is fully dynamic, reflecting DOE-EM's prioritization approaches, precedence requirements, available funding, and the consequences of risks and uncertainties. The top level of PRISM is the end-user interface, which allows rapid evaluation of alternative scenarios and viewing of results in a variety of useful ways. PRISM is a fully probabilistic model, allowing the user to specify uncertainties in input data (such as the magnitude of an existing groundwater plume, or the total cost to complete a planned activity) as well as specific risk events that might occur. PRISM is based on the GoldSim software that is widely used for risk and performance assessment calculations. PRISM can be run in a deterministic mode, which quickly provides an estimate of the most likely results of a given plan; alternatively, the model can be run probabilistically in a Monte Carlo mode, exploring the risks and uncertainties in the system and producing probability distributions for the different performance measures. The PRISM model demonstrates how EM can evaluate a portfolio of ELOs and transform them from their current conditions to acceptable conditions using different strategic approaches. This scope of work for the PRISM process and the development of a dynamic simulation model are a logical extension of the GoldSim simulation software used by OCRWM to assess the long-term performance of the Yucca Mountain Project and by NNSA to assess project risk at its sites. Systems integration modeling will promote better understanding of all project risks, technical and nontechnical, and more defensible decision-making for complex projects with significant uncertainties. It can provide effective visual communication and rapid adaptation during interactions with stakeholders (Administration, Congress, State, Local, and NGOs), and it allows rapid assessment of alternative management approaches. (authors)
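The deterministic-versus-Monte-Carlo distinction described above can be illustrated in a few lines. The sketch below treats three invented cleanup activities as sequential with triangular duration uncertainty; PRISM itself is a GoldSim model with far richer structure, so this is only a schematic of the two run modes.

```python
import random

# Invented activities with (min, most likely, max) durations in years.
ACTIVITIES = {
    "groundwater plume remediation": (4.0, 6.0, 10.0),
    "facility deactivation and decommissioning": (3.0, 5.0, 9.0),
    "waste processing and disposal": (2.0, 3.0, 7.0),
}

def total_duration(durations):
    """Activities are assumed strictly sequential for simplicity."""
    return sum(durations)

# Deterministic mode: most likely values only.
det = total_duration(mode for _, mode, _ in ACTIVITIES.values())
print(f"deterministic estimate: {det:.1f} years")

# Monte Carlo mode: triangular draws give a distribution of outcomes.
random.seed(1)
runs = sorted(
    total_duration(random.triangular(lo, hi, mode)
                   for lo, mode, hi in ACTIVITIES.values())
    for _ in range(10000)
)
print(f"median: {runs[5000]:.1f} years, 95th percentile: {runs[9500]:.1f} years")
```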
ERIC Educational Resources Information Center
Jones, Sally Ann
2011-01-01
In this article, I report on a small project involving the use of guided reading groups, levelled texts and running records in a multilingual primary school in Singapore. I focus on running records and ask whether their use is suitable pedagogically and practically for the Singaporean context. The analysis of 22 records of primary one and primary…
Morpheus Alhat Tether Test Preparations
2014-03-27
CAPE CANAVERAL, Fla. – Engineers and technicians prepare the Project Morpheus prototype lander for a tether test near a new launch site at the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. The launch pad was moved to a different location at the landing facility to support the next phase of flight testing. Project Morpheus integrates NASA's automated landing and hazard avoidance technology, or ALHAT, with an engine that runs on liquid oxygen and methane, or green propellants, into a fully operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus' ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA's Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Ben Smegelsky
2013-12-10
CAPE CANAVERAL, Fla. – Preparations are underway to prepare the Project Morpheus prototype lander for its first free flight test at the north end of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. Testing of the prototype lander was performed at NASA’s Johnson Space Center in Houston in preparation for tethered and free flight testing at Kennedy. Project Morpheus integrates NASA’s automated landing and hazard avoidance technology, or ALHAT, with an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to asteroids and other planetary surfaces. The landing facility will provide the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov. Photo credit: NASA/Kim Shiflett
2013-12-17
CAPE CANAVERAL, Fla. -- A technician prepares the Project Morpheus prototype lander for a second free flight test at the north end of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. Testing of the prototype lander was performed at NASA’s Johnson Space Center in Houston in preparation for tethered and free flight testing at Kennedy. Project Morpheus integrates NASA’s automated landing and hazard avoidance technology, or ALHAT, with an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to other planetary surfaces. The landing facility will provide the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov. Photo credit: NASA/Dimitri Gerondidakis
Morpheus Alhat Tether Test Preparations
2014-03-27
CAPE CANAVERAL, Fla. – NASA's Project Morpheus prototype lander is positioned near a new launch site at the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida for a tether test. The launch pad was moved to a different location at the landing facility to support the next phase of flight testing. Project Morpheus integrates NASA's automated landing and hazard avoidance technology, or ALHAT, with an engine that runs on liquid oxygen and methane, or green propellants, into a fully operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus' ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA's Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Ben Smegelsky
2013-12-17
CAPE CANAVERAL, Fla. -- Preparations are underway to prepare the Project Morpheus prototype lander for a second free flight test at the north end of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. Testing of the prototype lander was performed at NASA’s Johnson Space Center in Houston in preparation for tethered and free flight testing at Kennedy. Project Morpheus integrates NASA’s automated landing and hazard avoidance technology, or ALHAT, with an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to other planetary surfaces. The landing facility will provide the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov. Photo credit: NASA/Dimitri Gerondidakis
2013-12-17
CAPE CANAVERAL, Fla. -- Engineers and technicians prepare the Project Morpheus prototype lander for a second free flight test at the north end of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. Testing of the prototype lander was performed at NASA’s Johnson Space Center in Houston in preparation for tethered and free flight testing at Kennedy. Project Morpheus integrates NASA’s automated landing and hazard avoidance technology, or ALHAT, with an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to other planetary surfaces. The landing facility will provide the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov. Photo credit: NASA/Dimitri Gerondidakis
2013-12-10
CAPE CANAVERAL, Fla. – The first free flight of the Project Morpheus prototype lander begins as the lander's engine fires at the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. Testing of the prototype lander was performed at NASA's Johnson Space Center in Houston in preparation for tethered and free flight testing at Kennedy. Project Morpheus integrates NASA's automated landing and hazard avoidance technology, or ALHAT, with an engine that runs on liquid oxygen and methane, or green propellants, into a fully operational lander that could deliver cargo to asteroids and other planetary surfaces. The landing facility will provide the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus' ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA's Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov. Photo credit: NASA/Kim Shiflett
2013-12-10
CAPE CANAVERAL, Fla. – Technicians and engineers prepare the Project Morpheus prototype lander for its first free flight test at the north end of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. Testing of the prototype lander was performed at NASA’s Johnson Space Center in Houston in preparation for tethered and free flight testing at Kennedy. Project Morpheus integrates NASA’s automated landing and hazard avoidance technology, or ALHAT, with an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to asteroids and other planetary surfaces. The landing facility will provide the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov. Photo credit: NASA/Kim Shiflett
Morpheus Alhat Tether Test Preparations
2014-03-27
CAPE CANAVERAL, Fla. – A technician prepares the Project Morpheus prototype lander for a tether test near a new launch site at the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. The launch pad was moved to a different location at the landing facility to support the next phase of flight testing. Project Morpheus integrates NASA's automated landing and hazard avoidance technology, or ALHAT, with an engine that runs on liquid oxygen and methane, or green propellants, into a fully operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus' ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA's Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Ben Smegelsky
Unit 3, STA. 158+40 RB, Hinckson Run culvert context ...
Unit 3, STA. 158+40 RB, Hinckson Run culvert context - Johnstown Local Flood Protection Project, Beginning on Conemaugh River approx 3.8 miles downstream from confluence of Little Conemaugh & Stony Creek Rivers at Johnstown, Johnstown, Cambria County, PA
1990-08-01
Species table fragments: rainbow trout (steelhead), Oncorhynchus mykiss: managed species; sea-run in the Nisqually and tributaries, landlocked in lakes. Kokanee (landlocked sockeye salmon), Oncorhynchus nerka: managed species; occurs in American Lake, where it is managed per the plan (Directorate of Engineering and Housing 1984). Oncorhynchus clarki: migratory; sea-run in the Nisqually and tributaries, maintained in lakes.
ERIC Educational Resources Information Center
Skilton, Paul F.; Forsyth, David; White, Otis J.
2008-01-01
Building from research on learning in workplace project teams, the authors work forward from the idea that the principal condition enabling integration learning in student team projects is project complexity. Recognizing the challenges of developing and running complex student projects, the authors extend theory to propose that the experience of…
Statistical evaluation of PACSTAT random number generation capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, G.F.; Toland, M.R.; Harty, H.
1988-05-01
This report summarizes the work performed in verifying the general-purpose Monte Carlo driver program PACSTAT. The main objective of the work was to verify the performance of PACSTAT's random number generation capabilities. Secondary objectives were to document (using controlled configuration management procedures) changes made in PACSTAT at Pacific Northwest Laboratory, and to assure that PACSTAT input and output files satisfy quality assurance traceability constraints. Upon receipt of the PRIME version of the PACSTAT code from the Basalt Waste Isolation Project, Pacific Northwest Laboratory staff converted the code to run on Digital Equipment Corporation (DEC) VAXs. The modifications to PACSTAT were implemented using the WITNESS configuration management system, with the modifications themselves intended to make the code as portable as possible. Certain modifications were made to make the PACSTAT input and output files conform to quality assurance traceability constraints. 10 refs., 17 figs., 6 tabs.
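Verifying a generator's uniformity typically relies on statistical goodness-of-fit tests. The report's actual test suite is not reproduced here; the sketch below shows one such check, a Kolmogorov-Smirnov test against the uniform distribution, applied to Python's built-in generator purely as an illustration.

```python
import random
from scipy import stats

# Draw a sample from the generator under test (illustrative only).
random.seed(123)
sample = [random.random() for _ in range(10000)]

# Kolmogorov-Smirnov test against the uniform(0, 1) distribution.
ks_stat, p_value = stats.kstest(sample, "uniform")
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.3f}")
# A tiny p-value would signal departure from uniformity; a healthy
# generator should pass at conventional significance levels.
```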
Software development infrastructure for the HYBRID modeling and simulation project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epiney, Aaron S.; Kinoshita, Robert A.; Kim, Jong Suk
One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure tries to find the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety "research and development" software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as application development continues. Despite the low quality requirement level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python script called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner and generates and runs unit tests from Modelica scripts written by developers. In order to ensure effective communication between the different national laboratories, a biweekly videoconference has been set up, where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list, to which everybody can send emails that will be received by the collective of the developers and managers involved in the project. Thirdly, to exchange documents quickly, a SharePoint directory has been set up. SharePoint allows teams and organizations to intelligently share, and collaborate on, content from anywhere.
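A minimal invocation consistent with BuildingsPy's documented regression-test runner might look like the sketch below. It assumes Dymola and BuildingsPy are installed, the library root path is a placeholder, and method details may differ between BuildingsPy versions, so treat this as indicative rather than definitive.

```python
# Assumes Dymola and BuildingsPy are installed; the path is a placeholder.
from buildingspy.development.regressiontest import Tester

ut = Tester()
ut.setLibraryRoot(".")  # root of the Modelica library under test
retval = ut.run()       # generate and run unit tests, compare against references
print("regression tests passed" if retval == 0 else "failures detected")
```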
Haux, Reinhold; Hein, Andreas; Kolb, Gerald; Künemund, Harald; Eichelberg, Marco; Appell, Jens-E; Appelrath, H-Jürgen; Bartsch, Christian; Bauer, Jürgen M; Becker, Marcus; Bente, Petra; Bitzer, Jörg; Boll, Susanne; Büsching, Felix; Dasenbrock, Lena; Deparade, Riana; Depner, Dominic; Elbers, Katharina; Fachinger, Uwe; Felber, Juliane; Feldwieser, Florian; Forberg, Anne; Gietzelt, Matthias; Goetze, Stefan; Gövercin, Mehmet; Helmer, Axel; Herzke, Tobias; Hesselmann, Tobias; Heuten, Wilko; Huber, Rainer; Hülsken-Giesler, Manfred; Jacobs, Gerold; Kalbe, Elke; Kerling, Arno; Klingeberg, Timo; Költzsch, Yvonne; Lammel-Polchau, Christopher; Ludwig, Wolfram; Marschollek, Michael; Martens, Birger; Meis, Markus; Meyer, Eike Michael; Meyer, Jochen; Meyer Zu Schwabedissen, Hubertus; Moritz, Niko; Müller, Heiko; Nebel, Wolfgang; Neyer, Franz J; Okken, Petra-Karin; Rahe, Julia; Remmers, Hartmut; Rölker-Denker, Lars; Schilling, Meinhard; Schöpke, Birte; Schröder, Jens; Schulze, Gisela C; Schulze, Mareike; Siltmann, Sina; Song, Bianying; Spehr, Jens; Steen, Enno-Edzard; Steinhagen-Thiessen, Elisabeth; Tanschus, Nele-Marie; Tegtbur, Uwe; Thiel, Andreas; Thoben, Wilfried; van Hengel, Peter; Wabnik, Stefan; Wegel, Sandra; Wilken, Olaf; Winkelbach, Simon; Wist, Thorben; Wolf, Klaus-Hendrik; Wolf, Lars; Zokoll-van der Laan, Melanie
2014-01-01
Many societies across the world are confronted with demographic changes, usually related to increased life expectancy and, often, relatively low birth rates. Information and communication technologies (ICT) may help adequately support senior citizens in aging societies with respect to quality of life and the quality and efficiency of health care processes. To investigate whether new information and communication technologies can contribute to keeping, or even improving, quality of life, health and self-sufficiency in ageing societies through new ways of living and new forms of care, the Lower Saxony Research Network Design of Environments for Ageing (GAL) was established as a five-year research project running from 2008 to 2013. Ambient-assisted living (AAL) technologies in personal and home environments were especially important. In this article we report on the GAL project and present some of its major outcomes after five years of research. We report on major challenges and lessons learned in running and organizing such a large, inter- and multidisciplinary project, and we discuss GAL in the context of related research projects. With respect to research outcomes, we have, for example, gained new knowledge about multimodal and speech-based human-machine interaction mechanisms for persons with functional restrictions, and we have identified new methods and developed new algorithms for identifying activities of daily life and detecting acute events, particularly falls. A total of 79 apartments of senior citizens were equipped with specific "GAL technology", providing new insights into the use of sensor data for smart homes. Major challenges included dealing constructively with GAL's highly inter- and multidisciplinary character, carrying research on GAL's application scenarios from theory and lab experimentation to field tests, and the complexity of organizing and, in our view, successfully managing such a large project. Overall, from our point of view, the GAL research network has been run successfully and has achieved its major research objectives. Since we now know much more about how and where to use AAL technologies for new environments of living and new forms of care, future research can focus on systematically planned studies that scientifically explore the benefits of AAL technologies for senior citizens, in particular with respect to quality of life and the quality and efficiency of health care.
Elliott, Joshua; Deryng, Delphine; Müller, Christoph; Frieler, Katja; Konzmann, Markus; Gerten, Dieter; Glotter, Michael; Flörke, Martina; Wada, Yoshihide; Best, Neil; Eisner, Stephanie; Fekete, Balázs M.; Folberth, Christian; Foster, Ian; Gosling, Simon N.; Haddeland, Ingjerd; Khabarov, Nikolay; Ludwig, Fulco; Masaki, Yoshimitsu; Olin, Stefan; Rosenzweig, Cynthia; Ruane, Alex C.; Satoh, Yusuke; Schmid, Erwin; Stacke, Tobias; Tang, Qiuhong; Wisser, Dominik
2014-01-01
We compare ensembles of water supply and demand projections from 10 global hydrological models and six global gridded crop models. These are produced as part of the Inter-Sectoral Impacts Model Intercomparison Project, with coordination from the Agricultural Model Intercomparison and Improvement Project, and driven by outputs of general circulation models run under representative concentration pathway 8.5 as part of the Fifth Coupled Model Intercomparison Project. Models project that direct climate impacts to maize, soybean, wheat, and rice involve losses of 400–1,400 Pcal (8–24% of present-day total) when CO2 fertilization effects are accounted for or 1,400–2,600 Pcal (24–43%) otherwise. Freshwater limitations in some irrigated regions (western United States; China; and West, South, and Central Asia) could necessitate the reversion of 20–60 Mha of cropland from irrigated to rainfed management by end-of-century, and a further loss of 600–2,900 Pcal of food production. In other regions (northern/eastern United States, parts of South America, much of Europe, and South East Asia) surplus water supply could in principle support a net increase in irrigation, although substantial investments in irrigation infrastructure would be required. PMID:24344283
An interactive program for computer-aided map design, display, and query: EMAPKGS2
Pouch, G.W.
1997-01-01
EMAPKGS2 is a user-friendly, PC-based electronic mapping tool for use in hydrogeologic exploration and appraisal. EMAPKGS2 allows the analyst to construct maps interactively from data stored in a relational database, perform point-oriented spatial queries such as locating all wells within a specified radius, perform geographic overlays, and export the data to other programs for further analysis. EMAPKGS2 runs under Microsoft Windows 3.1 and compatible operating systems. EMAPKGS2 is a public domain program available from the Kansas Geological Survey. EMAPKGS2 is the centerpiece of WHEAT, the Windows-based Hydrogeologic Exploration and Appraisal Toolkit, a suite of user-friendly Microsoft Windows programs for natural resource exploration and management. The principal goals in the development of WHEAT have been ease of use, hardware independence, low cost, and end-user extensibility. WHEAT's native data format is a Microsoft Access database. WHEAT stores a feature's geographic coordinates as attributes so they can be accessed easily by the user. The WHEAT programs are designed to be used in conjunction with other Microsoft Windows software to allow the natural resource scientist to perform work easily and effectively. WHEAT and EMAPKGS have been used at several of Kansas' Groundwater Management Districts and the Kansas Geological Survey on groundwater management operations, groundwater modeling projects, and geologic exploration projects. © 1997 Elsevier Science Ltd.
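The point-oriented radius query mentioned above reduces to a distance filter over stored coordinates. The sketch below shows the idea with a great-circle distance; the well records are invented, and EMAPKGS2 itself would of course read them from its relational database rather than a Python list.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3959 * asin(sqrt(a))  # 3959 miles = mean Earth radius

# Hypothetical well records standing in for database rows.
wells = [
    {"id": "W-101", "lat": 37.98, "lon": -97.33},
    {"id": "W-102", "lat": 38.10, "lon": -97.60},
    {"id": "W-103", "lat": 37.70, "lon": -97.30},
]

center_lat, center_lon = 37.95, -97.35
radius_miles = 10.0
hits = [w for w in wells
        if haversine_miles(center_lat, center_lon, w["lat"], w["lon"]) <= radius_miles]
print([w["id"] for w in hits])  # wells within 10 miles of the center point
```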
Essential information: Uncertainty and optimal control of Ebola outbreaks
Li, Shou-Li; Bjornstad, Ottar; Ferrari, Matthew J.; Mummah, Riley; Runge, Michael C.; Fonnesbeck, Christopher J.; Tildesley, Michael J.; Probert, William J. M.; Shea, Katriona
2017-01-01
Early resolution of uncertainty during an epidemic outbreak can lead to rapid and efficient decision making, provided that the uncertainty affects prioritization of actions. The wide range in caseload projections for the 2014 Ebola outbreak caused great concern and debate about the utility of models. By coding and running 37 published Ebola models with five candidate interventions, we found that, despite this large variation in caseload projection, the ranking of management options was relatively consistent. Reducing funeral transmission and reducing community transmission were generally ranked as the two best options. Value of information (VoI) analyses show that caseloads could be reduced by 11% by resolving all model-specific uncertainties, with information about model structure accounting for 82% of this reduction and uncertainty about caseload only accounting for 12%. Our study shows that the uncertainty that is of most interest epidemiologically may not be the same as the uncertainty that is most relevant for management. If the goal is to improve management outcomes, then the focus of study should be to identify and resolve those uncertainties that most hinder the choice of an optimal intervention. Our study further shows that simplifying multiple alternative models into a smaller number of relevant groups (here, with shared structure) could streamline the decision-making process and may allow for a better integration of epidemiological modeling and decision making for policy.
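The value-of-information logic summarized above compares the best single action under model uncertainty with the best action once the true model is known. The toy sketch below makes that comparison explicit; the caseload numbers, model set, and equal model weights are all invented for illustration, not taken from the study.

```python
# Hypothetical projected caseloads: rows are models (equally weighted),
# columns are candidate interventions.
CASELOADS = {
    "model A": {"reduce funerals": 120, "reduce community": 150, "vaccinate": 200},
    "model B": {"reduce funerals": 300, "reduce community": 260, "vaccinate": 420},
    "model C": {"reduce funerals": 180, "reduce community": 190, "vaccinate": 240},
}
models = list(CASELOADS)
actions = ["reduce funerals", "reduce community", "vaccinate"]

# Without resolving model uncertainty: commit to one action for all models.
expected = {a: sum(CASELOADS[m][a] for m in models) / len(models) for a in actions}
best_fixed = min(expected.values())

# With perfect information about the true model: best action per model.
best_per_model = sum(min(CASELOADS[m].values()) for m in models) / len(models)

voi = best_fixed - best_per_model
print(f"expected caseload, best fixed action: {best_fixed:.0f}")
print(f"expected caseload with perfect model information: {best_per_model:.0f}")
print(f"value of resolving model uncertainty: {voi:.0f} cases avoided")
```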
NASA Astrophysics Data System (ADS)
Casajus, A.; Ciba, K.; Fernandez, V.; Graciani, R.; Hamar, V.; Mendez, V.; Poss, S.; Sapunov, M.; Stagni, F.; Tsaregorodtsev, A.; Ubeda, M.
2012-12-01
The DIRAC Project was initiated to provide a data processing system for the LHCb Experiment at CERN. It provides all the necessary functionality and performance to satisfy the current and projected future requirements of the LHCb Computing Model. A considerable restructuring of the DIRAC software was undertaken in order to turn it into a general-purpose framework for building distributed computing systems that can be used by various user communities in High Energy Physics and other scientific application domains. The CLIC and ILC-SID detector projects have started to use DIRAC for their data production systems. The Belle Collaboration at KEK, Japan, has adopted a Computing Model based on the DIRAC system for its second phase starting in 2015. The CTA Collaboration uses DIRAC for its data analysis tasks. A large number of other experiments are starting to use DIRAC or are evaluating it for their data processing tasks. DIRAC services are included as part of the production infrastructure of the GISELA Latin America grid. Similar services are provided for the users of the France-Grilles and IBERGrid National Grid Initiatives in France and Spain, respectively. The new communities using DIRAC have started to provide important contributions to its functionality. Recent additions include support for Amazon EC2 computing resources as well as other cloud management systems; a versatile File Replica Catalog with file metadata capabilities; and support for running MPI jobs in the pilot-based Workload Management System. Integration with existing application web portals, such as WS-PGRADE, is demonstrated. In this paper we describe the current status of the DIRAC Project, recent developments of its framework and functionality, and the status of the rapidly evolving community of DIRAC users.
NASA Astrophysics Data System (ADS)
Tchiguirinskaia, Ioulia; Gires, Auguste; Vicari, Rosa; Schertzer, Daniel; Maksimovic, Cedo
2013-04-01
The combined effects of climate change and increasing urbanization call for a change of paradigm in the planning, maintenance and management of new urban developments, and in the retrofitting of existing ones, to maximize ecosystem services and increase resilience to adverse climate change effects. This presentation will discuss synergies of the EU Climate-KIC Innovation Blue Green Dream (BGD) Project in promoting the BGD demonstration and training sites established in participating European countries. The BGD demonstration and training sites show clear benefits when blue and green infrastructures are considered together, and they present a unique opportunity for community learning and dissemination. Their development and operation act as a hub for engineers, architects, planners and modellers to come together at the design and implementation stage. This process, captured in a variety of media, creates a corpus of knowledge anchored in specific examples of different scales, types and dimensions. During the EU Climate-KIC Innovation Blue Green Dream Project, this corpus of knowledge will be used to develop dissemination and training materials whose content will be customised to fit urgent societal needs.
NASA Astrophysics Data System (ADS)
Meertens, C. M.; Boler, F. M.; Ertz, D. J.; Mencin, D.; Phillips, D.; Baker, S.
2017-12-01
UNAVCO, in its role as an NSF facility for geodetic infrastructure and data, has succeeded for over two decades using on-premises infrastructure, and while the promise of cloud-based infrastructure is well established, significant questions remain about the suitability of such infrastructure for facility-scale services. Primarily through the GeoSciCloud award from NSF EarthCube, UNAVCO is investigating the costs, advantages, and disadvantages of providing its geodetic data and services in the cloud versus using UNAVCO's on-premises infrastructure (IRIS is a collaborator on the project and is performing its own suite of investigations). In contrast to the 2-3 year time scale of the research cycle, the time scale of operation and planning for NSF facilities is a minimum of five years, and for some services extends to a decade or more. Planning for on-premises infrastructure is deliberate, and migrations typically take months to years to fully implement. Migrations to a cloud environment can only go forward with similarly deliberate planning and understanding of all costs and benefits. The EarthCube GeoSciCloud project is intended to address the uncertainties of facility-level operations in the cloud. Investigations are being performed in a commercial cloud environment (Amazon AWS) during the first year of the project and in a private cloud environment (the NSF XSEDE resource at the Texas Advanced Computing Center) during the second year. These investigations are expected to illuminate the potential as well as the limitations of running facility-scale production services in the cloud. The work includes running cloud-based services parallel and equivalent to on-premises services, including: data serving via FTP from a large data store, operation of a metadata database, production-scale processing of multiple months of geodetic data, web-services delivery of quality-checked data and products, large-scale compute services for event post-processing, and serving real-time data from a network of 700-plus GPS stations. The evaluation is based on a suite of metrics that we have developed to elucidate the effectiveness of cloud-based services in price, performance, and management. Services are currently running in AWS and evaluation is underway.
Radio Synthesis Imaging - A High Performance Computing and Communications Project
NASA Astrophysics Data System (ADS)
Crutcher, Richard M.
The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long-distance distributed computing. Finally, the project is developing 2D and 3D visualization software as part of the international AIPS++ project. This research and development project is being carried out by a team of experts in radio astronomy, algorithm development for massively parallel architectures, high-speed networking, database management, and Thinking Machines Corporation personnel. The development of this complete software, distributed computing, and data archive and library solution to the radio astronomy computing problem will advance our expertise in high performance computing and communications technology and the application of these techniques to astronomical data processing.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) ENHANCED PRUDENTIAL STANDARDS (REGULATION YY) Company-Run Stress Test Requirements for Banking... on the first day of a stress test cycle (on October 1) over which the relevant projections extend. (k... in the company-run stress tests, including, but not limited to, baseline, adverse, and severely...
Following a series of acid mine drainage (AMD) projects funded largely by EPA’s Clean Water Act Section 319 non-point source program, the pH level in Aaron Run is meeting Maryland’s water quality standard – and the brook trout are back.
An Enhanced Convective Forecast (ECF) for the New York TRACON Area
NASA Technical Reports Server (NTRS)
Wheeler, Mark; Stobie, James; Gillen, Robert; Jedlovec, Gary; Sims, Danny
2008-01-01
In an effort to relieve summer-time congestion in the NY Terminal Radar Approach Control (TRACON) area, the FAA is testing an enhanced convective forecast (ECF) product. The test began in June 2008 and is scheduled to run through early September. The ECF is updated every two hours, right before the Air Traffic Control System Command Center (ATCSCC) national planning telcon. It is intended to be used by traffic managers throughout the National Airspace System (NAS) and airline dispatchers to supplement information from the Collaborative Convective Forecast Product (CCFP) and the Corridor Integrated Weather System (CIWS). The ECF begins where the current CIWS forecast ends at 2 hours and extends out to 12 hours. Unlike the CCFP, it is a detailed deterministic forecast with no areal coverage limits. It is created by an ENSCO forecaster using a variety of guidance products, including the Weather Research and Forecasting (WRF) model. This is the same version of the WRF that ENSCO runs over the Florida peninsula in support of launch operations at the Kennedy Space Center. For this project, the WRF model domain has been shifted to the Northeastern US. Several products from the NASA SPoRT group are also used by the ENSCO forecaster. In this paper we will provide examples of the ECF products and discuss individual cases of traffic management actions using ECF guidance.
FY17 ASC CSSE L2 Milestone 6018: Power Usage Characteristics of Workloads Running on Trinity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pedretti, Kevin
The overall goal of this work was to utilize the Advanced Power Management (APM) capabilities of the ATS-1 Trinity platform to understand the power usage behavior of ASC workloads running on Trinity and gain insight into the potential for utilizing power management techniques on future ASC platforms.
EMSODEV and EPOS-IP: key findings for effective management of EU research infrastructure projects
NASA Astrophysics Data System (ADS)
Materia, Paola; Bozzoli, Sabrina; Beranzoli, Laura; Cocco, Massimo; Favali, Paolo; Freda, Carmela; Sangianantoni, Agata
2017-04-01
EMSO (European Multidisciplinary Seafloor and water-column Observatory, http://www.emso-eu.org) and EPOS (European Plate Observing System, https://www.epos-ip.org) are pan-European Research Infrastructures (RIs) in the ESFRI 2016 Roadmap. EMSO has recently become an ERIC (European Research Infrastructure Consortium), whilst the EPOS application is in progress. Both ERICs will be hosted in Italy and the "Representing Entity" is INGV. EMSO consists of oceanic environment observation systems spanning from the Arctic through the Atlantic and Mediterranean, to the Black Sea for long-term, high-resolution, real-time monitoring of natural and man-induced processes such as hazards, climate, and marine ecosystem changes, to study their evolution and interconnections. EPOS aims at creating a pan-European infrastructure for solid Earth science to support a safe and sustainable society. EPOS will enable innovative multidisciplinary research for a better understanding of Earth's physical and chemical processes controlling earthquakes, volcanic eruptions, ground instability, tsunami, and all those processes driving tectonics and Earth's surface dynamics. Following the conclusion of their Preparatory Phases, the two RIs are now in their Implementation Phase, still supported by the EC through the EMSODEV and EPOS-IP projects, both run by dedicated Project Management Offices at INGV with sound experience in EU projects. EMSODEV (H2020 project, 2015-2018) involves 11 partners and 9 associate partners and aims at improving the harmonization among the EMSO ERIC observation systems through the realization of EMSO Generic Instrument Modules (EGIMs), and a Data Management Platform (DMP) to implement interoperability and standardization. The DMP will provide access to data from all EMSO nodes, providing a unified, homogeneous, infrastructure-scale and user-oriented platform integrated with the increased measurement capabilities and functions provided by the EGIMs. EPOS IP (H2020 project, 2015-2019) is a project of 47 partners, 6 associate partners and several international organizations, for a total of 25 countries involved. EPOS IP is a key step in EPOS' mission of a pan-European Earth science integrated platform. It will deliver not only a suite of domain-specific and multidisciplinary data and services in one platform, but also the legal, governance and financial frameworks to ensure the infrastructure's future operation and sustainability (EPOS ERIC). INGV experience over the years indicates that effective management of EU RI projects should contain 5 basic elements:
1. Defined life cycle and milestones: a map of phases, deliverables, key milestones and sufficiency criteria for each group involved in the project, using project management tools and software.
2. Shared organization, systems and roles: defined roles for team members and responsibilities for functional managers are crucial. Similarly, a system of communication and team involvement is essential to success. Leadership and interpersonal/organizational skills are also important.
3. Quality assurance: the quality dimension should be aligned to the project objectives, and specific criteria should be identified for each phase of the project.
4. Tracking and variance analysis: regular reports and periodic team meetings are crucial to identify when things are off target. Schedule slips, cost overruns, open issues, new risks and problems must be dealt with as early as possible.
5. Impact assessment: monitoring the achievement of results and the socio-economic impact.
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall, it provides a new "tool" for climate scientists to run multi-model experiments. At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
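At its core, the multi-model analysis the testbed supports amounts to aligning several models' output and reducing across the ensemble. A minimal sketch of that step with xarray follows; the file names and the variable name "tas" are illustrative assumptions, not artifacts of the INDIGO testbed.

```python
import xarray as xr

# Hypothetical files standing in for CMIP5 output from several models; "tas"
# (near-surface air temperature) follows the CMIP naming convention.
files = ["tas_modelA.nc", "tas_modelB.nc", "tas_modelC.nc"]

# Open each model run and stack along a new "model" dimension (this sketch
# assumes the runs share a common grid and time axis).
members = [xr.open_dataset(f)["tas"] for f in files]
ensemble = xr.concat(members, dim="model")

# Multi-model mean and inter-model spread.
mm_mean = ensemble.mean(dim="model")
mm_std = ensemble.std(dim="model")
print(mm_mean)
```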
NASA Astrophysics Data System (ADS)
Sun, Fubao; Roderick, Michael L.; Lim, Wee Ho; Farquhar, Graham D.
2011-12-01
We assess hydroclimatic projections for the Murray-Darling Basin (MDB) using an ensemble of 39 Intergovernmental Panel on Climate Change AR4 climate model runs based on the A1B emissions scenario. The raw model output for precipitation, P, was adjusted using a quantile-based bias correction approach. We found that the projected change, ΔP, between two 30 year periods (2070-2099 less 1970-1999) was little affected by bias correction. The range for ΔP among models was large (~±150 mm yr⁻¹) with all-model run and all-model ensemble averages (4.9 and -8.1 mm yr⁻¹) near zero, against a background climatological P of ~500 mm yr⁻¹. We found that the time series of actually observed annual P over the MDB was indistinguishable from that generated by a purely random process. Importantly, nearly all the model runs showed similar behavior. We used these facts to develop a new approach to understanding variability in projections of ΔP. By plotting ΔP versus the variance of the time series, we could easily identify model runs with projections for ΔP that were beyond the bounds expected from purely random variations. For the MDB, we anticipate that a purely random process could lead to differences of ±57 mm yr⁻¹ (95% confidence) between successive 30 year periods. This is equivalent to ±11% of the climatological P and translates into variations in runoff of around ±29%. This sets a baseline for gauging modeled and/or observed changes.
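The ±57 mm yr⁻¹ bound follows from the sampling variability of 30-year means of a white-noise series: the difference of two independent n-year means has standard deviation σ√(2/n). The sketch below reproduces the figure analytically and by Monte Carlo; the interannual standard deviation (~113 mm yr⁻¹) is back-calculated from the paper's bound and is an assumption.

```python
import numpy as np

# Assumed interannual std of annual P; ~113 mm/yr reproduces the paper's
# +/-57 mm/yr bound, but the exact value used there is not stated.
sigma, n, clim_p = 113.0, 30, 500.0

# Analytic 95% bound on the difference between two independent 30-year means
# of a white-noise series: 1.96 * sigma * sqrt(2/n).
analytic = 1.96 * sigma * np.sqrt(2.0 / n)

# Monte Carlo check with 100,000 pairs of 30-year periods.
rng = np.random.default_rng(0)
diffs = rng.normal(0, sigma, (100_000, n)).mean(1) - \
        rng.normal(0, sigma, (100_000, n)).mean(1)
mc = np.percentile(np.abs(diffs), 95)

print(f"analytic:    +/-{analytic:.0f} mm/yr ({100 * analytic / clim_p:.0f}% of climatological P)")
print(f"monte carlo: +/-{mc:.0f} mm/yr")
```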
FixO3: Advancement towards Open Ocean Observatory Data Management Harmonisation
NASA Astrophysics Data System (ADS)
Behnken, Andree; Pagnani, Maureen; Huber, Robert; Lampitt, Richard
2015-04-01
Since 2002 there has been a sustained effort, supported through European framework projects, to harmonise both the technology and the data management of open-ocean fixed observatories run by European nations. FixO3 started in September 2013 and, for 3 more years, will coordinate the convergence of data management best practice across a constellation of moorings in the Atlantic, in both hemispheres, and in the Mediterranean. To ensure the continued existence of these unique sources of oceanographic data as sustained observatories, it is vital to improve access to the data collected, in terms of methods of presentation, real-time availability, long-term archiving and quality assurance. The data management component of FixO3 improves access to marine observatory data by harmonising data management standards, formats and workflows covering the complete life cycle of data, from real-time data acquisition to long-term archiving. Legal and data policy aspects have been examined and discussed to identify transnational barriers to open access to marine observatory data. As a result, a harmonised FixO3 data policy was drafted, which provides a formal basis for data exchange between FixO3 infrastructures and also enables open access to data for the general public. FixO3 interacts with other European infrastructures such as EMODnet, SeaDataNet and PANGAEA, and especially aims to harmonise efforts with OceanSites and MyOcean. The project landing page (www.fixo3.eu) offers detailed information about every observatory as well as data visualisations and direct downloads. In addition, metadata for all FixO3-relevant data are available from the searchable FixO3 metadata catalogue, which is also accessible from the project web page. This catalogue is hosted by PANGAEA and receives updates at regular intervals. The FixO3 Standards & Services registry ties in with the GEOSS Components and Services Registry (CSR) and provides additional observatory information. The data management efforts are central to FixO3. As a result of the procedural and technological harmonisation efforts undertaken in the project, the FixO3 network of observatories is accumulating unique, quality-controlled data sets that will develop into a legacy repository of openly accessible oceanographic data.
NASA Astrophysics Data System (ADS)
Vaquero, C.; López de Ipiña, J.; Galarza, N.; Hargreaves, B.; Weager, B.; Breen, C.
2011-07-01
New developments based on nanotechnology have to guarantee safe products and processes to be accepted by society. The Polyfire project will develop and scale-up techniques for processing halogen-free, fire-retardant nanocomposite materials and coatings based on unsaturated polyester resins and organoclays. The project includes a work package that will assess the Health and Environmental impacts derived from the manipulation of nanoparticles. This work package includes the following tasks: (1) Identification of Health and Environment Impacts derived from the processes, (2) Experimentation to study specific Nanoparticle Emissions, (3) Development of a Risk Management Methodology for the process, and (4) A Comparison of the Health and Environmental Impact of New and Existing Materials. To date, potential exposure scenarios to nanomaterials have been identified through the development of a Preliminary Hazard Analysis (PHA) of the new production processes. In the next step, these scenarios will be studied and simulated to evaluate potential emissions of nanomaterials. Polyfire is a collaborative European project, funded by the European Commission 7th Framework Programme (Grant Agreement No 229220). It features 11 partners from 5 countries (5 SMEs, 3 research institutes, 2 large companies, 1 association) and runs for three years (1st September 2009 - 31st August 2012). This project is an example of an industrial research development which aims to introduce to the market new products promoting the safe use of nanomaterials.
Operating a petabyte class archive at ESO
NASA Astrophysics Data System (ADS)
Suchar, Dieter; Lockhart, John S.; Burrows, Andrew
2008-07-01
The challenges of setting up and operating a Petabyte Class Archive will be described in terms of computer systems within a complex Data Centre environment. The computer systems, including the ESO Primary and Secondary Archive and the associated computational environments such as relational databases will be explained. This encompasses the entire system project cycle, including the technical specifications, procurement process, equipment installation and all further operational phases. The ESO Data Centre construction and the complexity of managing the environment will be presented. Many factors had to be considered during the construction phase, such as power consumption, targeted cooling and the accumulated load on the building structure to enable the smooth running of a Petabyte class Archive.
Hedrick, Lara B.; Welsh, Stuart A.; Anderson, James T.
2009-01-01
Impacts of highway construction on streams in the central Appalachians are a growing concern as new roads are created to promote tourism and economic development in the area. Alterations to the streambed of a first-order stream, Sauerkraut Run, Hardy County, WV, during construction of a highway overpass included placement and removal of a temporary culvert, straightening and regrading of a section of stream channel, and armouring of a bank with a reinforced gravel berm. We surveyed longitudinal profiles and cross sections in a reference reach and the altered reach of Sauerkraut Run from 2003 through 2007 to measure physical changes in the streambed. During the four-year period, three high-flow events changed the streambed downstream of construction, including channel widening, aggradation, and subsequent degradation of the streambed. Upstream of construction, at a reinforced gravel berm, bank erosion was documented. The reference section remained relatively unchanged. Knowledge gained by documenting channel changes in response to natural and anthropogenic variables can be useful for managers and engineers involved in highway construction projects.
UNIVERSITY OF KANSAS SMART GRID DEMONSTRATION PROJECT
The University of Kansas (KU) EcoHawks Design Project began in 2008 with the conversion of a discarded 1974 Volkswagen Super Beetle into a fuel neutral series hybrid running on 100% biodiesel created from waste vegetable oil. This project continued in year two through upgradi...
Urban infrastructure and longitudinal stream profiles
NASA Astrophysics Data System (ADS)
Lindner, G. A.; Miller, A. J.
2009-12-01
Urban streams usually are highly engineered or modified by human activity and are conventionally thought of as being geometrically, and thus hydraulically, simple. The work presented here, a contribution to NSF CNH Project 0709659, is designed to capture the influence of urban infrastructure on the character of longitudinal profiles and flow hydraulics along streams in the Baltimore metropolitan area. Detailed topographic data sets are derived from LiDAR supplemented by total-station surveys of the channel bed and low-flow water surface. These in turn are used to drive 2D depth-averaged hydraulic models comparing flow conditions over a range of urban development patterns and stormwater management regimes. Results from stream surveys of 1-2 km length indicate that channels in older, highly urbanized areas typically have straight planforms and strongly stepped profiles characterized by a series of deep, stagnant pools with short intervening riffles or runs. This pattern is associated with frequent interruption of the channel profile by bridges, culverts, road embankments and other artificial structures. In one survey reach of the Dead Run watershed, 50 percent of cumulative channel length has zero gradient at low flow, and 50 percent of cumulative head loss is accounted for by only 4 percent of channel length. In the suburban Red Run watershed recent development has occurred under strict stormwater management regulations with minimal encroachment on the riparian zone. Although their average gradients are similar, the Red Run survey reach is steeper than the Dead Run reach over most of its length but has a smaller fraction of total head loss caused by local slope breaks. Modeling results indicate that these differences in stream morphology are associated with differences in velocity, flow pattern, and residence time at base flow; the stepped nature of the profile in the older urban area becomes less pronounced at intermediate to high flows, but the controlling influence of infrastructure may become dominant again during large floods. Because flashy urban streams have lower and more persistent low flows as well as more extreme flood flows, these hydraulic patterns may have implications for both biogeochemical cycling at base flow and transport and deposition of sediment and other constituents during flood periods. Continuing research will develop a typology of urban streams in terms of the influence of engineering practices on flow patterns and material transport.
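Head-loss concentration statistics like those quoted for Dead Run (50% of head loss in 4% of length) can be computed directly from a surveyed low-flow profile. A minimal sketch with made-up survey points follows; the numbers are illustrative, not the Dead Run data.

```python
import numpy as np

# Hypothetical low-flow water-surface profile: distance downstream (m) and
# water-surface elevation (m); a real profile would come from survey data.
x = np.array([0, 50, 120, 180, 260, 340, 450, 600, 780, 1000], float)
z = np.array([30.0, 30.0, 29.2, 29.2, 29.1, 28.3, 28.3, 28.2, 27.5, 27.4])

drops = -np.diff(z)       # head loss per segment (m)
lengths = np.diff(x)      # segment lengths (m)

# Rank segments by loss and find how little length carries half the total loss.
order = np.argsort(drops)[::-1]
cum_loss = np.cumsum(drops[order]) / drops.sum()
cum_len = np.cumsum(lengths[order]) / lengths.sum()
k = np.searchsorted(cum_loss, 0.5)
print(f"{100 * cum_len[k]:.0f}% of channel length accounts for 50% of head loss")
print(f"{100 * lengths[drops == 0].sum() / lengths.sum():.0f}% of length has zero gradient")
```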
High Resolution Nature Runs and the Big Data Challenge
NASA Technical Reports Server (NTRS)
Webster, W. Phillip; Duffy, Daniel Q.
2015-01-01
NASA's Global Modeling and Assimilation Office at Goddard Space Flight Center is undertaking a series of very computationally intensive Nature Runs and a downscaled reanalysis. The nature runs use the GEOS-5 as an Atmospheric General Circulation Model (AGCM) while the reanalysis uses the GEOS-5 in data assimilation mode. This paper will present computational challenges from three runs, two of which are AGCM runs and one a downscaled reanalysis using the full DAS. The nature runs will be completed at two surface grid resolutions, 7 and 3 kilometers, and 72 vertical levels. The 7 km run spanned 2 years (2005-2006) and produced 4 PB of data, while the 3 km run will span one year and generate 4 PB of data. The downscaled reanalysis (MERRA-II, Modern-Era Reanalysis for Research and Applications) will cover 15 years and generate 1 PB of data. In our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS), a specialization of the concept of business process-as-a-service that is an evolving extension of IaaS, PaaS, and SaaS enabled by cloud computing. In this presentation, we will describe two projects that demonstrate this shift. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS. MERRA/AS enables MapReduce analytics over the MERRA reanalysis data collection by bringing together high-performance computing, scalable data management, and a domain-specific climate data services API. NASA's High-Performance Science Cloud (HPSC) is an example of the type of compute-storage fabric required to support CAaaS. The HPSC comprises a high-speed InfiniBand network, high-performance file systems and object storage, and virtual system environments specific to data-intensive science applications. These technologies are providing a new tier in the data and analytic services stack that helps connect earthbound, enterprise-level data and computational resources to new customers and new mobility-driven applications and modes of work. In our experience, CAaaS lowers the barriers and risk to organizational change, fosters innovation and experimentation, and provides the agility required to meet our customers' increasing and changing needs.
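The MapReduce pattern behind services like MERRA/AS is easy to sketch: each mapper computes a partial statistic per data granule and a reducer combines the partials. The toy version below uses a process pool and placeholder granule files; it illustrates the shape of the computation, not the MERRA/AS API.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def mapper(path):
    """Map step: one partial (sum, count) per data granule."""
    data = np.load(path)  # stand-in for reading one reanalysis granule
    return data.sum(), data.size

def reducer(partials):
    """Reduce step: combine partials into a global mean."""
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    return total / count

if __name__ == "__main__":
    # Placeholder granule files; a real run would list the collection's files.
    granules = [f"merra_chunk_{i}.npy" for i in range(8)]
    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(mapper, granules))
    print("collection mean:", reducer(partials))
```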
NASA Astrophysics Data System (ADS)
Twohig, Sarah; Pattison, Ian; Sander, Graham
2017-04-01
Fine sediment poses a significant threat to UK river systems in terms of vegetation, aquatic habitats and morphology. Deposition of fine sediment onto the river bed reduces channel capacity, resulting in decreased volume to contain high-flow events. Once the in-channel problem has been identified, managers are under pressure to sustainably mitigate flood risk. With climate change and land use adaptations increasing future pressures on river catchments, it is important to consider the connectivity of fine sediment throughout the river catchment and its influence on channel capacity, particularly in systems experiencing long-term aggradation. Fine sediment erosion is a continuing concern in the River Eye, Leicestershire. The predominantly rural catchment has a history of flooding within the town of Melton Mowbray. Fine sediment from agricultural fields has been identified as a major contributor of sediment delivery into the channel. Current mitigation measures have been neither sustainable nor successful in preventing the movement of sediment through the catchment. Identifying the potential sources and connections of fine sediment would provide insight into targeted catchment management. 'Sensitive Catchment Integrated Modelling Analysis Platforms' (SCIMAP) is a tool often used by UK catchment managers to identify potential sources and routes of sediment within a catchment. SCIMAP is a risk-based model that combines hydrological (rainfall) and geomorphic controls (slope, land cover) to identify the risk of fine sediment being transported from source into the channel. A desktop version of SCIMAP was run for the River Eye at a catchment scale using 5 m terrain, rainfall and land cover data. A series of SCIMAP model runs was conducted, changing individual parameters to determine the sensitivity of the model. Climate change prediction data for the catchment were used to identify potential areas of future connectivity and erosion risk for catchment managers. The results have been subjected to field validation as part of a wider research project, which provides an indication of the robustness of widespread models as effective management tools.
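SCIMAP's core idea is that in-channel risk requires both an erodible, wet, steep source and an unbroken hydrological connection to the channel. The toy one-dimensional sketch below illustrates that logic; the weights and the connectivity rule are illustrative assumptions, not the published SCIMAP algorithm.

```python
import numpy as np

# Toy 1-D hillslope, channel at the downslope (right-hand) end.
slope = np.array([0.02, 0.08, 0.15, 0.10, 0.04])   # local gradient (-)
rain = np.array([900.0, 900, 950, 950, 1000])      # annual rainfall (mm)
cover_risk = np.array([0.1, 0.9, 0.9, 0.3, 0.1])   # land-cover erodibility (arable high)

# Source risk: where erodible land, steep slopes and rainfall coincide.
source = cover_risk * slope * (rain / rain.max())

# Connectivity: a source only matters if every downslope cell can transmit
# flow; here a cell is assumed to transmit when its slope exceeds a threshold.
transmits = slope > 0.03
connected = np.array(
    [source[i] * transmits[i + 1:].all() for i in range(len(source))]
)

print("in-channel delivery risk per cell:", connected.round(3))
```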
Metal-Organic Vapor Phase Epitaxial Reactor for the Deposition of Infrared Detector Materials
2015-04-09
out during 2013. A set of growth experiments to deposit CdTe and ZnTe thin films on GaAs and Si substrates was carried out to test the system...After several dummy runs, a few growth runs to deposit CdTe and ZnTe, both doped and undoped, were grown on 3-inch diameter Si substrates or part of...to deposit CdTe and ZnTe on Si and GaAs substrates for use in this project. Some layers have been processed to make solar cells. Project 3
NASA Astrophysics Data System (ADS)
Ray, A. J.; Ojima, D. S.; Morisette, J. T.
2012-12-01
The DOI North Central Climate Science Center (NC CSC) and the NOAA/NCAR National Climate Predictions and Projections (NCPP) Platform have initiated a joint pilot study to collaboratively explore the "best available climate information" to support key land management questions and how to provide this information. NCPP's mission is to support state-of-the-art approaches to develop and deliver comprehensive regional climate information and facilitate its use in decision making and adaptation planning. This presentation will describe the evolving joint pilot as a tangible, real-world demonstration of linkages between climate science, ecosystem science and resource management. Our joint pilot is developing a deliberate, ongoing interaction to prototype how NCPP will work with CSCs to develop and deliver needed climate information products, including translational information to support climate data understanding and use. This pilot also will build capacity in the North Central CSC by working with NCPP to use climate information as input to ecological modeling. We will discuss lessons to date on developing and delivering needed climate information products based on this strategic partnership. Four projects have been funded to collaborate to incorporate climate information as part of an ecological modeling project, which in turn will address key DOI stakeholder priorities in the region:
- Riparian Corridors: projecting climate change effects on cottonwood and willow seed dispersal phenology, flood timing, and seedling recruitment in western riparian forests.
- Sage Grouse & Habitats: integrating climate and biological data into land management decision models to assess species and habitat vulnerability.
- Grasslands & Forests: projecting future effects of land management, natural disturbance, and CO2 on woody encroachment in the Northern Great Plains.
- The value of climate information: supporting management decisions in the Plains and Prairie Potholes LCC.
The NC CSC's role in these projects is to provide the connections between climate data and running ecological models, and to prototype these for future work. NCPP will develop capacities to provide enhanced climate information at relevant spatial and temporal scales, both for historical climate and projections of future climate, and will work to link expert guidance and understanding of modeling processes and evaluation of modeling with the use of numerical climate data. Translational information thus is a suite of information that aids in translation of numerical climate information into usable knowledge for applications, e.g., ecological response models and hydrologic risk studies. This information includes technical and scientific aspects including, but not limited to: 1) results of objective, quantitative evaluation of climate models and downscaling techniques; 2) guidance on appropriate uses and interpretation, i.e., understanding the advantages and limitations of various downscaling techniques for specific user applications; 3) characterizing and interpreting uncertainty; 4) descriptions meaningful to applications, e.g., narratives. NCPP believes that translational information is best co-developed between climate scientists and applications scientists, such as in the NC CSC pilot.
2017 ARL Summer Student Program Volume 2: Compendium of Abstracts
2017-12-01
useful for equipping quadrotors with advanced capabilities, such as running deep learning networks. A second purpose of this project is to quantify the...Multiple samples were run in the LEAP 5000-XR generating large data sets (hundreds of millions of ions composing hundreds of cubic nanometers of...produce viable walking and running gaits on the final product. Even further, the monetary and time cost of this increases significantly when working
NASA Astrophysics Data System (ADS)
McNab, A.
2017-10-01
This paper describes GridPP’s Vacuum Platform for managing virtual machines (VMs), which has been used to run production workloads for WLCG and other HEP experiments. The platform provides a uniform interface between VMs and the sites they run at, whether the site is organised as an Infrastructure-as-a-Service cloud system such as OpenStack, or an Infrastructure-as-a-Client system such as Vac. The paper describes our experience in using this platform, in developing and operating VM lifecycle managers Vac and Vcycle, and in interacting with VMs provided by LHCb, ATLAS, ALICE, CMS, and the GridPP DIRAC service to run production workloads.
Organizational capacity needs of consumer-run organizations.
Wituk, Scott; Vu, Chi C; Brown, Louis D; Meissen, Greg
2008-05-01
Consumer-run organizations (CROs) are self-help oriented organizations that are run entirely by consumers (people who use or have used mental health services). The current study utilizes an organizational capacity framework to explore the needs of operating CROs. This framework includes four core capacity areas: (1) technical, (2) management, (3) leadership, and (4) adaptive capacity. An analysis reveals that the greatest organizational needs are related to technical and management capacities. Implications are discussed in terms of strategies and activities that CRO leaders and mental health professionals and administrators can use to strengthen the organizational capacity of CROs in their community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, T.
2014-08-29
Large-scale systems like Sequoia allow running small numbers of very large (1M+ process) jobs, but their resource managers and schedulers do not allow large numbers of small (4, 8, 16, etc.) process jobs to run efficiently. Cram is a tool that allows users to launch many small MPI jobs within one large partition, and to overcome the limitations of current resource management software for large ensembles of jobs.
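The underlying trick, packing many independent small jobs into one large MPI allocation, can be sketched with a communicator split; the snippet below uses mpi4py and illustrates the idea only, not Cram's actual interface.

```python
from mpi4py import MPI  # a minimal sketch of the idea behind Cram, not its API

world = MPI.COMM_WORLD
rank = world.Get_rank()

# Pack many small "jobs" into one large allocation by splitting COMM_WORLD
# into fixed-size subcommunicators; each behaves like an independent 4-rank job.
ranks_per_job = 4
job_id = rank // ranks_per_job
job_comm = world.Split(color=job_id, key=rank)

# Each small job now runs its own collective operations in isolation.
local_sum = job_comm.allreduce(rank, op=MPI.SUM)
if job_comm.Get_rank() == 0:
    print(f"job {job_id}: {job_comm.Get_size()} ranks, sum of world ranks = {local_sum}")
```

Run under e.g. `mpiexec -n 16 python ensemble.py`: ranks 0-3 form job 0, ranks 4-7 form job 1, and so on, each with its own communicator.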
On the Complexity of Delaying an Adversary’s Project
2005-01-01
interdiction models for such problems and show that the resulting problem complexities run the gamut: polynomially solvable, weakly NP-complete, strongly NP-complete or NP-hard. We...
Twenty eight salmon scientists and policy experts have joined forces in an innovative project to identify ways that, if adopted, likely would restore and sustain wild salmon runs in California, Oregon, Washington, Idaho, and southern British Columbia.
Modelling the effectiveness of grass buffer strips in managing muddy floods under a changing climate
NASA Astrophysics Data System (ADS)
Mullan, Donal; Vandaele, Karel; Boardman, John; Meneely, John; Crossley, Laura H.
2016-10-01
Muddy floods occur when rainfall generates runoff on agricultural land, detaching and transporting sediment into the surrounding natural and built environment. In the Belgian Loess Belt, muddy floods occur regularly and lead to considerable economic costs associated with damage to property and infrastructure. Mitigation measures designed to manage the problem have been tested in a pilot area within Flanders and were found to be cost-effective within three years. This study assesses whether these mitigation measures will remain effective under a changing climate. To test this, the Water Erosion Prediction Project (WEPP) model was used to examine muddy flooding diagnostics (precipitation, runoff, soil loss and sediment yield) for a case study hillslope in Flanders where grass buffer strips are currently used as a mitigation measure. The model was run for present-day conditions and then under 33 future site-specific climate scenarios. These future scenarios were generated from three Earth system models driven by four representative concentration pathways and downscaled using quantile mapping and the weather generator CLIGEN. Results reveal that under the majority of future scenarios, muddy flooding diagnostics are projected to increase, mostly as a consequence of large-scale precipitation events rather than mean changes. The magnitude of muddy flood events for a given return period is also generally projected to increase. These findings indicate that present-day mitigation measures may have a reduced capacity to manage muddy flooding given the changes imposed by a warming climate with an enhanced hydrological cycle. Revisions to the design of existing mitigation measures within existing policy frameworks are considered the most effective way to account for the impacts of climate change in future mitigation planning.
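Quantile mapping, the downscaling step named above, builds a transfer function between the model's historical quantiles and the observed quantiles and applies it to future model values. A minimal empirical sketch follows; the gamma-distributed synthetic series are stand-ins for real station and model data.

```python
import numpy as np

def quantile_map(model_hist, obs, model_fut):
    """Empirical quantile mapping: correct future model values using the
    transfer function between historical-model and observed quantiles."""
    q = np.linspace(0, 100, 101)
    mh_q = np.percentile(model_hist, q)
    ob_q = np.percentile(obs, q)
    # Find each future value's quantile in the historical model distribution,
    # then read off the observed value at that same quantile.
    fut_q = np.interp(model_fut, mh_q, q)
    return np.interp(fut_q, q, ob_q)

# Synthetic daily precipitation (mm): model wet-biased relative to observations.
rng = np.random.default_rng(1)
obs = rng.gamma(0.7, 6.0, 10_000)
model_hist = rng.gamma(0.7, 8.0, 10_000)
model_fut = rng.gamma(0.7, 9.0, 10_000)

corrected = quantile_map(model_hist, obs, model_fut)
print(f"raw future mean: {model_fut.mean():.2f}  corrected: {corrected.mean():.2f}")
```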
ATLAS Distributed Computing Monitoring tools during the LHC Run I
NASA Astrophysics Data System (ADS)
Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration
2014-06-01
This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and ATLAS Management, for long-term trends and accounting information about ATLAS Distributed Computing resources. During LHC Run I a significant development effort was invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements and the re-usability of the visualization bits across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with the data and visualization layer separation. With a variety of reliable monitoring data accessible through standardized interfaces, automating actions under well-defined conditions correlating multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.
ERIC Educational Resources Information Center
Haigh, Sarah; Bell, Christopher; Ruta, Chris
2017-01-01
This article provides details of a successful educational engineering project run in partnership between a group of ten schools and an international engineering, construction and technical services company. It covers the history and evolution of the project and highlights how the project has significant impact not only on the students involved but…
A Study of Energy Management Systems and its Failure Modes in Smart Grid Power Distribution
NASA Astrophysics Data System (ADS)
Musani, Aatif
The subject of this thesis is distribution-level load management using a pricing signal in a smart grid infrastructure. The project relates to energy management in a specialized distribution system known as the Future Renewable Electric Energy Delivery and Management (FREEDM) system. Energy management through demand response is one of the key applications of smart grid. Demand response today is envisioned as a method in which the price could be communicated to the consumers and they may shift their loads from high price periods to the low price periods. The development and deployment of the FREEDM system necessitates controls of energy and power at the point of end use. In this thesis, the main objective is to develop the control model of the Energy Management System (EMS). The energy and power management in the FREEDM system is digitally controlled; therefore all signals containing system states are discrete. The EMS is modeled as a discrete closed loop transfer function in the z-domain. A breakdown of power and energy control devices such as EMS components may result in energy consumption errors. This leads to one of the main focuses of the thesis, which is to identify and study component failures of the designed control system. Moreover, an H-infinity robust control method is applied to ensure effectiveness of the control architecture. A focus of the study is cyber security attack, specifically bad data detection in price. Test cases are used to illustrate the performance of the EMS control design, the effect of failure modes and the application of the robust control technique. The EMS was represented by a linear z-domain model. The transfer function between the pricing signal and the demand response was designed and used as a test bed. EMS potential failure modes were identified and studied. Three bad data detection methodologies were implemented and a voting policy was used to declare bad data. The running mean and standard deviation analysis method proves to be the best method to detect bad data. An H-infinity robust control technique was applied for the first time to design a discrete EMS controller for the FREEDM system.
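The running mean and standard deviation detector singled out above, combined with a simple vote among detectors, can be sketched briefly; the window length, thresholds and the two companion detectors below are illustrative assumptions, not the thesis's exact design.

```python
import numpy as np

def running_stats_flags(x, window=24, k=3.0):
    """Flag points more than k running standard deviations from the running mean."""
    flags = np.zeros(len(x), bool)
    for i in range(window, len(x)):
        w = x[i - window:i]
        flags[i] = abs(x[i] - w.mean()) > k * w.std()
    return flags

# Synthetic hourly price signal with one injected bad-data spike.
rng = np.random.default_rng(2)
price = 30 + 5 * np.sin(np.arange(200) * 2 * np.pi / 24) + rng.normal(0, 1, 200)
price[150] = 120.0  # corrupted / attacked sample

# Three simple detectors vote; two or more votes declares bad data.
votes = (
    running_stats_flags(price).astype(int)
    + (np.abs(price - np.median(price)) > 6 * np.std(price)).astype(int)
    + (np.abs(np.gradient(price)) > 20).astype(int)
)
print("bad-data indices:", np.nonzero(votes >= 2)[0])
```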
A WPS Based Architecture for Climate Data Analytic Services (CDAS) at NASA
NASA Astrophysics Data System (ADS)
Maxwell, T. P.; McInerney, M.; Duffy, D.; Carriere, L.; Potter, G. L.; Doutriaux, C.
2015-12-01
Faced with unprecedented growth in the Big Data domain of climate science, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute trusted and tested analysis operations in a high-performance environment close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using trusted climate data analysis tools (ESMF, CDAT, NCO, etc.). The framework is structured as a set of interacting modules allowing maximal flexibility in deployment choices. The current set of module managers includes: the Staging Manager, which runs the computation locally on the WPS server or remotely using tools such as Celery or SLURM; the Compute Engine Manager, which runs the computation serially or distributed over nodes using a parallelization framework such as Celery or Spark; the Decomposition Manager, which manages strategies for distributing the data over nodes; the Data Manager, which handles the import of domain data from long-term storage and manages the in-memory and disk-based caching architectures; and the Kernel Manager: a kernel is an encapsulated computational unit which executes a processor's compute task. Each kernel is implemented in Python exploiting existing analysis packages (e.g. CDAT) and is compatible with all CDAS compute engines and decompositions. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be executed using either direct web service calls, a Python script or application, or a JavaScript-based web application. Client packages in Python or JavaScript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends and variability, and compare multiple reanalysis datasets.
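Since CDAS is exposed through a WPS API, a client session can be sketched with OWSLib's WPS module; the endpoint URL, the process identifier and its inputs below are placeholders, as the abstract does not publish them.

```python
from owslib.wps import WebProcessingService, monitorExecution

# Hypothetical endpoint; the real CDAS URL and process names are not given
# in the abstract, so everything service-specific here is a placeholder.
wps = WebProcessingService("https://cdas.example.nasa.gov/wps", skip_caps=True)
wps.getcapabilities()

print(wps.identification.title)
for p in wps.processes:
    print(p.identifier, "-", p.title)

# Execute a hypothetical averaging kernel with key/value literal inputs,
# then poll the asynchronous execution until it completes.
execution = wps.execute("average", inputs=[("variable", "tas"), ("domain", "global")])
monitorExecution(execution)
print(execution.status)
```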
O'Malley, Kathleen G; Jacobson, Dave P; Kurth, Ryon; Dill, Allen J; Banks, Michael A
2013-01-01
Neutral genetic markers are routinely used to define distinct units within species that warrant discrete management. Human-induced changes to gene flow however may reduce the power of such an approach. We tested the efficiency of adaptive versus neutral genetic markers in differentiating temporally divergent migratory runs of Chinook salmon (Oncorhynchus tshawytscha) amid high gene flow owing to artificial propagation and habitat alteration. We compared seven putative migration timing genes to ten microsatellite loci in delineating three migratory groups of Chinook in the Feather River, CA: offspring of fall-run hatchery broodstock that returned as adults to freshwater in fall (fall run), spring-run offspring that returned in spring (spring run), and fall-run offspring that returned in spring (FRS). We found evidence for significant differentiation between the fall and federally listed threatened spring groups based on divergence at three circadian clock genes (OtsClock1b, OmyFbxw11, and Omy1009UW), but not neutral markers. We thus demonstrate the importance of genetic marker choice in resolving complex life history types. These findings directly impact conservation management strategies and add to previous evidence from Pacific and Atlantic salmon indicating that circadian clock genes influence migration timing. PMID:24478800
NASA Astrophysics Data System (ADS)
Molina-Navarro, Eugenio; Trolle, Dennis; Martínez-Pérez, Silvia; Sastre-Merlín, Antonio; Jeppesen, Erik
2014-02-01
Water scarcity and water pollution constitute a major challenge for water managers in the Mediterranean region today and will be exacerbated in the warmer world projected for the future, making a holistic approach to water resources management at the catchment scale essential. We expanded the Soil and Water Assessment Tool (SWAT) model developed for a small Mediterranean catchment to quantify the potential effects of various climate and land use change scenarios on catchment hydrology as well as on the trophic state of a new kind of waterbody, a limno-reservoir (Pareja Limno-reservoir), created for environmental and recreational purposes. We also checked for possible synergistic effects of changes in climate and land use on water flow and nutrient exports from the catchment. Simulations showed a noticeable impact of climate change on the river flow regime and consequently on the water level of the limno-reservoir, especially during summer, complicating the fulfillment of its purposes. Most of the scenarios also predicted a deterioration of trophic conditions in the limno-reservoir. Fertilization and soil erosion were the main factors affecting nitrate and total phosphorus concentrations. Combined climate and land use change scenarios showed noticeable synergistic effects on nutrient exports, relative to running the scenarios individually. While the impact of fertilization on nitrate export is projected to be reduced with warming in most cases, an additional 13% increase in total phosphorus export is expected in the worst-case combined scenario compared to the sum of the individual scenarios. Our model framework may help water managers to assess and manage how these multiple environmental stressors interact and ultimately affect aquatic ecosystems.
The Ophidia framework: toward cloud-based data analytics for climate change
NASA Astrophysics Data System (ADS)
Fiore, Sandro; D'Anca, Alessandro; Elia, Donatello; Mancini, Marco; Mariello, Andrea; Mirto, Maria; Palazzo, Cosimo; Aloisio, Giovanni
2015-04-01
The Ophidia project is a research effort on big data analytics facing scientific data analysis challenges in the climate change domain. It provides parallel (server-side) data analysis, an internal storage model and a hierarchical data organization to manage large amounts of multidimensional scientific data. The Ophidia analytics platform provides several MPI-based parallel operators to manipulate large datasets (data cubes) and array-based primitives to perform data analysis on large arrays of scientific data. The most relevant data analytics use cases implemented in national and international projects target fire danger prevention (OFIDIA), interactions between climate change and biodiversity (EUBrazilCC), climate indicators and remote data analysis (CLIP-C), sea situational awareness (TESSA), and large-scale data analytics on CMIP5 data in NetCDF format, compliant with the Climate and Forecast (CF) convention (ExArch). Two use cases, from the EU FP7 EUBrazil Cloud Connect and the INTERREG OFIDIA projects, will be presented during the talk. In the former (EUBrazilCC), the Ophidia framework is being extended to integrate scalable VM-based solutions for the management of large volumes of scientific data (both climate and satellite data) in a cloud-based environment to study how climate change affects biodiversity. In the latter (OFIDIA), the data analytics framework is being exploited to provide operational support for processing chains devoted to fire danger prevention. To tackle the project challenges, data analytics workflows consisting of about 130 operators perform, among other things, parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. Finally, the entire Ophidia software stack has been deployed at CMCC on 24 nodes (16 cores/node) of the Athena HPC cluster. Moreover, a cloud-based release tested with OpenNebula is also available and running in the private cloud infrastructure of the CMCC Supercomputing Centre.
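A typical session drives the server-side operators from PyOphidia, the project's Python client. The sketch below follows the documented setclient/importnc/reduce pattern, but the connection details and dataset path are placeholders, and the exact signatures should be checked against the PyOphidia release in use.

```python
from PyOphidia import cube

# Placeholder connection details; a real deployment would supply its own
# Ophidia server address and credentials.
cube.Cube.setclient(username="user", password="pass",
                    server="ophidia.example.cmcc.it", port="11732")

# Import a NetCDF file as a datacube, with "time" as the implicit array
# dimension, using 4 cores on the server side.
tas = cube.Cube.importnc(src_path="/data/tas_cmip5.nc", measure="tas",
                         imp_dim="time", ncores=4)

# Server-side reduction: average over the array dimension, then inspect
# a few values of the result without moving the full cube to the client.
tas_mean = tas.reduce(operation="avg")
tas_mean.explore(limit_filter=5)
```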
ART/Ada design project, phase 1: Project plan
NASA Technical Reports Server (NTRS)
Allen, Bradley P.
1988-01-01
The plan and schedule for Phase 1 of the Ada based ESBT Design Research Project is described. The main platform for the project is a DEC Ada compiler on VAX mini-computers and VAXstations running the Virtual Memory System (VMS) operating system. The Ada effort and lines of code are given in tabular form. A chart is given of the entire project life cycle.
ERIC Educational Resources Information Center
Stanistreet, Paul
2008-01-01
The Brighton Unemployed Centre Families Project, a community centre run by the unemployed for the unemployed, unwaged and low-waged, has run periodic creative writing classes for 15 years. The centre's creative writing scheme, Salt and Vinegar, gives centre users an opportunity to write about their lives and to develop their writing skills. The…
77 FR 33439 - Trade Mission to Egypt and Kuwait
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-06
... turbines, steam turbines, wind turbines, blades, and other equipment, as well as development and project... business week runs from Sunday through Thursday. Kuwait City is the capital of Kuwait. The business week runs from Sunday through Thursday. In each city, participants will meet with new business contacts...
MINEBANK RUN PROJECT AS AN APPROACH FOR RESTORING DEGRADED URBAN WATERSHEDS AND RIPARIAN ECOSYSTEMS
Elevated nitrate levels in streams and groundwater pose human and ecological threats. Minebank Run, an urban stream in Baltimore MD, will be restored in 2004/2005 using various techniques including reshaping stream banks to reconnect stream channel to flood plain, stream bank r...
NASA Astrophysics Data System (ADS)
Pierleoni, Arnaldo; Casagrande, Luca; Bellezza, Michele; Casadei, Stefano
2010-05-01
The need for increasingly complex geospatial algorithms dedicated to the management of water resources, the specific knowledge many of them require, and the need for dedicated computing machines have led to the necessity of centralizing and sharing the server applications and plugins developed. For this purpose, a Web Processing Service (WPS) has been developed that makes available to users a range of geospatial analysis algorithms, geostatistics and remote sensing procedures, and that can be used simply by providing data and input parameters and downloading the results. The core of the system infrastructure is GRASS GIS, which acts as the computational engine, providing more than 350 forms of analysis and the opportunity to create new, ad hoc procedures. The implementation of the WPS was performed using the software PyWPS, written in Python, which is easily manageable and configurable. All these instruments are managed by a daemon named "Arcibald", specifically created to queue user requests in order. When processes are already running, the system queues new ones, registering each request and running it only when the previous calculations have completed. However, each geoprocess carries an indicator of the resources needed to run it, enabling geoprocesses that do not require excessive computing time to run in parallel. This assessment is also made in relation to the size of the input files provided. The WPS standard defines methods for accessing and running geoprocesses regardless of the client used; nevertheless, a graphical client for accessing the resources was developed specifically for the project. The client was built as a plugin for QGis, which provides the most common tools for viewing and consulting geographically referenced data. The tool was tested using data from the bathymetric campaign at the Montedoglio Reservoir on the Tiber River in order to generate a digital model of the reservoir bed. Starting from a text file containing the coordinates and depths of the points (previously treated statistically to remove inaccuracies), we used the QGis plugin to connect to the web service and started a cross-validation process to obtain the parameters to be used for interpolation. This makes it possible to highlight morphological variations of reservoir basins due to silting phenomena, and therefore to estimate the actual capacity of the basin for a proper evaluation of the available water resource. Indeed, this is a critical step for the next phase of management. In this case, since the procedure is very long (of the order of days), the system automatically chooses to send the results via email. Moreover, once the invoked procedures end, the system allows users to choose whether to share data and results or to remove all traces of the calculation, because in some cases sensitive data and information are used and sharing could violate privacy policies. The entire project is built exclusively with open-source software.
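For orientation, a PyWPS process of the kind the service exposes can be sketched as follows; the identifier, the inputs and the GRASS module it would wrap are assumptions, not the project's actual process definitions.

```python
from pywps import Process, LiteralInput, LiteralOutput

class Interpolate(Process):
    """Minimal PyWPS process skeleton in the spirit of the service described
    above; everything service-specific here is a hypothetical placeholder."""

    def __init__(self):
        inputs = [LiteralInput("points", "Path to XYZ point file",
                               data_type="string")]
        outputs = [LiteralOutput("surface", "Path to interpolated raster",
                                 data_type="string")]
        super().__init__(
            self._handler,
            identifier="interpolate_bathymetry",
            title="Interpolate bathymetric survey points",
            inputs=inputs,
            outputs=outputs,
        )

    def _handler(self, request, response):
        points = request.inputs["points"][0].data
        # A real handler would hand the job to the queueing daemon and invoke
        # a GRASS interpolation module (e.g. v.surf.rst) on the point file.
        response.outputs["surface"].data = points.replace(".xyz", ".tif")
        return response
```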
Fantasy Baseball with a Statistical Twist
ERIC Educational Resources Information Center
Koban, Lori; McNelis, Erin
2008-01-01
Fantasy baseball, a game invented in 1980, allows baseball fans to become managers of pretend baseball teams. In most fantasy baseball leagues, participants choose teams consisting of major league players who they believe will do well in five offensive categories (batting average, home runs, runs batted in, stolen bases, and runs scored) or in…
Enquiring Minds: A "Radical" Curriculum Project?
ERIC Educational Resources Information Center
Morgan, John
2011-01-01
This article focuses on Enquiring Minds, a three-year curriculum development project funded by Microsoft as part of its Partners in Learning programme and run by Futurelab. The article suggests that the project is best understood as an example of a new type of "curriculum entrepreneurialism" that is impatient with the traditional…
Interoperability challenges for the Sustainable Management of seagrass meadows (Invited)
NASA Astrophysics Data System (ADS)
Nativi, S.; Pastres, R.; Bigagli, L.; Venier, C.; Zucchetta, M.; Santoro, M.
2013-12-01
Seagrass meadows (marine angiosperm plants) occupy less than 0.2% of the global ocean surface yet annually store about 10-18% of the so-called 'Blue Carbon', i.e. the carbon stored in coastal vegetated areas. Recent literature estimates that the flux to the long-term carbon sink in seagrasses represents 10-20% of seagrasses' global average production. Such figures can be translated into economic benefits, taking into account that a ton of carbon dioxide in Europe is priced at around 15 € in the carbon market. This means that the organic carbon retained in seagrass sediments in the Mediterranean is worth 138-1128 billion €, which represents 6-23 € per square meter. This is 9-35 times more than one square meter of tropical forest soil (0.66 € per square meter), or 5-17 times when considering both the above- and belowground compartments of tropical forests. According to the most conservative estimates, about 10% of the Mediterranean meadows have been lost during the last century. In the framework of the GEOSS (Global Earth Observation System of Systems) initiative, the MEDINA project (funded by the European Commission and coordinated by the University of Ca'Foscari in Venice) prepared a showcase as part of the GEOSS Architecture Interoperability Pilot, Phase 6 (AIP-6). This showcase aims at providing a tool for the sustainable management of seagrass meadows along the Mediterranean coastline. The application is based on an interoperability framework providing a set of brokerage services to easily ingest and run a Habitat Suitability model (a model predicting the probability that a given site provides a suitable habitat for the development of a seagrass meadow, and the expected average coverage). The presentation discusses such a framework, explaining how the input data is discovered, accessed and processed to ingest the model (developed in the MEDINA project). Furthermore, the brokerage framework provides the necessary services to run the model and visualize results with a low entry barrier for scientists.
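A habitat suitability model of the kind ingested here maps environmental covariates to a probability of seagrass presence, typically through a logistic link. The toy scoring function below illustrates the shape of such a model; the covariates and coefficients are invented for illustration and are not the MEDINA model.

```python
import numpy as np

def suitability(depth_m, light_frac, salinity_psu):
    """Logistic combination of environmental covariates -> probability in [0, 1].
    Coefficients are illustrative assumptions, not fitted values."""
    z = 2.0 - 0.4 * depth_m + 3.0 * light_frac + 0.05 * (salinity_psu - 38.0)
    return 1.0 / (1.0 + np.exp(-z))

sites = np.array([
    # depth (m), fraction of surface irradiance, salinity (PSU)
    [3.0, 0.60, 38.0],
    [12.0, 0.15, 37.5],
    [25.0, 0.02, 38.5],
])
for d, l, s in sites:
    print(f"depth {d:>4.1f} m -> P(suitable) = {suitability(d, l, s):.2f}")
```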
Building a Snow Data Management System using Open Source Software (and IDL)
NASA Astrophysics Data System (ADS)
Goodale, C. E.; Mattmann, C. A.; Ramirez, P.; Hart, A. F.; Painter, T.; Zimdars, P. A.; Bryant, A.; Brodzik, M.; Skiles, M.; Seidel, F. C.; Rittger, K. E.
2012-12-01
At NASA's Jet Propulsion Laboratory free and open source software is used every day to support a wide range of projects, from planetary to climate to research and development. In this abstract I will discuss the key role that open source software has played in building a robust science data processing pipeline for snow hydrology research, and how the system is also able to leverage programs written in IDL, making JPL's Snow Data System a hybrid of open source and proprietary software. Main points:
- The design of the Snow Data System (illustrating how the collection of sub-systems is combined to create a complete data processing pipeline)
- The challenges of moving from a single algorithm on a laptop to running hundreds of parallel algorithms on a cluster of servers (lessons learned): code changes, software-license-related challenges, storage requirements
- System evolution (from data archiving, to data processing, to data on a map, to near-real-time products and maps)
- Road map for the next 6 months (including how easily we re-used the SnowDS code base to support the Airborne Snow Observatory mission)
Software in use and software licenses:
- IDL - used for pre- and post-processing of data; proprietary license held by Exelis
- Apache OODT - data management and workflow processing; Apache License Version 2
- GDAL - geospatial data processing library, currently used for data re-projection; X/MIT license
- GeoServer - WMS server; General Public License Version 2.0
- Leaflet.js - JavaScript web mapping library; Berkeley Software Distribution License
- Python - glue code and miscellaneous data processing support; Python Software Foundation License
- Perl - script wrapper for running the SCAG algorithm; General Public License Version 3
- PHP - front-end web application programming; PHP License Version 3.01
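For the GDAL re-projection step mentioned in the license list, a minimal generic example follows; the file names and target projection are placeholders, not the Snow Data System's actual configuration.

```python
from osgeo import gdal

# The abstract notes GDAL is used for data re-projection in the snow
# pipeline. A minimal, generic example of that step (file names and
# target projection are placeholders):

gdal.UseExceptions()
gdal.Warp(
    "swe_epsg4326.tif",      # output raster, reprojected
    "swe_sinusoidal.tif",    # input raster (e.g. a MODIS-derived grid)
    dstSRS="EPSG:4326",      # target spatial reference
    resampleAlg="bilinear",  # smooth resampling for continuous fields
)
```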
Gowda, Anoop; Bangera, Shobith
2017-01-01
Introduction: Globally, the incidence of Chronic Kidney Disease (CKD) is rapidly rising, with a huge burden on the life expectancy of patients. Regular haemodialysis improves the quality of life of these patients, who receive treatment at either government-run or private-sector hospitals. Differences in disease pattern, comorbidity, patient management and number of access failures can be observed in these settings. Aim: The present study was carried out to examine the selection, management and disease pattern of CKD patients admitted for dialysis in a government-run and a private hospital. Materials and Methods: A cross-sectional study of patients (18-90 years) admitted and undergoing dialysis at a government-run (N=129) and a private hospital (N=182) was undertaken in Karnataka, India. Parameters such as comorbidity (diabetes), number of dialysis sessions per week, number of access failures, and follow-up visits were compared between these patients. The Chi-squared test was used to compare the data. All tests were two-tailed and p<0.05 was considered significant. Results: More younger patients, and more associated comorbidity, were seen among patients admitted to the government-run hospital (p<0.001), with no gender bias in the selection of patients for dialysis between the two hospitals. Follow-ups with a nephrologist, number of dialysis sessions per week and erythropoietin supplements administered were significantly higher among private hospital patients (p<0.001). The number of dialysis sessions and mean haemoglobin level were lower in government-run hospital patients than in private hospital patients. No statistical difference in access failure was seen between the two settings. Conclusion: No bias in the management of CKD patients was seen between the two hospitals, although available facilities seemed to vary. PMID:28969180
Gowda, Anoop; Dutt, Aswini Raghavendra; Bangera, Shobith
2017-08-01
Globally, the incidence of Chronic Kidney Disease (CKD) is rapidly rising, with a huge burden on the life expectancy of patients. Regular haemodialysis improves the quality of life of these patients, who receive treatment at either government-run or private-sector hospitals. Differences in disease pattern, comorbidity, patient management and number of access failures can be observed in these settings. The present study was carried out to examine the selection, management and disease pattern of CKD patients admitted for dialysis in a government-run and a private hospital. A cross-sectional study of patients (18-90 years) admitted and undergoing dialysis at a government-run (N=129) and a private hospital (N=182) was undertaken in Karnataka, India. Parameters such as comorbidity (diabetes), number of dialysis sessions per week, number of access failures, and follow-up visits were compared between these patients. The Chi-squared test was used to compare the data. All tests were two-tailed and p<0.05 was considered significant. More younger patients, and more associated comorbidity, were seen among patients admitted to the government-run hospital (p<0.001), with no gender bias in the selection of patients for dialysis between the two hospitals. Follow-ups with a nephrologist, number of dialysis sessions per week and erythropoietin supplements administered were significantly higher among private hospital patients (p<0.001). The number of dialysis sessions and mean haemoglobin level were lower in government-run hospital patients than in private hospital patients. No statistical difference in access failure was seen between the two settings. No bias in the management of CKD patients was seen between the two hospitals, although available facilities seemed to vary.
ERIC Educational Resources Information Center
Fickes, Michael
1998-01-01
Examines issues concerning outsourcing student transportation services: cost; management needs and capabilities; goals; and politics. Critical areas of transportation management are highlighted such as personnel management, student management and discipline, risk management, fleet analysis, and routing and scheduling. (GR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1980-05-01
The National Conference of State Legislatures' Small-Scale Hydroelectric Policy Project is designed to assist selected state legislatures in looking at the benefits that a state can derive from the development of small-scale hydro, and in carrying out a review of state laws and regulations that affect the development of the state's small-scale hydro resources. The successful completion of the project should help establish state statutes and regulations that are consistent with the efficient development of small-scale hydro. As part of the project's work with state legislatures, seven case studies of small-scale hydro sites were conducted to provide a general analysis and overview of the significant problems and opportunities for the development of this energy resource. The case study approach was selected to expose the actual difficulties and advantages involved in developing a specific site. Such an examination of real development efforts will clearly reveal the important aspects about small-scale hydro development which could be improved by statutory or regulatory revision. Moreover, the case study format enables the formulation of generalized opportunities for promoting small-scale hydro based on specific development experiences. The case study for small-scale hydro power development at the City of Portland's water reserve in the Bull Run Forest is presented with information included on the Bull Run hydro power potential, current water usage, hydro power regulations and plant licensing, technical and economic aspects of Bull Run project, and the environmental impact. (LCL)
Halofsky, Joshua S; Halofsky, Jessica E; Burcsu, Theresa; Hemstrom, Miles A
Determining appropriate actions to create or maintain landscapes resilient to climate change is challenging because of uncertainty associated with potential effects of climate change and their interactions with land management. We used a set of climate-informed state-and-transition models to explore the effects of management and natural disturbances on vegetation composition and structure under different future climates. Models were run for dry forests of central Oregon under a fire suppression scenario (i.e., no management other than the continued suppression of wildfires) and an active management scenario characterized by light to moderate thinning from below and some prescribed fire, planting, and salvage logging. Without climate change, area in dry province forest types remained constant. With climate change, dry mixed-conifer forests increased in area (by an average of 21–26% by 2100), and moist mixed-conifer forests decreased in area (by an average of 36–60% by 2100), under both management scenarios. Average area in dry mixed-conifer forests varied little by management scenario, but potential decreases in the moist mixed-conifer forest were lower with active management. With changing climate in the dry province of central Oregon, our results suggest the likelihood of sustaining current levels of dense, moist mixed-conifer forests with large-diameter, old trees is low (less than a 10% chance) irrespective of management scenario; an opposite trend was observed under no climate change simulations. However, results also suggest active management within the dry and moist mixed-conifer forests that creates less dense forest conditions can increase the persistence of larger-diameter, older trees across the landscape. Owing to projected increases in wildfire, our results also suggest future distributions of tree structures will differ from the present. Overall, our projections indicate proactive management can increase forest resilience and sustain some societal values, particularly in drier forest types. However, opportunities to create more disturbance-adapted systems are finite, all values likely cannot be sustained at current levels, and levels of resilience success will likely vary by dry province forest type. Land managers planning for a future without climate change may be assuming a future that is unlikely to exist.
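The climate-informed state-and-transition approach above lends itself to a compact illustration. The following Python sketch advances landscape fractions with an annual transition matrix; the states and probabilities are invented placeholders, not values from the study.

```python
import numpy as np

# Schematic of a state-and-transition simulation: vegetation classes
# evolve under annual transition probabilities that would differ by
# climate and management scenario. All values here are illustrative.

states = ["dry_mixed_conifer", "moist_mixed_conifer", "post_fire"]

# transition[i, j] = P(state i -> state j in one year); rows sum to 1
transition = np.array([
    [0.97, 0.01, 0.02],   # dry: mostly persists, occasional fire
    [0.03, 0.95, 0.02],   # moist: slowly converts toward dry
    [0.50, 0.10, 0.40],   # post-fire: recovers, mostly to dry
])

area = np.array([0.40, 0.50, 0.10])   # initial fraction of landscape

for year in range(2010, 2101):
    area = area @ transition           # advance the landscape one year

for name, frac in zip(states, area):
    print(f"{name}: {frac:.2f} of landscape by 2100")
```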
Distributed Virtual System (DIVIRS) Project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1993-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on contract NCC 2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to program parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the virtual system model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
DIstributed VIRtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1994-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
DIstributed VIRtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, Clifford B.
1995-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
Distributed Virtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1993-01-01
As outlined in the continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC 2-539, the investigators are developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; developing communications routines that support the abstractions implemented; continuing the development of file and information systems based on the Virtual System Model; and incorporating appropriate security measures to allow the mechanisms developed to be used on an open network. The goal throughout the work is to provide a uniform model that can be applied to both parallel and distributed systems. The authors believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. The work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
A microkernel design for component-based parallel numerical software systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.
1999-01-13
What is the minimal software infrastructure and what type of conventions are needed to simplify development of sophisticated parallel numerical application codes using a variety of software components that are not necessarily available as source code? We propose an opaque object-based model where the objects are dynamically loadable from the file system or network. The microkernel required to manage such a system needs to include, at most: (1) a few basic services, namely a mechanism for loading objects at run time via dynamic link libraries, and consistent schemes for error handling and memory management; and (2) selected methods that all objects share, to deal with object life (destruction, reference counting, relationships), and object observation (viewing, profiling, tracing). We are experimenting with these ideas in the context of extensible numerical software within the ALICE (Advanced Large-scale Integrated Computational Environment) project, where we are building the microkernel to manage the interoperability among various tools for large-scale scientific simulations. This paper presents some preliminary observations and conclusions from our work with microkernel design.
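To make the two microkernel services named above concrete (run-time loading and reference-counted object life), here is a minimal sketch; the registry API is hypothetical and only illustrates the idea, not the ALICE microkernel itself.

```python
import importlib

# Sketch of two microkernel services: loading objects at run time and
# managing object life via reference counting. The API is hypothetical,
# meant only to make the model concrete.

class Handle:
    """Opaque, reference-counted wrapper around a dynamically loaded object."""

    def __init__(self, obj):
        self._obj = obj
        self._refs = 1

    def retain(self):
        self._refs += 1
        return self

    def release(self):
        self._refs -= 1
        if self._refs == 0:
            destroy = getattr(self._obj, "destroy", None)
            if destroy:              # let the component clean itself up
                destroy()
            self._obj = None

def load_component(module_name, factory_name):
    """Dynamically load a component, analogous to a dlopen-based kernel."""
    module = importlib.import_module(module_name)
    factory = getattr(module, factory_name)
    return Handle(factory())

# e.g. h = load_component("my_solver_plugin", "make_solver")  # hypothetical
```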
INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Maeno, T
Abstract The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
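The "light-weight MPI wrapper" idea can be sketched as follows: one MPI job whose ranks each launch an independent single-threaded payload, so serial workloads fill multi-core worker nodes. The payload command below is a hypothetical placeholder, not PanDA's actual pilot interface.

```python
from mpi4py import MPI
import subprocess
import sys

# Sketch of a light-weight MPI wrapper: each rank of one MPI job runs an
# independent single-threaded payload, so serial workloads occupy a
# supercomputer's multi-core nodes in parallel. The payload command is a
# hypothetical placeholder.

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank processes its own input slice, e.g. one Monte Carlo job.
cmd = ["./run_event_generator.sh", f"input.{rank:04d}.txt"]  # hypothetical
ret = subprocess.call(cmd)

# Gather exit codes on rank 0 so the wrapper can report overall success.
codes = comm.gather(ret, root=0)
if rank == 0 and any(codes):
    sys.exit("some payload ranks failed: %s" % codes)
```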
The SNS/HFIR Web Portal System - How Can it Help Me?
NASA Astrophysics Data System (ADS)
Miller, Stephen D.; Geist, Al; Herwig, Kenneth W.; Peterson, Peter F.; Reuter, Michael A.; Ren, Shelly; Bilheux, Jean-Christophe; Campbell, Stuart I.; Kohl, James A.; Vazhkudai, Sudharshan S.; Cobb, John W.; Lynch, Vickie E.; Chen, Meili; Trater, James R.; Smith, Bradford C.; Swain, Tom (William); Huang, Jian; Mikkelson, Ruth; Mikkelson, Dennis; Green, Mark L.
2010-11-01
In a busy world, continuing with the status quo, doing things the way we are already familiar with, often seems the most efficient way to conduct our work. We look for the value-add to decide if investing in a new method is worth the effort. How shall we evaluate whether we have reached this tipping point for change? For contemporary researchers, understanding the properties of the data is a good starting point. The new generation of neutron scattering instruments being built offers higher resolution and produces one or more orders of magnitude more data than the previous generation of instruments. For instance, we have outgrown being able to perform some important tasks on our laptops - the data are too big and the computations would simply take too long. These large datasets can be problematic as facility users now begin to grapple with many of the same issues faced by more established computing communities, including data access, management, and movement, data format standards, distributed computing, and collaboration, among others. The Neutron Science Portal has been architected, designed, and implemented to provide users with an easy-to-use interface for managing and processing data, while also meeting modern cybersecurity requirements imposed on institutions. The cost of entry for users has been lowered by a web interface providing access to backend portal resources. Users can browse or search for data which they are allowed to see, data reduction applications can be run without having to load the software, sample activation calculations can be performed for SNS and HFIR beamlines, McStas simulations can be run on TeraGrid and ORNL computers, and advanced analysis applications such as those being produced by the DANSE project can be run. Behind the scenes, a "live cataloging" system automatically catalogs and archives experiment data via the data management system and provides proposal team members access to their experiment data. The complexity of data movement and of utilizing distributed computing resources has been taken care of on behalf of users. Collaboration is facilitated by providing users a read/writeable common area, shared
Impact of an HIV prevention intervention on condom use among long distance truckers in India.
Juneja, Sachin; Rao Tirumalasetti, Vasudha; Mishra, Ram Manohar; Sethu, Shekhar; Singh, Indra Ramyash
2013-03-01
This paper examines the impact of three components of an HIV prevention program (mid-media, interpersonal communication, and project-run clinics) on consistent condom use by long distance truckers with paid and non-paid female partners in India. Data from 2,723 long distance truckers were analyzed using the propensity score matching approach. Based on utilization of services, the following categories of intervention exposure were derived: no exposure, exposure only to mid-media, exposure only to mid-media and interpersonal communication, exposure only to mid-media and project-run clinics, and exposure to all three intervention components. Compared to those who were not exposed to any intervention, exposure to mid-media alone increased consistent condom use with paid female partners by about ten percent. Exposure to mid-media and visits to project-run clinics increased consistent condom use with non-paid female partners by 26 %. These findings suggest that mid-media events and clinics were the most effective package of services to increase consistent condom use among the long distance truckers.
NASA Technical Reports Server (NTRS)
Jenkins, Kimberly R.
2003-01-01
We were a team of five engineers responsible for the command and data systems used during experiment integration and testing of Spacelab payloads. For the most part, we performed component level testing for the experiments, the first phase of testing for Spacelab Program payloads. In the beginning, the members of my team didn't know what to think of me, but as time went by they realized that I was sincere. A relationship of trust developed. Since then, we've all moved on to other projects, but every now and then we run into each other and the bond that we have is still strong. Every good manager wants to do well by the people working on a project. One way to achieve this is simple: Pay attention to the environment in which your employees work. People warn you not to get mired in the details; they say you might miss the big picture. But sometimes it's the details that give you a better view of what the big picture is all about.
Hernández-Encuentra, Eulàlia; Gómez-Zúñiga, Beni; Guillamón, Noemí; Boixadós, Mercè; Armayones, Manuel
2015-12-01
The purpose of this first part of the APTIC (Patient Organisations and ICT) project is to design and run an online collaborative social network for paediatric patient organizations (PPOs), and to analyse the needs of PPOs in Spain in order to identify opportunities to improve health services through the use of ICT. A convenience sample of staff from 35 PPOs (54.68% response rate) participated in a structured online survey and three focus groups (12 PPOs). Paediatric patient organizations' major needs are to provide accredited and managed information, increase personal support and assistance and promote joint commitment to health care. Moreover, PPOs believe in the Internet's potential to meet their needs and support their activities. Basic limitations to using the Internet are lack of knowledge and resources. The discussion of the data includes key elements of designing an online collaborative social network and reflections on the health services provided. © 2014 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Coudarcher, Rémi; Duculty, Florent; Serot, Jocelyn; Jurie, Frédéric; Derutin, Jean-Pierre; Dhome, Michel
2005-12-01
SKiPPER is a SKeleton-based Parallel Programming EnviRonment that has been under development since 1996 at the LASMEA laboratory of Blaise Pascal University, France. The main goal of the project was to demonstrate the applicability of skeleton-based parallel programming techniques to the fast prototyping of reactive vision applications. This paper deals with the special features embedded in the latest version of the project: algorithmic skeleton nesting capabilities and a fully dynamic operating model. Through the case study of a complete and realistic image processing application, in which we point out the requirement for skeleton nesting, we present the operating model of this feature. The work described here is one of the few reported experiments showing the application of skeleton nesting facilities to the parallelisation of a realistic application, especially in the area of image processing. The image processing application we have chosen is an appearance-based 3D face-tracking algorithm.
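The notion of skeleton nesting can be illustrated compactly. The sketch below defines a FARM skeleton and nests one farm inside another; it uses the Python standard library purely to illustrate the programming model, not SKiPPER's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustration of an algorithmic "farm" skeleton and of skeleton
# nesting. SKiPPER itself targets dedicated parallel hardware; this
# stdlib version only demonstrates the programming model.

def farm(worker, items, width=4):
    """FARM skeleton: apply `worker` to each item in parallel."""
    with ThreadPoolExecutor(max_workers=width) as pool:
        return list(pool.map(worker, items))

def process_window(window):
    return sum(window)           # stand-in per-window image kernel

def process_frame(frame):
    # Inner farm nested inside the outer one: each frame's windows
    # are themselves farmed out, mirroring skeleton nesting.
    return farm(process_window, frame, width=2)

frames = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]    # toy "image" stream
print(farm(process_frame, frames))               # outer farm over frames
```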
Continuing and developing the engagement with Mediterranean stakeholders in the CLIM-RUN project
NASA Astrophysics Data System (ADS)
Goodess, Clare
2013-04-01
The CLIM-RUN case studies provide a real-world and Mediterranean context for bringing together experts on the demand and supply side of climate services. They are essential to the CLIM-RUN objective of using iterative and bottom-up (i.e., stakeholder led) approaches for optimizing the two-way information transfer between climate experts and stakeholders - and focus on specific locations and sectors (such as tourism and renewable energy). Stakeholder involvement has been critical from the start of the project in March 2011, with an early series of targeted workshops used to define the framework for each case study as well as the needs of stakeholders. Following these workshops, the user needs were translated into specific requirements from climate observations and models and areas identified where additional modelling and analysis are required. The first set of new products and tools produced by the CLIM-RUN modelling and observational experts are presented in a series of short briefing notes. A second round of CLIM-RUN stakeholder workshops will be held for each of the case studies in Spring 2013 as an essential part of the fourth CLIM-RUN key stage: Consolidation and collective review/assessment. During these workshops the process of interaction between CLIM-RUN scientists and case-study stakeholders will be reviewed, as well as the utility of the products and information developed in CLIM-RUN. Review questions will include: How far have we got? How successful have we been? What are the remaining problems/gaps? How to sustain and extend the interactions? The process of planning for and running these second workshops will be outlined and emerging outcomes presented, focusing on common messages which are relevant for development of the CLIM-RUN protocol for providing improved climate services to stakeholders together with the identification of best practices and policy recommendations for climate services development.
NASA Astrophysics Data System (ADS)
Micheletty, P. D.; Perrot, D.; Day, G. N.; Lhotak, J.; Quebbeman, J.; Park, G. H.; Carney, S.
2017-12-01
Water supply forecasting in the western United States is inextricably linked to snowmelt processes, as approximately 70-85% of total annual runoff comes from water stored in seasonal mountain snowpacks. Snowmelt-generated streamflow is vital to a variety of downstream uses; the Upper Colorado River Basin (UCRB) alone provides water supply for 25 million people, irrigation water for 3.5 million acres, and drives hydropower generation at Lake Powell. April-July water supply forecasts produced by the National Weather Service (NWS) Colorado Basin River Forecast Center (CBRFC) are critical to basin water management. The primary objective of this project, as part of the NASA Water Resources Applied Science Program, is to improve water supply forecasting for the UCRB by assimilating satellite and ground snowpack observations into a distributed hydrologic model at various times during the snow accumulation and melt seasons. To do this, we have built a framework that uses an Ensemble Kalman Filter (EnKF) to update modeled snow water equivalent (SWE) states in the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) with spatially interpolated SNOTEL SWE observations and products from the MODIS Snow Covered-Area and Grain size retrieval algorithm (when available). We have generated April-July water supply reforecasts for a 20-year period (1991-2010) for several headwater catchments in the UCRB using HL-RDHM and snow data assimilation in the Ensemble Streamflow Prediction (ESP) framework. The existing CBRFC ESP reforecasts will provide a baseline for comparison to determine whether the data assimilation process adds skill to the water supply forecasts. Preliminary results from one headwater basin show improved skill in water supply forecasting when HL-RDHM is run with the data assimilation step compared to HL-RDHM run without the data assimilation step, particularly in years when MODSCAG data were available (2000-2010). The final forecasting framework developed during this project will be delivered to CBRFC and run operationally for a set of pilot basins.
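As a rough illustration of the assimilation step described above, the following sketch performs a perturbed-observation EnKF update of an SWE ensemble with a single point observation; ensemble size, error values and grid dimensions are illustrative assumptions, not the project's configuration.

```python
import numpy as np

# Minimal Ensemble Kalman Filter analysis step for updating modeled SWE
# with one interpolated SWE observation. All dimensions and error values
# are illustrative assumptions.

rng = np.random.default_rng(0)
n_ens, n_cells = 30, 100

X = rng.normal(250.0, 40.0, size=(n_ens, n_cells))  # prior SWE ensemble (mm)
H = np.zeros(n_cells); H[10] = 1.0                   # observe one grid cell
obs, obs_err = 300.0, 15.0                           # SNOTEL-like obs (mm)

# Ensemble statistics
Xm = X.mean(axis=0)
A = X - Xm                                           # state anomalies
HX = X @ H                                           # obs-space ensemble
PHt = A.T @ (HX - HX.mean()) / (n_ens - 1)           # cov(state, obs)
HPHt = np.var(HX, ddof=1)

K = PHt / (HPHt + obs_err**2)                        # Kalman gain

# Perturbed-observation update: each member assimilates a noisy obs copy
for i in range(n_ens):
    d = obs + rng.normal(0.0, obs_err) - HX[i]
    X[i] += K * d

print("posterior mean at observed cell:", X[:, 10].mean())
```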
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-27
... Lake Tahoe Passenger Ferry Project, Placer and El Dorado Counties and City of South Lake Tahoe... Statement (EIS) for the proposed Lake Tahoe Passenger Ferry Project. The project consists of a cross- lake ferry service with a South Shore Ferry Terminal at the Ski Run Marina in South Lake Tahoe, El Dorado...
ERIC Educational Resources Information Center
MacKenzie, Jane; Ruxton, Graeme
2006-01-01
Project work represents a significant component of most Bioscience degrees. Conscious that students are not necessarily given adequate preparation for their final year project, we have investigated two core elements in the 3rd year of a 4-year Honours programme. One element, an investigative project on aspects of insect biology, has run for…
Morpheus Alhat Tether Test Preparations
2014-03-27
CAPE CANAVERAL, Fla. – Technicians watch as a crane lowers the Project Morpheus prototype lander onto a launch pad at a new launch site at the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. Preparations are underway for a tether test. The launch pad was moved to a different location at the landing facility to support the next phase of flight testing. Project Morpheus integrates NASA’s automated landing and hazard avoidance technology, or ALHAT, and an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Ben Smegelsky
Morpheus Alhat Tether Test Preparations
2014-03-27
CAPE CANAVERAL, Fla. – A crane lowers the Project Morpheus prototype lander onto a launch pad at a new launch site at the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. Preparations are underway for a tether test. The launch pad was moved to a different location at the landing facility to support the next phase of flight testing. Project Morpheus integrates NASA’s automated landing and hazard avoidance technology, or ALHAT, and an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Ben Smegelsky
2013-12-10
CAPE CANAVERAL, Fla. – The first free flight of the Project Morpheus prototype lander begins as the engine fires and the lander lifts off at the north of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. Testing of the prototype lander was performed at NASA’s Johnson Space Center in Houston in preparation for tethered and free flight testing at Kennedy. Project Morpheus integrates NASA’s automated landing and hazard avoidance technology, or ALHAT, with an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to asteroids and other planetary surfaces. The landing facility will provide the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov. Photo credit: NASA/Kim Shiflett
Morpheus Alhat Tether Test Preparations
2014-03-27
CAPE CANAVERAL, Fla. – Engineers and technicians monitor the progress as a crane lifts the Project Morpheus prototype lander off the ground for a tether test near a new launch site at the north end of the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. The launch pad was moved to a different location at the landing facility to support the next phase of flight testing. Project Morpheus integrates NASA’s automated landing and hazard avoidance technology, or ALHAT, and an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Ben Smegelsky
Morpheus Campaign 2A Tether Test
2014-03-27
CAPE CANAVERAL, Fla. – NASA's Project Morpheus prototype lander is positioned near a new launch site at the north end of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida for a tethered test. The test will be performed to verify the lander's recently installed autonomous landing and hazard avoidance technology, or ALHAT, sensors and integration system. The launch pad was moved to a different location at the landing facility to support the next phase of flight testing. Project Morpheus integrates NASA’s ALHAT and an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Glenn Benson
2014-01-21
CAPE CANAVERAL, Fla. – The Project Morpheus prototype lander is transported to a launch pad at the north end of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. The prototype lander is being prepared for its fourth free flight test at Kennedy. Morpheus will launch from the ground over a flame trench and then descend and land on a dedicated pad inside the autonomous landing and hazard avoidance technology, or ALHAT, hazard field. Project Morpheus integrates NASA’s ALHAT and an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://www.nasa.gov/centers/johnson/exploration/morpheus. Photo credit: NASA/Cory Huston
2013-12-10
CAPE CANAVERAL, Fla. – The first free flight of the Project Morpheus prototype lander begins as the engine fires and the lander begins to lift off at the north of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. Testing of the prototype lander was performed at NASA’s Johnson Space Center in Houston in preparation for tethered and free flight testing at Kennedy. Project Morpheus integrates NASA’s automated landing and hazard avoidance technology, or ALHAT, with an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to asteroids and other planetary surfaces. The landing facility will provide the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov. Photo credit: NASA/Kim Shiflett
Morpheus Alhat Tether Test Preparations
2014-03-27
CAPE CANAVERAL, Fla. – NASA's Project Morpheus prototype lander is positioned near a new launch site at the north end of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida for a tether test. The launch pad was moved to a different location at the landing facility to support the next phase of flight testing. Project Morpheus integrates NASA’s automated landing and hazard avoidance technology, or ALHAT, and an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to other planetary surfaces. In the foreground of the photo is the ALHAT field. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA/Ben Smegelsky
2014-01-21
CAPE CANAVERAL, Fla. – The Project Morpheus prototype lander is being lifted by crane for positioning on a launch pad at the north end of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. The prototype lander is being prepared for its fourth free flight test at Kennedy. Morpheus will launch from the ground over a flame trench and then descend and land on a dedicated pad inside the autonomous landing and hazard avoidance technology, or ALHAT, hazard field. Project Morpheus integrates NASA’s ALHAT and an engine that runs on liquid oxygen and methane, or green propellants, into a fully-operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://www.nasa.gov/centers/johnson/exploration/morpheus. Photo credit: NASA/Cory Huston
Koontz, Tomas M; Sen, Sucharita
2013-03-01
When central governments decentralize natural resource management (NRM), they often retain an interest in the local efforts and provide funding for them. Such outside investments can serve an important role in moving community-based efforts forward. At the same time, they can represent risks to the community if government resources are not stable over time. Our focus in this article is on the effects of withdrawal of government resources from community-based NRM. A critical question is how to build institutional capacity to carry on when the government funding runs out. This study compares institutional survival and coping strategies used by community-based project organizations in two different contexts, India and the United States. Despite higher links to livelihoods, community participation, and private benefits, efforts in the Indian cases exhibited lower survival rates than did those in the U.S. cases. Successful coping strategies in the U.S. context often involved tapping into existing institutions and resources. In the Indian context, successful coping strategies often involved building broad community support for the projects and creatively finding additional funding sources. On the other hand, the lack of local community interest, due to the top-down development approach and sometimes narrow benefit distribution, often challenged organizational survival and project maintenance.
NASA Astrophysics Data System (ADS)
Koontz, Tomas M.; Sen, Sucharita
2013-03-01
When central governments decentralize natural resource management (NRM), they often retain an interest in the local efforts and provide funding for them. Such outside investments can serve an important role in moving community-based efforts forward. At the same time, they can represent risks to the community if government resources are not stable over time. Our focus in this article is on the effects of withdrawal of government resources from community-based NRM. A critical question is how to build institutional capacity to carry on when the government funding runs out. This study compares institutional survival and coping strategies used by community-based project organizations in two different contexts, India and the United States. Despite higher links to livelihoods, community participation, and private benefits, efforts in the Indian cases exhibited lower survival rates than did those in the U.S. cases. Successful coping strategies in the U.S. context often involved tapping into existing institutions and resources. In the Indian context, successful coping strategies often involved building broad community support for the projects and creatively finding additional funding sources. On the other hand, the lack of local community interest, due to the top-down development approach and sometimes narrow benefit distribution, often challenged organizational survival and project maintenance.
Regional Climate Sensitivity- and Historical-Based Projections to 2100
NASA Astrophysics Data System (ADS)
Hébert, Raphaël.; Lovejoy, Shaun
2018-05-01
Reliable climate projections at the regional scale are needed in order to evaluate climate change impacts and inform policy. We develop an alternative method for projections based on the transient climate sensitivity (TCS), which relies on a linear relationship between the forced temperature response and the strongly increasing anthropogenic forcing. The TCS is evaluated at the regional scale (5° by 5°), and projections to 2100 are made accordingly using the high and low Representative Concentration Pathways emission scenarios. We find that there are large spatial discrepancies between the regional TCS from 5 historical data sets and 32 global climate model (GCM) historical runs, and furthermore that the global mean GCM TCS is about 15% too high. Given that the GCM Representative Concentration Pathway scenario runs are mostly linear with respect to their (inadequate) TCS, we conclude that historical methods of regional projection are better suited, given that they are directly calibrated on the real-world (historical) climate.
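A minimal sketch of the projection idea, assuming synthetic data: the TCS is estimated per grid cell as the least-squares slope of temperature against forcing, then scaled by a scenario's additional forcing. None of the numbers below come from the paper.

```python
import numpy as np

# Sketch of a TCS-style regional projection: per-cell slope of the
# temperature response against anthropogenic forcing, scaled by a
# scenario's additional forcing. Data are synthetic placeholders.

years = np.arange(1880, 2021)
forcing = 0.02 * (years - 1880)                  # toy forcing ramp (W/m2)

nlat, nlon = 36, 72                              # 5-degree grid
true_tcs = np.random.default_rng(1).uniform(0.3, 1.2, (nlat, nlon))
temp = true_tcs[..., None] * forcing + \
       np.random.default_rng(2).normal(0, 0.1, (nlat, nlon, years.size))

# Least-squares slope of T against forcing, one fit per grid cell
f = forcing - forcing.mean()
tcs = ((temp - temp.mean(axis=-1, keepdims=True)) @ f) / (f @ f)

# Project to 2100 by scaling a scenario's additional forcing by the TCS
extra_forcing_2100 = 3.0                         # assumed scenario value
projection_2100 = tcs * extra_forcing_2100
print("global-mean projected warming:", projection_2100.mean())
```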
USDA-ARS?s Scientific Manuscript database
The TIR-NB-LRR gene, Resistance to Uncinula necator 1 (RUN1), from Vitis rotundifolia was recently identified and confirmed to confer resistance to the grapevine powdery mildew fungus Erysiphe necator (syn. U. necator) in transgenic Vitis vinifera cultivars. However, powdery mildew cleistothecia ha...
Implications of random variation in the Stand Prognosis Model
David A. Hamilton
1991-01-01
Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...
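The caution above about single stochastic runs can be made concrete. The sketch below is a toy stand-growth model, not the Prognosis Model itself; it pairs replication with common random numbers (the same seed for each alternative), a standard technique so that differences between management alternatives reflect the alternatives rather than run-to-run noise.

```python
import random

# Toy illustration of why single runs of a stochastic model mislead
# comparisons, and of the common-random-numbers remedy: each pair of
# alternatives shares a seed, and many replicates are averaged.

def simulate_yield(thinning, seed):
    rng = random.Random(seed)          # common seed across alternatives
    stand = 100.0
    for _ in range(50):                # 50 years of stochastic growth
        stand *= 1.0 + rng.gauss(0.02 if thinning else 0.015, 0.01)
    return stand

diffs = [simulate_yield(True, s) - simulate_yield(False, s)
         for s in range(200)]
print("mean yield advantage of thinning:", sum(diffs) / len(diffs))
```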
30 CFR 203.75 - What risk do I run if I request a redetermination?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 2 2010-07-01 2010-07-01 false What risk do I run if I request a redetermination? 203.75 Section 203.75 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR... run if I request a redetermination? If you request a redetermination after we have granted you a...
Integration of PanDA workload management system with Titan supercomputer at OLCF
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Data Management and Archiving - a Long Process
NASA Astrophysics Data System (ADS)
Gebauer, Petra; Bertelmann, Roland; Hasler, Tim; Kirchner, Ingo; Klump, Jens; Mettig, Nora; Peters-Kottig, Wolfgang; Rusch, Beate; Ulbricht, Damian
2014-05-01
Implementing policies for research data management, through to data archiving, at university institutions takes a long time. Although most scientists, especially in the geosciences, are used to analyzing different sorts of data, presenting statistical results, and writing publications, sometimes based on big data records, only some of them manage their data in a standardized manner. They have much more often learned how to measure and generate large volumes of data than how to document those measurements and preserve them for the future. Changing staff and limited funding make this work more difficult, but it is essential in a progressively digital and networked world. Results from the project EWIG (translates to: Developing workflow components for long-term archiving of research data in geosciences), funded by the Deutsche Forschungsgemeinschaft, will help in this area. Together with the project partners Deutsches GeoForschungsZentrum Potsdam and Konrad-Zuse-Zentrum für Informationstechnik Berlin, a workflow was developed to transfer continuously recorded data from a meteorological city monitoring network into a long-term archive. This workflow includes quality assurance of the data as well as the description of metadata, and uses tools to prepare data packages for long-term archiving. It will serve as an exemplary model for other institutions working with similar data. The development of this workflow is closely intertwined with the educational curriculum at the Institut für Meteorologie. Designing modules to run quality checks for meteorological time series measured every minute, and preparing metadata, are tasks in current bachelor theses. Students will also test the usability of the resulting working environment. Based on these experiences, a practical guideline for integrating research data management into curricula, for postgraduates as well as for younger students, will be one of the results of this project. Especially at the beginning of a scientific career it is necessary to become familiar with all issues concerning data management. The outcomes of EWIG are intended to be generic enough to be easily adopted by other institutions. University lectures in meteorology have been started to teach future generations of scientists, right from the start, how to deal with all sorts of different data in a transparent way. The progress of the project EWIG can be followed on the web at ewig.gfz-potsdam.de
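A quality-assurance step of the kind this workflow runs on 1-minute meteorological series might look like the following sketch; the variable, limits and flag labels are assumptions, not EWIG's actual modules.

```python
import pandas as pd

# Sketch of a simple QA step for 1-minute meteorological time series:
# a spike test plus plausible-range checks. Limits and labels are
# illustrative assumptions.

def qc_temperature(series, lo=-40.0, hi=50.0, max_step=2.0):
    """Flag values jumping too fast or outside plausible bounds."""
    flags = pd.Series("good", index=series.index)
    flags[series.diff().abs() > max_step] = "spike"
    flags[(series < lo) | (series > hi)] = "range_fail"
    return flags

idx = pd.date_range("2014-01-01", periods=6, freq="min")
t = pd.Series([1.2, 1.3, 9.9, 1.4, -55.0, 1.5], index=idx)
print(qc_temperature(t))
```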
Running R Statistical Computing Environment Software on the Peregrine
R is a collaborative project that supports the development of new statistical methodologies and enjoys a large user base; the CRAN task view for High Performance Computing surveys packages and programming paradigms for better leveraging modern HPC systems. Please consult the distribution for licensing details.
NASA Astrophysics Data System (ADS)
Hayden-Lesmeister, A.; Remo, J. W.; Piazza, B.
2017-12-01
The Atchafalaya River (AR) in Louisiana is the principal distributary of the Mississippi River. Reach to system scale modifications on the AR and throughout its basin for regional flood mitigation, navigation, and hydrocarbon extraction have substantially altered the hydrologic connectivity between the river and its floodplain wetlands, threatening the ecological integrity of this globally-important ecosystem. Stakeholder groups agree that restoring flow connectivity is essential to maintaining the basin's water quality, and recent management efforts have focused on the 174 km2 Flat Lake Water Management Unit (WMU). Several flow-connectivity enhancement projects have been proposed by the Atchafalaya Basin Program's Technical Advisory Group, but none have been constructed. We collaborated with The Nature Conservancy and other agencies to obtain existing datasets and develop a 1D2D hydraulic model to examine whether proposed restoration projects improved lateral surface-water connectivity in the Flat Lake WMU. To do this, we employed a range of physical parameters (inundation extent, water depths, and rates of WSEL reduction) as potential indicators of improved connectivity with restoration. We ran simulations to examine two scenarios - a baseline scenario (S1) to examine current conditions (no restoration projects), and a full-implementation scenario (S2), where all restoration projects that could be examined at the model resolution were implemented. Potential indicators of improved lateral connectivity indicated that proposed projects may play an important role in improving water quality in the Flat Lake WMU. At the end of the constant-discharge portion of the run, average depths between S1 and S2 remained unchanged; however, depths and water levels were consistently lower for S2 during a drawdown. Volumetrically, up to 4.4 million m3 less water was in the Flat Lake system when projects were implemented. The results indicate that projects introduce nutrient-rich river water and improve flushing flows through backswamp areas. Our modeling approach may provide a cost-effective framework for examining the performance of proposed restoration projects along other highly-altered, low-gradient river systems.
Future Climate Impacts on Crop Water Demand and Groundwater Longevity in Agricultural Regions
NASA Astrophysics Data System (ADS)
Russo, T. A.; Sahoo, S.; Elliott, J. W.; Foster, I.
2016-12-01
Improving groundwater management practices under future drought conditions in agricultural regions requires three steps: 1) estimating the impacts of climate and drought on crop water demand, 2) projecting groundwater availability given climate and demand forcing, and 3) using this information to develop climate-smart policy and water use practices. We present an innovative combination of models to address the first two steps, and inform the third. Crop water demand was simulated using biophysical crop models forced by multiple climate models and climate scenarios, with one case simulating climate adaptation (e.g., modifying planting or harvest time) and another without adaptation. These scenarios were intended to represent a range of drought projections and farm management responses. Next, we used projected climate conditions and simulated water demand across the United States as inputs to a novel machine learning-based groundwater model. The model was applied to major agricultural regions relying on the High Plains and Mississippi Alluvial aquifer systems in the US. The groundwater model integrates input data preprocessed using singular spectrum analysis, mutual information, and a genetic algorithm with an artificial neural network model. Model calibration and test results indicate low errors over the 33-year model run, and strong correlations to groundwater levels in hundreds of wells across each aquifer. Model results include a range of projected groundwater level changes from the present to 2050 and, in some regions, identification and timeframe of aquifer depletion. These results quantify aquifer longevity under climate and crop scenarios, and provide decision makers with the data needed to compare scenarios of crop water demand, crop yield, and groundwater response, as they aim to balance water sustainability with food security.
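To make the final stage of such a pipeline concrete, here is a minimal sketch of fitting a small neural network to predict groundwater levels from climate and demand predictors. It uses scikit-learn's MLPRegressor on synthetic data; the feature choices are assumptions for illustration, not the study's actual inputs or preprocessing.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic predictors, e.g. precipitation, temperature, pumping demand.
X = rng.normal(size=(400, 3))
# Synthetic groundwater-level response with noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.2, size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))
```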
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olive, S.W.; Lamb, B.L.
This paper is an account of the process that evolved during acquisition of the license to operate the Terror Lake hydroelectric power project under the auspices of the Federal Energy Regulatory Commission (FERC). The Commission is responsible for granting these licenses under the Federal Power Act (16 U.S.C. 792 et seq.). This act provides, in part, that FERC may condition a license to protect the public interest. The public interest in these cases has come to include both instream and terrestrial values. The Terror River is located on Kodiak Island in Alaska. The river is within the Kodiak National Wildlife Refuge; it supports excellent runs of several species of Pacific Salmon which are both commercially important and a prime source of nutrition for the Kodiak brown bear. The river is also a prime resource for generating electric power. One major concern in the negotiations was the impact of land disturbance and management practices on brown bear habitat - i.e., protection of the brown bear. Maintenance of the bears' habitat is the main purpose of the Kodiak National Wildlife Refuge. But, like many other projects, resolving the instream flow issue was of major importance in the issuance of the FERC license. This paper discusses the fish and wildlife questions, but concentrates on instream uses and how protection of these uses was decided. With this as a focus, the paper explains the FERC process, gives a history of the Terror Lake Project, and, ultimately, makes recommendations for improved management of controversies within the context of the FERC licensing procedures. 65 references.
NASA Astrophysics Data System (ADS)
Hain, C.; Mecikalski, J. R.; Schultz, L. A.
2009-12-01
The Atmosphere-Land Exchange Inverse (ALEXI) model was developed as an auxiliary means for estimating surface fluxes over large regions, primarily using remote-sensing data. The model is unique in that no information regarding antecedent precipitation or moisture storage capacity is required - the surface moisture status is deduced from a radiometric temperature change signal. ALEXI uses the available water fraction (fAW) as a proxy for soil moisture conditions. Combining fAW with ALEXI's ability to provide valuable information about the partitioning of the surface energy budget, which can be dictated largely by soil moisture conditions, accommodates the retrieval of an average fAW from the surface to the rooting depth of the active vegetation. This approach has many advantages over traditional energy flux and soil moisture measurements (towers with limited range and large monetary/personnel costs) or approximation methods (parametrization of the relationship between available water and soil moisture) in that data are available both spatially and temporally over a large, non-homogeneous, sometimes densely vegetated area. Being satellite based, the model can be run anywhere thermal infrared satellite information is available. The current ALEXI climatology dates back to March 2000 and covers the continental U.S. Examples of projects underway using the ALEXI soil moisture retrieval tools include the Southern Florida Water Management Project; NASA's Project Nile, which proposes to acquire hydrological information for water management in the Nile River basin; and a USDA project to expand the ALEXI framework to include Europe and parts of northern Africa using data from the European geostationary satellites, specifically the Meteosat Second Generation (MSG) series.
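The available water fraction used above as a soil moisture proxy is conventionally defined relative to the wilting point and field capacity of the soil. A minimal sketch of that textbook conversion (the moisture values are illustrative, and this is the standard definition rather than ALEXI's internal retrieval):

```python
def available_water_fraction(theta, theta_wp, theta_fc):
    """Fraction of plant-available water: 0 at wilting point, 1 at field capacity."""
    faw = (theta - theta_wp) / (theta_fc - theta_wp)
    return min(max(faw, 0.0), 1.0)  # clamp to the physical range

# Illustrative volumetric soil moisture values (m^3/m^3).
print(available_water_fraction(theta=0.22, theta_wp=0.10, theta_fc=0.30))  # 0.6
```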
THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS
Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel
2010-01-01
Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-13
... applicant proposes to install two crossflow turbines at the project rather than two double suction pump turbines as previously envisioned. The applicant also proposes to modify project operation by running the... proposed amendment would not change the project's hydraulic capacity or possible electrical output but...
ERIC Educational Resources Information Center
Dalton, William Edward
Described is a project designed to make government lessons and economics more appealing to sixth-grade students by having them set up and run a model city. General preparation procedures and set-up of the project, specific lesson plans, additional activities, and project evaluation are examined. An actual 3-dimensional model city was set up on…
Good Vibrations: Positive Change through Social Music-Making
ERIC Educational Resources Information Center
Henley, Jennie; Caulfield, Laura S.; Wilson, David; Wilkinson, Dean J.
2012-01-01
Good Vibrations is a charity that runs gamelan projects with offenders in prison and on probation. A recent Birmingham City University study investigating the short-, medium- and long-term impact of the project found that participation in a Good Vibrations project acted as a catalyst for positive change. The research found that not only did…
ABA-Cloud: support for collaborative breath research
Elsayed, Ibrahim; Ludescher, Thomas; King, Julian; Ager, Clemens; Trosin, Michael; Senocak, Uygar; Brezany, Peter; Feilhauer, Thomas; Amann, Anton
2016-01-01
This paper introduces the advanced breath analysis (ABA) platform, an innovative scientific research platform for the entire breath research domain. Within the ABA project, we are investigating novel data management concepts and semantic web technologies to document breath analysis studies for the long run as well as to enable their full automatic reproducibility. We propose several concept taxonomies (a hierarchical order of terms from a glossary of terms), which can be seen as a first step toward the definition of conceptualized terms commonly used by the international community of breath researchers. They build the basis for the development of an ontology (a concept from computer science used for communication between machines and/or humans and representation and reuse of knowledge) dedicated to breath research. PMID:23619467
The Data Acquisition System of the Stockholm Educational Air Shower Array
NASA Astrophysics Data System (ADS)
Hofverberg, P.; Johansson, H.; Pearce, M.; Rydstrom, S.; Wikstrom, C.
2005-12-01
The Stockholm Educational Air Shower Array (SEASA) project is deploying an array of plastic scintillator detector stations on school roofs in the Stockholm area. Signals from GPS satellites are used to time-synchronise signals from the widely separated detector stations, allowing cosmic ray air showers to be identified and studied. A low-cost and highly scalable data acquisition system has been produced using embedded Linux processors which communicate station data to a central server running a MySQL database. Air shower data can be visualised in real time using a Java-applet client. It is also possible to query the database and manage detector stations from the client. In this paper, the design and performance of the system are described.
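The station-to-database path described above is simple to sketch. The following minimal example uses Python's built-in sqlite3 in place of the MySQL server for portability; the table layout and field names are assumptions for illustration, not the SEASA schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the central MySQL server
conn.execute(
    "CREATE TABLE events (station_id INTEGER, gps_time_ns INTEGER, pulse_height REAL)"
)

# A detector station reports a GPS-timestamped scintillator hit.
conn.execute("INSERT INTO events VALUES (?, ?, ?)", (3, 1133740800123456789, 41.7))
conn.commit()

# A client (such as the Java-applet viewer) queries recent events for display.
for row in conn.execute("SELECT * FROM events ORDER BY gps_time_ns DESC LIMIT 10"):
    print(row)
```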
Identification and evaluation of software measures
NASA Technical Reports Server (NTRS)
Card, D. N.
1981-01-01
A large scale, systematic procedure for identifying and evaluating measures that meaningfully characterize one or more elements of software development is described. The background of this research, the nature of the data involved, and the steps of the analytic procedure are discussed. An example of the application of this procedure to data from real software development projects is presented. As the term is used here, a measure is a count or numerical rating of the occurrence of some property. Examples of measures include lines of code, number of computer runs, person hours expended, and degree of use of top down design methodology. Measures appeal to the researcher and the manager as a potential means of defining, explaining, and predicting software development qualities, especially productivity and reliability.
Van Wynsberge, Simon; Andréfouët, Serge; Gaertner-Mazouni, Nabila; Remoissenet, Georges
2018-02-01
Despite actions to sustainably manage tropical Pacific Ocean reef fisheries, managers have faced failures and frustrations because of unpredicted mass mortality events triggered by climate variability. The consequences of these events for the long-term population dynamics of living resources need to be better understood for better management decisions. Here, we use a giant clam (Tridacna maxima) spatially explicit population model to compare the efficiency of several management strategies under various scenarios of natural mortality, including mass mortality due to climatic anomalies. The model was parameterized by in situ estimations of growth, mortality, and fishing effort, and was validated by historical and new in situ surveys of giant clam stocks in two French Polynesia lagoons. Projections over the long run (100 years) suggested that the best management strategy was a decrease of fishing pressure through quota implementation, regardless of the mortality regime considered. In contrast, increasing the minimum legal size of catch and closing areas to fishing were less efficient. When high mortality occurred due to climate variability, the efficiency of all management scenarios decreased markedly. Simulating El Niño-Southern Oscillation events by adding temporal autocorrelation in natural mortality rates increased the natural variability of stocks, and also decreased the efficiency of management. These results highlight the difficulties that managers in small Pacific islands can expect in the future in the face of global warming, climate anomalies and new mass mortalities. Copyright © 2017 Elsevier Inc. All rights reserved.
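The effect of temporally autocorrelated mortality on stock variability can be illustrated with a toy population update. A minimal sketch assuming a lag-1 autoregressive (AR(1)) mortality anomaly; all rates and parameters are illustrative, not the model's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(1)
years, phi = 100, 0.6          # horizon and lag-1 autocorrelation of mortality
base_m, recruit = 0.15, 120.0  # illustrative natural mortality and recruitment

stock, anomaly = 1000.0, 0.0
trajectory = []
for _ in range(years):
    # The AR(1) anomaly mimics multi-year climate-driven mortality events.
    anomaly = phi * anomaly + rng.normal(scale=0.05)
    m = min(max(base_m + anomaly, 0.0), 1.0)
    stock = stock * (1.0 - m) + recruit
    trajectory.append(stock)

print("mean stock:", np.mean(trajectory),
      "CV:", np.std(trajectory) / np.mean(trajectory))
```

Increasing `phi` lengthens bad-mortality spells and visibly raises the coefficient of variation of the stock, which is the qualitative effect the abstract reports.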
NASA SPoRT Modeling and Data Assimilation Research and Transition Activities Using WRF, LIS and GSI
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Blankenship, Clay B.; Zavodsky, Bradley T.; Srikishen, Jayanthi; Berndt, Emily B.
2014-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) program has numerous modeling and data assimilation (DA) activities in which the WRF model is a key component. SPoRT generates real-time research satellite products from the MODIS and VIIRS instruments, making the data available to NOAA/NWS partners running the WRF/EMS, including: (1) a 2-km northwestern-hemispheric SST composite, (2) daily MODIS green vegetation fraction (GVF) over CONUS, and (3) NASA Land Information System (LIS) runs of the Noah LSM over the southeastern CONUS. Each of these datasets has been utilized by specific SPoRT partners in local EMS model runs, with select offices evaluating the impacts using a set of automated scripts developed by SPoRT that manage data acquisition and run the NCAR Model Evaluation Tools verification package. SPoRT is engaged in DA research with the Gridpoint Statistical Interpolation (GSI) and the Ensemble Kalman Filter in LIS for soil moisture DA. Ongoing DA projects using GSI include comparing the impacts of assimilating Atmospheric Infrared Sounder (AIRS) radiances versus retrieved profiles, and an analysis of extra-tropical cyclones with intense non-convective winds. As part of its Early Adopter activities for the NASA Soil Moisture Active Passive (SMAP) mission, SPoRT is conducting bias correction and soil moisture DA within LIS to improve simulations using the NASA Unified-WRF (NU-WRF) for both the European Space Agency's Soil Moisture Ocean Salinity and upcoming SMAP mission data. SPoRT has also incorporated real-time global GVF data into LIS and WRF from the VIIRS product being developed by NOAA/NESDIS. This poster highlights the research and transition activities SPoRT conducts using WRF, NU-WRF, EMS, LIS, and GSI.
Growth and Yield Estimation for Loblolly Pine in the West Gulf
Paul A. Murphy; Herbert S. Sternitzke
1979-01-01
An equation system is developed to estimate current yield, projected basal area, and projected volume for merchantable natural stands on a per-acre basis. These estimates indicate yields that can be expected from woods-run conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eggeman, Tim; O'Neill, Brian
2016-08-17
ZeaChem Inc. and US DOE successfully demonstrated the ZeaChem process for producing sugars and ethanol from high-impact biomass feedstocks. The project was executed over a 5-year period under a $31.25 million cooperative agreement (80:20 Federal:ZeaChem cost share). The project was managed by dividing it into three budget periods. Activities during Budget Period 1 were limited to planning, permitting, and other pre-construction planning. Budget Period 2 activities included engineering, procurement, construction, commissioning, start-up and initial operations through the Independent Engineer Test Runs. The scope of construction was limited to the Chem Frac and Hydrogenolysis units, as the Core Facility was already in place. Construction was complete in December 2012, and the first cellulosic ethanol was produced in February 2013. Additional operational test runs were conducted during Budget Period 3 (completed June 2015) using hybrid poplar, corn stover, and wheat straw feedstocks, resulting in the production of cellulosic ethanol and various other biorefinery intermediates. The research adds to the understanding of the Chem Frac and Hydrogenolysis technologies in that the technical performance of each unit was measured, and the resulting data and operational experience can be used as the basis for engineering designs, thus mitigating risks for deployment in future commercial facilities. The Chem Frac unit was initially designed to be operated as two-stage dilute acid hydrolysis, with first-stage conditions selected to remove the hemicellulose fraction of the feedstock and second-stage conditions selected to remove the cellulose fraction. While the Chem Frac unit met or exceeded the design capacity of 10 ton(dry)/day, the technical effectiveness of the Chem Frac unit was below expectations in its initial two-stage dilute acid configuration. The sugar yields were low, the sugars were dilute, and the sugars had poor fermentability caused by excessive inhibitors from wood breakdown products, resulting in a non-viable process from an economic point of view. Later runs with the Chem Frac unit switched to a configuration that used dilute acid pretreatment followed by enzymatic hydrolysis. This change improved yield, increased sugar concentrations, and improved fermentability of sugars. The Hydrogenolysis unit met or exceeded all expectations with respect to unit capacity, technical performance, and economic performance. The US DOE funds for the project were provided through the American Recovery and Reinvestment Act of 2009. In addition to the scientific/technical merit of the project, this project benefited the public through the creation of approximately 75 onsite direct construction-related jobs, 25 direct on-going operations-related jobs, plus numerous indirect jobs, and thus was well aligned with the goals of the American Recovery and Reinvestment Act of 2009.
EFFECTS OF FOREFOOT RUNNING ON CHRONIC EXERTIONAL COMPARTMENT SYNDROME: A CASE SERIES
Gregory, Robert; Alitz, Curtis; Gerber, J. Parry
2011-01-01
Introduction: Chronic exertional compartment syndrome (CECS) is a condition that occurs almost exclusively with running whereby exercise increases intramuscular pressure compromising circulation, prohibiting muscular function, and causing pain in the lower leg. Currently, a lack of evidence exists for the effective conservative management of CECS. Altering running mechanics by adopting forefoot running as opposed to heel striking may assist in the treatment of CECS, specifically with anterior compartment symptoms. Case Description: The purpose of this case series is to describe the outcomes for subjects with CECS through a systematic conservative treatment model focused on forefoot running. Subject one was a 21 y/o female with a 4 year history of CECS and subject two was a 21 y/o male, 7 months status-post two-compartment right leg fasciotomy with a return of symptoms and a new onset of symptoms on the contralateral side. Outcome: Both subjects modified their running technique over a period of six weeks. Kinematic and kinetic analysis revealed increased step rate while step length, impulse, and peak vertical ground reaction forces decreased. In addition, leg intracompartmental pressures decreased from pre-training to post-training. Within 6 weeks of intervention subjects increased their running distance and speed absent of symptoms of CECS. Follow-up questionnaires were completed by the subjects at 7 months following intervention; subject one reported running distances up to 12.87 km pain-free and subject two reported running 6.44 km pain-free consistently 3 times a week. Discussion: This case series describes a potentially beneficial conservative management approach to CECS in the form of forefoot running instruction. Further research in this area is warranted to further explore the benefits of adopting a forefoot running technique for CECS as well as other musculoskeletal overuse complaints. PMID:22163093
A new approach to process control using Instability Index
NASA Astrophysics Data System (ADS)
Weintraub, Jeffrey; Warrick, Scott
2016-03-01
The merits of a robust Statistical Process Control (SPC) methodology have long been established. In response to the numerous SPC rule combinations, processes, and the high cost of containment, the Instability Index (ISTAB) is presented as a tool for managing these complexities. ISTAB focuses limited resources on key issues and provides a window into the stability of manufacturing operations. ISTAB takes advantage of the statistical nature of processes by comparing the observed average run length (OARL) to the expected average run length (ARL), resulting in a gap value called the ISTAB index. The ISTAB index has three characteristic behaviors that are indicative of defects in an SPC instance. Case 1: the observed average run length is excessively long relative to expectation; ISTAB > 0 indicates the possibility that the limits are too wide. Case 2: the observed average run length is consistent with expectation; ISTAB near zero indicates that the process is stable. Case 3: the observed average run length is inordinately short relative to expectation; ISTAB < 0 indicates that the limits are too tight, the process is unstable, or both. The probability distribution of run length is the basis for establishing an ARL. We demonstrate that the geometric distribution is a good approximation to run length across a wide variety of rule sets. Excessively long run lengths are associated with one kind of defect in an SPC instance; inordinately short run lengths are associated with another. A sampling distribution is introduced as a way to quantify excessively long and inordinately short observed run lengths. This paper provides detailed guidance for action limits on these run lengths. ISTAB, as a statistical method of review, facilitates automated instability detection. This paper proposes a management system based on ISTAB as an enhancement to more traditional SPC approaches.
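Under the geometric approximation named above, a rule that signals with probability p per sample has expected average run length ARL = 1/p, and the index compares the observed average run length against it. A minimal sketch of that comparison; the normalization by ARL is an illustrative choice, and the paper's exact index definition may differ:

```python
def istab_index(run_lengths, p_signal):
    """Gap between observed and expected average run length, normalized.

    Positive values suggest limits that are too wide (runs too long);
    negative values suggest instability or limits that are too tight.
    """
    arl = 1.0 / p_signal                      # geometric-distribution expectation
    oarl = sum(run_lengths) / len(run_lengths)
    return (oarl - arl) / arl                 # assumed normalization, for illustration

# Illustrative: a Shewhart 3-sigma rule signals with p ~ 0.0027 per point,
# so ARL ~ 370; these observed run lengths average 420, giving ISTAB > 0.
print(istab_index([500, 150, 420, 610], p_signal=0.0027))
```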
Effects of forefoot running on chronic exertional compartment syndrome: a case series.
Diebal, Angela R; Gregory, Robert; Alitz, Curtis; Gerber, J Parry
2011-12-01
Chronic exertional compartment syndrome (CECS) is a condition that occurs almost exclusively with running whereby exercise increases intramuscular pressure compromising circulation, prohibiting muscular function, and causing pain in the lower leg. Currently, a lack of evidence exists for the effective conservative management of CECS. Altering running mechanics by adopting forefoot running as opposed to heel striking may assist in the treatment of CECS, specifically with anterior compartment symptoms. The purpose of this case series is to describe the outcomes for subjects with CECS through a systematic conservative treatment model focused on forefoot running. Subject one was a 21 y/o female with a 4 year history of CECS and subject two was a 21 y/o male, 7 months status-post two-compartment right leg fasciotomy with a return of symptoms and a new onset of symptoms on the contralateral side. Both subjects modified their running technique over a period of six weeks. Kinematic and kinetic analysis revealed increased step rate while step length, impulse, and peak vertical ground reaction forces decreased. In addition, leg intracompartmental pressures decreased from pre-training to post-training. Within 6 weeks of intervention subjects increased their running distance and speed absent of symptoms of CECS. Follow-up questionnaires were completed by the subjects at 7 months following intervention; subject one reported running distances up to 12.87 km pain-free and subject two reported running 6.44 km pain-free consistently 3 times a week. This case series describes a potentially beneficial conservative management approach to CECS in the form of forefoot running instruction. Further research in this area is warranted to further explore the benefits of adopting a forefoot running technique for CECS as well as other musculoskeletal overuse complaints.
Characteristics of process oils from HTI coal/plastics co-liquefaction runs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robbins, G.A.; Brandes, S.D.; Winschel, R.A.
1995-12-31
The objective of this project is to provide timely analytical support to DOE's liquefaction development effort. Specific objectives of the work reported here are presented. During a few operating periods of Run POC-2, HTI co-liquefied mixed plastics with coal, and tire rubber with coal. Although steady-state operation was not achieved during these brief test periods, the results indicated that a liquefaction plant could operate with these waste materials as feedstocks. CONSOL analyzed 65 process stream samples from the coal-only and coal/waste portions of the run. Some results obtained from characterization of samples from the Run POC-2 coal/plastics operation are presented.
NASA Astrophysics Data System (ADS)
Ludwig, Ralf
2010-05-01
Adapting to the impacts of climate change is certainly one of the major challenges in water resources management over the next decades. Adaptation to climate change risks is most crucial in this domain, since the projected increase in mean air temperature, in combination with an expected increase in the temporal variability of precipitation patterns, will put pressure on current water availability, allocation and management practices. The latter often involve the utilization of valuable infrastructure, such as dams, reservoirs and water intakes, for which adaptation options must be developed over long-term and often dynamic planning horizons. Research to establish novel methodologies for improved adaptation to climate change is thus very important and only beginning to emerge in regional watershed management. The presented project Q-BIC³, funded by the Bavarian Ministry for the Environment and the Québec Ministère du Développement économique, de l'Innovation et de l'Exportation, aims to develop and apply a newly designed spectrum of tools to support the improved assessment of adaptation options to climate change in regional watershed management. It addresses in particular selected study sites in Québec and Bavaria. The following key issues have been prioritized within Q-BIC³: i) the definition of potential adaptation options in the context of climate change for pre-targeted water management key issues, using a subsequent and logical chain of modelling tools (climate, hydrological and water management modelling tools); ii) the definition of an approach that accounts for hydrological projection uncertainties in the search for potential adaptation options in the context of climate change; iii) the investigation of the required complexity in hydrological models to estimate climate change impacts and to develop specific adaptation options for Québec and Bavarian watersheds; iv) the development and prototyping of a regionally transferable and modular modelling system for integrated watershed management under climate change conditions. The study sites under investigation, namely the Haut-Saint-François and Gatineau watersheds in Québec and the Isar and Regnitz catchments in Bavaria, are under heavy anthropogenic use. Intense dam and reservoir operations and even water transfer systems are in place to satisfy multi-purpose demands on available water resources, imposing extreme modifications to the natural flow regimes. In the first phase of the project, climatic forcing stemming from an ensemble of selected GCM and RCM runs is applied to a variety of hydrological models of different complexity. The derived projections of future hydrological conditions serve to investigate whether current operation rules and/or existing infrastructure need to be adapted to a changing environment. First findings demonstrate the large uncertainties associated with the model chain outputs, but also indicate that adaptation is indispensable to meet the challenges of rapidly changing man-environment systems.
Volunteer Computing Experience with ATLAS@Home
NASA Astrophysics Data System (ADS)
Adam-Bourdarios, C.; Bianchi, R.; Cameron, D.; Filipčič, A.; Isacchini, G.; Lançon, E.; Wu, W.;
2017-10-01
ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers' resources make up a sizeable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease deployment on, for example, university clusters; using multiple cores inside one task to reduce the memory requirements; and running different types of workload, such as event generation. In addition to technical details, the success of ATLAS@Home as an outreach tool is evaluated.
2011-01-01
Background Project management is widely used to deliver projects on time, within budget and of defined quality. However, there is little published information describing its use in managing health and medical research projects. We used project management in the Alcohol and Pregnancy Project (2006-2008) http://www.ichr.uwa.edu.au/alcoholandpregnancy and in this paper report researchers' opinions on project management and whether it made a difference to the project. Methods A national interdisciplinary group of 20 researchers, one of whom was the project manager, formed the Steering Committee for the project. We used project management to ensure project outputs and outcomes were achieved and all aspects of the project were planned, implemented, monitored and controlled. Sixteen of the researchers were asked to complete a self administered questionnaire for a post-project review. Results The project was delivered according to the project protocol within the allocated budget and time frame. Fifteen researchers (93.8%) completed a questionnaire. They reported that project management increased the effectiveness of the project, communication, teamwork, and application of the interdisciplinary group of researchers' expertise. They would recommend this type of project management for future projects. Conclusions Our post-project review showed that researchers comprehensively endorsed project management in the Alcohol and Pregnancy Project and agreed that project management had contributed substantially to the research. In future, we will project manage new projects and conduct post-project reviews. The results will be used to encourage continuous learning and continuous improvement of project management, and provide greater transparency and accountability of health and medical research. The use of project management can benefit both management and scientific outcomes of health and medical research projects. PMID:21635721
Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*
Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; ...
2014-02-24
The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO 2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
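In the same spirit, a toy emulator can regress temperature anomaly on the recent CO2 trajectory. A minimal sketch using ordinary least squares on lagged log-concentrations, fit to synthetic data; the real emulator's functional form, training runs, and fitting procedure are more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(2)
years = 150
co2 = 280.0 * np.exp(0.004 * np.arange(years))  # synthetic forcing trajectory
temp = 3.0 * np.log(co2 / 280.0) + rng.normal(scale=0.1, size=years)

# Design matrix: current and lagged log-CO2 capture delayed response.
lags = 3
X = np.column_stack(
    [np.log(np.roll(co2, k)[lags:] / 280.0) for k in range(lags)]
)
y = temp[lags:]
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)
print("fitted coefficients:", coef)
```

Once fit, evaluating the regression for a new CO2 path is effectively instantaneous, which is the property the abstract highlights.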
Manan, A; Ibrahim, M
2003-01-01
In this paper we explain the current condition of the Bau-Bau River, examine community participation for management of the river system, and consider options for improving the institutional capacity for a community-based approach. This assessment is based on a research project with the following objectives: (1) analyse the biophysical and socio-economic condition of the river as a basis for future planning; (2) identify current activities which contribute waste or pollution to the river; (3) assess the status and level of pollution in the river; (4) analyse community participation related to all stages of river management; and (5) identify future river management needs and opportunities. Due to the increasing population in Bau-Bau city, considerable new land is required for housing, roads, agriculture, social facilities, etc. Development in the city and elsewhere has increased run-off and erosion, as well as sedimentation in the river. In addition, household activities are generating more solid and domestic waste that causes organic pollution in the river. The research results show that the water quality in the upper river system is still good, whilst the quality of water in the vicinity of Bau-Bau city, from the mid-point of the watershed to the estuary, is not good, being contaminated with heavy metals (Cd and Pb) and organic pollutants. However, the levels of those pollutants are still below regulatory standards. The main reasons for pollution in the river are mainly lack of management for both liquid and solid wastes, as well as lack of community participation in river management. The government of Bau-Bau city and the community are developing a participatory approach for planning to restore and conserve the Bau-Bau River as well as the entire catchment. The activities of this project are: (1) forming institutional arrangements to support river conservation; (2) implementing extension initiatives to empower the community; (3) identifying a specific location to establish an urban forest; (4) implementing demonstration projects for liquid system management; (5) promoting coordination amongst the different organisations and agencies in the catchment; (6) improving domestic waste transportation; and (7) recycling waste to create compost material to become an income source for the community.
An Approach for Implementation of Project Management Information Systems
NASA Astrophysics Data System (ADS)
Běrziša, Solvita; Grabis, Jānis
Project management is governed by project management methodologies, standards, and other regulatory requirements. This chapter proposes an approach for implementing and configuring project management information systems according to requirements defined by these methodologies. The approach uses a project management specification framework to describe project management methodologies in a standardized manner. This specification is used to automatically configure the project management information system by applying appropriate transformation mechanisms. Development of the standardized framework is based on an analysis of typical project management concepts and processes and on existing XML-based representations of project management. A demonstration example of a project management information system's configuration is provided.
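A minimal sketch of the spec-to-configuration idea: an invented, much-simplified XML methodology specification is transformed into a flat configuration dictionary. The element names here are assumptions for illustration, not the chapter's actual framework.

```python
import xml.etree.ElementTree as ET

# A minimal, invented methodology specification; real frameworks are far richer.
spec = """
<methodology name="demo">
  <phase name="initiation"><artifact>charter</artifact></phase>
  <phase name="planning"><artifact>schedule</artifact><artifact>budget</artifact></phase>
</methodology>
"""

root = ET.fromstring(spec)
# Transform the specification into a flat configuration for a PMIS.
config = {
    phase.get("name"): [a.text for a in phase.findall("artifact")]
    for phase in root.findall("phase")
}
print(config)
```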
A Simple Approach to Account for Climate Model Interdependence in Multi-Model Ensembles
NASA Astrophysics Data System (ADS)
Herger, N.; Abramowitz, G.; Angelil, O. M.; Knutti, R.; Sanderson, B.
2016-12-01
Multi-model ensembles are an indispensable tool for future climate projection and its uncertainty quantification. Ensembles containing multiple climate models generally have increased skill, consistency and reliability. Due to the lack of agreed-on alternatives, most scientists use the equally-weighted multi-model mean as they subscribe to model democracy ("one model, one vote"). Different research groups are known to share sections of code, parameterizations in their model, literature, or even whole model components. Therefore, individual model runs do not represent truly independent estimates. Ignoring this dependence structure might lead to a false model consensus and wrong estimation of uncertainty and of the effective number of independent models. Here, we present a way to partially address this problem by selecting a subset of CMIP5 model runs so that its climatological mean minimizes the RMSE compared to a given observation product. Due to the cancelling out of errors, regional biases in the ensemble mean are reduced significantly. Using a model-as-truth experiment we demonstrate that those regional biases persist into the future and we are not fitting noise, thus providing improved observationally-constrained projections of the 21st century. The optimally selected ensemble shows significantly higher global mean surface temperature projections than the original ensemble, where all the model runs are considered. Moreover, the spread is decreased well beyond that expected from the decreased ensemble size. Several previous studies have recommended an ensemble selection approach based on performance ranking of the model runs. Here, we show that this approach can perform even worse than randomly selecting ensemble members and can thus be harmful. We suggest that accounting for interdependence in the ensemble selection process is a necessary step for robust projections for use in impact assessments, adaptation and mitigation of climate change.
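One simple way to realize the subset-selection idea is greedy forward selection on the RMSE of the subset mean. A minimal sketch on synthetic fields; the study's actual selection procedure may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(3)
n_models, n_grid = 20, 500
obs = rng.normal(size=n_grid)
# Synthetic model climatologies: truth plus independent model errors.
models = obs + rng.normal(scale=0.5, size=(n_models, n_grid))

def rmse(subset):
    """RMSE of the subset-mean climatology against the observations."""
    return float(np.sqrt(np.mean((models[subset].mean(axis=0) - obs) ** 2)))

selected = []
remaining = list(range(n_models))
while remaining:
    best = min(remaining, key=lambda m: rmse(selected + [m]))
    if selected and rmse(selected + [best]) >= rmse(selected):
        break  # stop once adding a member no longer reduces the error
    selected.append(best)
    remaining.remove(best)

print("selected members:", selected, "subset RMSE:", rmse(selected))
```

Because individual errors partially cancel in the mean, the selected subset typically beats both the full-ensemble mean and subsets chosen by ranking members individually.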
How well do the GCMs replicate the historical precipitation variability in the Colorado River Basin?
NASA Astrophysics Data System (ADS)
Guentchev, G.; Barsugli, J. J.; Eischeid, J.; Raff, D. A.; Brekke, L.
2009-12-01
Observed precipitation variability measures are compared to measures obtained using the World Climate Research Programme (WCRP) Coupled Model Intercomparison Project (CMIP3) General Circulation Model (GCM) data from 36 model projections downscaled by Brekke et al. (2007) and 30 model projections downscaled by Jon Eischeid. Three groups of variability measures are considered in this historical-period (1951-1999) comparison: a) basic variability measures, such as the standard deviation and interdecadal standard deviation; b) exceedance probability values, i.e., the 10% (extreme wet years) and 90% (extreme dry years) exceedance probability values of series of n-year running mean annual amounts, where n = 1 to 12, and the 10% exceedance probability values of annual maximum monthly precipitation (extreme wet months); and c) runs variability measures, e.g., the frequency of negative and positive runs of annual precipitation amounts and the total number of negative and positive runs. Two gridded precipitation data sets produced from observations are used: the Maurer et al. (2002) data set and the Daly et al. (1994) Precipitation Regression on Independent Slopes Method (PRISM) data set. The data consist of monthly grid-point precipitation averaged on a United States Geological Survey (USGS) hydrological sub-region scale. The statistical significance of the model-minus-observed differences is assessed using a block bootstrapping approach. The analyses were performed on annual, seasonal and monthly scales. The results indicate that the interdecadal standard deviation is underestimated, in general, on all time scales by the downscaled model data. The differences are statistically significant at the 0.05 significance level for several Lower Colorado Basin sub-regions on annual and seasonal scales, and for several sub-regions located mostly in the Upper Colorado River Basin for the months of March, June, July and November. Although the models simulate drier extreme wet years, wetter extreme dry years and drier extreme wet months for the Upper Colorado basin, the differences are mostly non-significant. Exceptions are the results for the extreme wet years for n = 3 for sub-region White-Yampa, for n = 6, 7, and 8 for sub-region Upper Colorado-Dolores, and for the extreme dry years for n = 11 for sub-region Great Divide-Upper Green. None of the results for the sub-regions in the Lower Colorado Basin were significant. For most of the Upper Colorado sub-regions the models simulate a significantly lower frequency of negative and positive 4-6 year runs, while for several sub-regions a significantly higher frequency of 2-year negative runs is evident in the model versus Maurer data comparisons. The model projections versus PRISM data comparison reveals similar results for the negative runs, while for the positive runs the results indicate that the models simulate a higher frequency of 2-6 year runs. The results for the Lower Colorado basin sub-regions are similar, in general, to those for the Upper Colorado sub-regions.
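Two of the variability measures named above are easy to make concrete: the 10% exceedance value of n-year running means, and counts of negative runs (consecutive below-median years). A minimal sketch on a synthetic annual precipitation series; the series, the choice n = 5, and the below-median run definition are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
precip = rng.gamma(shape=4.0, scale=100.0, size=49)  # synthetic annual totals, 1951-1999

# 10% exceedance value of 5-year running means (extreme wet pentads).
n = 5
running = np.convolve(precip, np.ones(n) / n, mode="valid")
wet_10pct = float(np.percentile(running, 90))  # exceeded by 10% of pentads

# Frequency of negative runs: consecutive years below the median.
below = precip < np.median(precip)
run_lengths, count = [], 0
for flag in below:
    if flag:
        count += 1
    elif count:
        run_lengths.append(count)
        count = 0
if count:
    run_lengths.append(count)

print("10% exceedance of 5-yr means:", round(wet_10pct, 1))
print("negative runs by length:", {k: run_lengths.count(k) for k in set(run_lengths)})
```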
Great Plains Project: at worst a $1.7 billion squeeze
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maize, K.
1983-04-11
On January 29, 1982, seeking a loan guarantee for its coal-to-gas synfuels project, Great Plains Gasification Associates told the Department of Energy that they expected to reap $1.2 billion in net income to the partnership during the first 10 years of the venture. On March 31, 1983, Great Plains treasurer Rodney Boulanger had a different projection: a horrific loss of $773 million in the first decade. The Great Plains project, with construction 50% complete, is being built near Beulah, ND. The project has a design capacity of 137.5 million cubic feet a day of SNG. Great Plains' analysis assumes that the plant will operate at 70% of design capacity in 1985, 77% in 1986, 84% in 1987 and 91% thereafter. The company projects the total project cost at $2.1 billion, consisting of plant costs of $1.9 billion and coal mine costs of $156 million. In originally projecting a cumulative net income of better than $1 billion, the partners anticipated running losses in only three of the first 10 years, and cash distributions from the project of $893 million during the first decade. Under the new projections, even in the best case, the first four years would show losses and there would be no distribution to the partners. In the worst case, the project would run in the red every year for the first 10 years.
Margrove, K L; Pope, J; Mark, G M
2013-12-01
This study addresses the views and experiences of artists who run participatory arts and health courses for those with mental health or social problems. Qualitative research with 11 artists from three different organizations providing participatory arts and health courses. Semi-structured in-depth interviews were conducted. Participants provided oral contributions that were transcribed and then thematically analysed by the authors. Participants described perceived positive benefits of participatory arts and health courses, including developing friendships, self-expression and creativity, a non-judgmental environment, along with key issues arising, including managing challenging behaviours and provision of follow-on options. Results indicate that improvements in well-being can be identified by artists during courses, the activity can help develop friendships, courses can be well managed in community settings, and benefits of follow-on activities should be investigated in future. Copyright © 2013 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
The Project Manager Who Saved His Country
NASA Technical Reports Server (NTRS)
Baniszewski, John
2008-01-01
George Meade defeated Robert E. Lee, one of the greatest military leaders of all time. How did he do it? By using the skills he had learned as a project manager and outperforming Lee in all aspects of project management. Most project managers are familiar with the Project Management Institute's "Guide to the Project Management Body of Knowledge" (PMBOK), which identifies the skills and knowledge crucial to successful project management. Project managers need to make sure that all the elements of a project work together. They must develop and execute plans and coordinate changes to those plans. A project manager must define the scope of the work, break it into manageable pieces, verify and control what work is being done, and make sure that the work being done is essential to the project. Every project manager knows the challenges of schedule and the value of schedule slack. Project managers must get the resources they need and use them effectively. Project managers get the people they need and use their talents to achieve mission success. Projects generate huge amounts of information. A key to project success is getting sufficient and accurate information to the people who need it when they need it. Project managers must identify and quantify the risks that jeopardize project success and make plans for dealing with them. Studying Meade and Lee's performances at Gettysburg can help modern project managers appreciate, develop, and use the skills they need to be good project managers. The circumstances may be different, but the basic principles are the same. This dramatic event in American history shows how the skills of project management can be used in almost any situation. Former project manager George Meade used those skills to change the tide of the Civil War.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-10
... calibration runs. Record mechanical-displacement prover, master meter, or tank prover proof runs. Record... displacement prover & tank prover calibration reports. 1202(l)(2) Copy & submit royalty tank 45 minutes 2...
Laadan, Oren; Nieh, Jason; Phung, Dan
2012-10-02
Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.
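The claimed method amounts to checkpointing process and network state and replaying both on restart. A minimal sketch of the bookkeeping only; the record fields, paths, and print statements are assumptions for illustration, not the patent's data structures:

```python
from dataclasses import dataclass, field

@dataclass
class NetworkState:
    """Connection endpoints saved at suspend time so they can be recreated."""
    src: str
    dst: str
    protocol: str = "tcp"

@dataclass
class Checkpoint:
    """Per-process state captured inside the virtualized OS environment."""
    process_id: int
    image_path: str                      # saved process image (illustrative)
    connections: list = field(default_factory=list)

def suspend(pid, peers):
    conns = [NetworkState(src=f"node0:{pid}", dst=p) for p in peers]
    return Checkpoint(process_id=pid, image_path=f"/ckpt/{pid}.img", connections=conns)

def restart(ckpt):
    # Recreate connections first, then resume the process from its image.
    for c in ckpt.connections:
        print(f"recreate {c.protocol} {c.src} -> {c.dst}")
    print(f"resume pid {ckpt.process_id} from {ckpt.image_path}")

restart(suspend(42, ["node1:7000", "node2:7000"]))
```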
Feechan, Angela; Kocsis, Marianna; Riaz, Summaira; Zhang, Wei; Gadoury, David M; Walker, M Andrew; Dry, Ian B; Reisch, Bruce; Cadle-Davidson, Lance
2015-08-01
The Toll/interleukin-1 receptor nucleotide-binding site leucine-rich repeat gene, "resistance to Uncinula necator 1" (RUN1), from Vitis rotundifolia was recently identified and confirmed to confer resistance to the grapevine powdery mildew fungus Erysiphe necator (syn. U. necator) in transgenic V. vinifera cultivars. However, sporulating powdery mildew colonies and cleistothecia of the heterothallic pathogen have been found on introgression lines containing the RUN1 locus growing in New York (NY). Two E. necator isolates collected from RUN1 vines were designated NY1-131 and NY1-137 and were used in this study to inform a strategy for durable RUN1 deployment. In order to achieve this, fitness parameters of NY1-131 and NY1-137 were quantified relative to powdery mildew isolates collected from V. rotundifolia and V. vinifera on vines containing alleles of the powdery mildew resistance genes RUN1, RUN2, or REN2. The results clearly demonstrate the race specificity of RUN1, RUN2, and REN2 resistance alleles, all of which exhibit programmed cell death (PCD)-mediated resistance. The NY1 isolates investigated were found to have an intermediate virulence on RUN1 vines, although this may be allele specific, while the Musc4 isolate collected from V. rotundifolia was virulent on all RUN1 vines. Another powdery mildew resistance locus, RUN2, was previously mapped in different V. rotundifolia genotypes, and two alleles (RUN2.1 and RUN2.2) were identified. The RUN2.1 allele was found to provide PCD-mediated resistance to both an NY1 isolate and Musc4. Importantly, REN2 vines were resistant to the NY1 isolates and RUN1REN2 vines combining both genes displayed additional resistance. Based on these results, RUN1-mediated resistance in grapevine may be enhanced by pyramiding with RUN2.1 or REN2; however, naturally occurring isolates in North America display some virulence on vines with these resistance genes. The characterization of additional resistance sources is needed to identify resistance gene combinations that will further enhance durability. For the resistance gene combinations currently available, we recommend using complementary management strategies, including fungicide application, to reduce populations of virulent isolates.
Guidelines for Project Management
NASA Technical Reports Server (NTRS)
Ben-Arieh, David
2001-01-01
Project management is an important part of the professional activities at Kennedy Space Center (KSC). Project management is the means by which many of the operations at KSC take shape. Moreover, projects at KSC are implemented in a variety of ways in different organizations. The official guidelines for project management are provided by NASA headquarters and are quite general. The project reported herein deals with developing practical and detailed project management guidelines in support of the project managers. This report summarizes the current project management effort in the Process Management Division and presents a new modeling approach of project management developed by the author. The report also presents the Project Management Guidelines developed during the summer.
76 FR 53414 - Pacific Fishery Management Council; Public Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-26
... Trailing Actions 7. Consider Inseason Adjustments--Part I 8. Emerging Issues Under Trawl Rationalization... Run Chinook Management Issues 2. 2011 Methodology Review I. Pacific Halibut Management 1. 2012 Pacific...
2012-03-01
...Final Report requirement. The approved Statement of Work proposed the following timeline (Table 1): Table 1. Timeline for... prosthesis designs (Figure 1) were tested for this project including the 1E90 Sprinter (OttoBock Inc.), Flex-Run (Ossur), Cheetah® (Ossur) and Nitro
78 FR 3027 - Notice of Temporary Closures of Public Lands in La Paz County, AZ
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-15
... (CRIT) Reservation, the closed area runs east along Shea Road, then east into Osborne Wash on the Parker-Swansea Road to the Central Arizona Project (CAP) Canal, then north on the west side of the CAP Canal, crossing the canal on the county-maintained road, running northeast into Mineral Wash Canyon, then...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-24
... Supply Creek Hydroelectric Project. f. Location: The proposed Water Supply Creek Hydroelectric Project will be located on Water Supply Creek, near the town of Hoonah on Chichagof Island, Alaska, affecting T... proposed run-of-river Water Supply Creek Hydroelectric Project will consist of: (1) A proposed 8-foot- high...
Alternative Fuels Data Center: Installing New E85 Equipment
"milk run"). Hiring a Project Contractor In most cases, a fleet operator hires a project contractor to alter the onsite fueling system. This is often done through a bid process, especially if it is a fueling site operated by a government entity. The contractor is responsible for project oversight
The Maui's Dolphin Challenge: Lessons from a School-Based Litter Reduction Project
ERIC Educational Resources Information Center
Townrow, Carly S.; Laurence, Nick; Blythe, Charlotte; Long, Jenny; Harré, Niki
2016-01-01
The Maui's Dolphin Challenge was a litter reduction project that was run twice at a secondary school in Aotearoa New Zealand. The project drew on a theoretical framework encompassing four psycho-social principles: values, embodied learning, efficacy, and perceived social norms. It challenged students to reduce the litter at the school by offering…
Ray, Sumantra; Laur, Celia; Douglas, Pauline; Rajput-Ray, Minha; van der Es, Mike; Redmond, Jean; Eden, Timothy; Sayegh, Marietta; Minns, Laura; Griffin, Kate; McMillan, Colin; Adiamah, Alfred; Gillam, Stephen; Gandy, Joan
2014-05-29
One in four adults is estimated to be at medium to high risk of malnutrition when screened using the 'Malnutrition Universal Screening Tool' upon admission to hospital in the United Kingdom. The Need for Nutrition Education/Education Programme (NNEdPro) Group was developed to address this issue, and the Nutrition Education and Leadership for Improved Clinical Outcomes (NELICO) is a project within this group. The objective of NELICO was to assess whether an intensive training intervention, combining clinical and public health nutrition with organisational management and leadership strategies, could equip junior doctors to contribute to improvement in nutrition awareness among healthcare professionals in the National Health Service in England. Three junior doctors were self-selected from the original NNEdPro Group training. Each junior doctor recruited three additional team members to attend an intensive training weekend incorporating nutrition, change management and leadership. This equipped them to run nutrition awareness weeks in their respective hospitals. Knowledge, attitudes and practices were evaluated at baseline as well as one and four months post-training as a quality assurance measure. The number and type of educational events held, pre-awareness-week Online Hospital Survey results, attendance and qualitative feedback from training sessions, the effectiveness of dissemination methods such as awareness stalls, Hospital Nutrition Attitude Survey results and overall feedback were also used to determine impact. When the weighted average score for knowledge, attitudes and practices at baseline was compared with the four-month post-intervention scores, there was a significant increase in the overall score (p = 0.03). All three hospital teams conducted an effective nutrition awareness week, as determined by qualitative data collected from interviews and feedback from educational sessions. The NELICO project and its resulting nutrition awareness weeks were considered innovative in terms of concept and content. It was considered useful both for the junior doctors, who showed improvement in their nutrition knowledge and reported enthusiasm, and for the hospital setting, increasing awareness of clinical and public health nutrition among healthcare professionals. The NELICO project is one innovative method to promote nutrition awareness in tomorrow's doctors and shows they have the enthusiasm and drive to be nutrition champions.
Safety management for polluted confined space with IT system: a running case.
Hwang, Jing-Jang; Wu, Chien-Hsing; Zhuang, Zheng-Yun; Hsu, Yi-Chang
2015-01-01
This study traced a real IT system deployed to enhance occupational safety in a polluted confined space. By incorporating wireless technology, it automatically monitors the status of workers on the site and, when anomalous events are detected, notifies managers promptly. The system, with a redefined standard operations process, is running well at one of Formosa Petrochemical Corporation's refineries. Evidence shows that after deployment the system does enhance the safety level by monitoring workers in real time and by effectively managing and controlling anomalies. This technical architecture can therefore be applied to similar scenarios for safety enhancement purposes.
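The internals of the deployed system are not published; as a rough illustration only, a polling loop of the kind described (read wireless worker telemetry, compare against thresholds, notify a manager on anomaly) might look like the following Python sketch, where the sensor names, thresholds, and notification channel are all assumptions:

    import time

    # Assumed threshold values; a real deployment would load site-specific limits.
    THRESHOLDS = {"co_ppm": 35.0, "o2_pct_min": 19.5}

    def read_sensors(worker_id):
        """Stand-in for the wireless telemetry read from a worker's tag."""
        return {"co_ppm": 12.0, "o2_pct": 20.9, "silence_s": 3}

    def notify_manager(worker_id, reason):
        print(f"ALERT worker={worker_id}: {reason}")  # stand-in for SMS/pager

    def check(worker_id):
        s = read_sensors(worker_id)
        if s["co_ppm"] > THRESHOLDS["co_ppm"]:
            notify_manager(worker_id, f"CO level {s['co_ppm']} ppm")
        if s["o2_pct"] < THRESHOLDS["o2_pct_min"]:
            notify_manager(worker_id, f"low oxygen {s['o2_pct']} pct")
        if s["silence_s"] > 60:  # tag silent too long: possible man-down event
            notify_manager(worker_id, "telemetry lost")

    for _ in range(3):               # a real monitor would loop indefinitely
        for wid in ("W-01", "W-02"):
            check(wid)
        time.sleep(5)                # poll interval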
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trentham, R. C.; Stoudt, E. L.
CO{sub 2} Enhanced Oil Recovery, Sequestration, and Monitoring, Measurement & Verification are topics that are not typically covered in Geoscience, Land Management, and Petroleum Engineering curricula. Students are not typically exposed to the level of training that would prepare them for CO{sub 2} reservoir and aquifer sequestration related projects when they begin assignments in industry. As a result, industry training, schools & conferences are essential training venues for new & experienced personnel working on CO{sub 2} projects for the first time. This project collected and/or generated industry-level CO{sub 2} training to create modules which faculty can utilize as presentations, projects, field trips and site visits for undergrad and grad students and prepare them to "hit the ground running" & be contributing participants in CO{sub 2} projects with minimal additional training. In order to create the modules, UTPB/CEED utilized a variety of sources. Data & presentations from industry CO{sub 2} Flooding Schools & Conferences, Carbon Management Workshops, UTPB classes, and other venues were tailored to provide introductory reservoir & aquifer training, state-of-the-art methodologies, field seminars and road logs, site visits, and case studies for students. After discussions with faculty at UTPB, Sul Ross, Midland College, and other universities, and with petroleum industry professionals, it was decided to base the module sets on a series of road logs from Midland to, and through, a number of Permian Basin CO{sub 2} Enhanced Oil Recovery (EOR) projects, CO{sub 2} Carbon Capture, Utilization and Storage (CCUS) projects and outcrop equivalents of the formations where CO{sub 2} is being utilized, or will be utilized, in EOR projects in the Permian Basin. Although road logs to and through these projects exist, none of them included CO{sub 2}-specific information. Over 1400 miles of road logs were created or revised specifically to highlight CO{sub 2} EOR projects. After testing a number of different entry points into the data set with students and faculty from a number of different universities, it was clear that a standard website presentation with a list of available PowerPoint presentations, Excel spreadsheets, Word documents and PDFs would not entice faculty, staff, and students at universities to delve deeper into the website http://www.utpb.edu/ceed/student modules.
A new climate modeling framework for convection-resolving simulation at continental scale
NASA Astrophysics Data System (ADS)
Charpilloz, Christophe; di Girolamo, Salvatore; Arteaga, Andrea; Fuhrer, Oliver; Hoefler, Torsten; Schulthess, Thomas; Schär, Christoph
2017-04-01
Major uncertainties remain in our understanding of the processes that govern the water cycle in a changing climate and their representation in weather and climate models. Of particular concern are heavy precipitation events of convective origin (thunderstorms and rain showers). The aim of the crCLIM project [1] is to propose a new climate modeling framework that alleviates the I/O bottleneck in large-scale, convection-resolving climate simulations and thus to enable new analysis techniques for climate scientists. Due to the large computational costs, convection-resolving simulations are currently restricted to small computational domains or very short time scales, unless the largest available supercomputer systems, such as hybrid CPU-GPU architectures, are used [3]. Hence, the COSMO model has been adapted to run on these architectures for research and production purposes [2]. However, the amount of generated data also increases, and storing this data becomes infeasible, making the analysis of simulation results impractical. To circumvent this problem and enable high-resolution climate models, we propose a data-virtualization layer (DVL) that re-runs simulations on demand and transparently manages the data for the analysis; that is, we trade off computational effort (time) for storage (space). This approach also requires a bit-reproducible version of the COSMO model that produces identical results on different architectures (CPUs and GPUs) [4], which will be coupled with a performance model in order to enable optimal re-runs depending on the requirements of the re-run and the available resources. In this contribution, we discuss the strategy to develop the DVL, a first performance model, the challenge of bit-reproducibility and the first results of the crCLIM project. [1] http://www.c2sm.ethz.ch/research/crCLIM.html [2] O. Fuhrer, C. Osuna, X. Lapillonne, T. Gysi, M. Bianco, and T. Schulthess. "Towards GPU-accelerated operational weather forecasting." In The GPU Technology Conference, GTC. 2013. [3] D. Leutwyler, O. Fuhrer, X. Lapillonne, D. Lüthi, and C. Schär. "Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19." Geoscientific Model Development 9, no. 9 (2016): 3393. [4] A. Arteaga, O. Fuhrer, and T. Hoefler. "Designing bit-reproducible portable high-performance applications." In Parallel and Distributed Processing Symposium, 2014 IEEE 28th International, pp. 1235-1244. IEEE, 2014.
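The trade of computation for storage that the DVL makes can be sketched in a few lines: if a requested output segment is on disk, read it; otherwise re-run the bit-reproducible model for that window and optionally cache the result. This is an illustrative Python sketch, not crCLIM code; run_cosmo_segment is a hypothetical stand-in for the real model driver:

    import os
    import pickle

    CACHE_DIR = "dvl_cache"

    def run_cosmo_segment(start, end, fields):
        """Placeholder: deterministic re-simulation of the window [start, end)."""
        return {f: f"data({f},{start},{end})" for f in fields}

    def fetch(start, end, fields):
        key = os.path.join(CACHE_DIR, f"{start}_{end}.pkl")
        if os.path.exists(key):                       # space already spent
            with open(key, "rb") as fh:
                return pickle.load(fh)
        data = run_cosmo_segment(start, end, fields)  # trade time for space
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(key, "wb") as fh:
            pickle.dump(data, fh)                     # short-lived cache entry
        return data

    print(fetch(0, 24, ["precip", "t2m"]))

Bit-reproducibility is what makes this legitimate: because a re-run yields byte-identical output on any supported architecture, the cached and recomputed paths are interchangeable.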
Surface EMG system for use in long-term vigorous activities
NASA Astrophysics Data System (ADS)
de Luca, G.; Bergman, P.; de Luca, C.
The purpose of the project was to develop an advanced surface electromyographic (EMG) system that is portable, un-tethered, and able to detect high-fidelity EMG signals from multiple channels. The innovation was specifically designed to extend NASA's capability to perform neurological status monitoring for long-term, vigorous activities. These features are a necessary requirement of ground-based and in-flight studies planned for the International Space Station and human expeditions to Mars. The project consisted of developing 1) a portable EMG digital data logger using a handheld PC for acquiring the signal and storing the data from as many as 8 channels, and 2) an EMG electrode/skin interface to improve signal fidelity and skin adhesion in the presence of sweat and mechanical disturbances encountered during vigorous activities. The system, referred to as a MyoMonitor, was configured with a communication port for downloading the data from the data logger to the PC computer workstation. Software specifications were developed and implemented for programming of acquisition protocols, power management, and transferring data to the PC for processing and graphical display. The prototype MyoMonitor was implemented using a handheld PC that features a color LCD screen, enhanced keyboard, extended Lithium Ion battery and recharger, and 128 Mbytes of Flash Memory. The system was designed to be belt-worn, thereby allowing its use under vigorous activities. The Monitor utilizes up to 8 differential surface EMG sensors. The prototype allowed greater than 2 hours of continuous 8-channel EMG data to be collected, or 17.2 hours of continuous single-channel EMG data. Standardized tests in human subjects were conducted to develop the mechanical and electrical properties of the prototype electrode/interface system. Tests conducted during treadmill running and repetitive lifting demonstrated that the prototype interface significantly reduced the detrimental effects of sweat accumulation on signal fidelity. The average number of artifacts contaminating the EMG signals during treadmill running was reduced approximately three-fold by the prototype electrode/interface, when compared to methods currently available. Peel adhesion of the interface to the skin was significantly improved for treadmill running. Similarly, the artifacts from controlled impacts on the electrode housing were significantly reduced for both treadmill running and for the repetitive lifting task.
Sustainable Development: A Strategy for Regaining Control of Northern Mali
2014-06-01
informal attempts to conduct evasive maneuvers to achieve desired end results. The Project for National Security Reform argued that at times "... end runs...recognizing the internal borders that France established in the early twentieth century. Still, Model II optimally assigns projects based on... Project Design 4. In the end, Model I allocated the projects while addressing the following supplemental research questions posed in chapters I and
DOT National Transportation Integrated Search
1976-01-01
Three construction projects affecting streams are being monitored. On two of the projects, those affecting Meadow Run and Moores Creek, the streams are being monitored for flow, suspended solids, rainfall, and benthic populations. Construction has be...
[Groupamatic 360 C1 and automated blood donor processing in a transfusion center].
Guimbretiere, J; Toscer, M; Harousseau, H
1978-03-01
Automation of the donor management flow path is controlled by: --a 3-slip "port a punch" card, --the Groupamatic unit with results sorted out on punched paper tape, --the management computer connected off-line to the Groupamatic. Data tracking at blood collection time is done by punching a card, with the donor card used as a master card. The Groupamatic performs: --a standard blood grouping, with one run for registered donors and two runs for new donors, --a phenotyping with two runs, --a screening for irregular antibodies. The management computer checks the correlation between the data of the two runs, or between the data of a single run and that of the previous file. It updates the data resident in the central file and prints out: --the controls of the different blood groups for the red cell panel, --the listing of error messages, --the listing of emergency call-ups, --the listing of collected blood units when they arrive at the blood center, with quantitative and qualitative information such as number of blood units collected, donor addresses, etc., --statistics, --donor cards, --diplomas.
GEANT4 distributed computing for compact clusters
NASA Astrophysics Data System (ADS)
Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.
2014-11-01
A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User-designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 for large discrete data sets, such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions, or simply speeding the throughput of a single model.
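The work-ticket pattern itself is simple to illustrate. The sketch below is not the g4DistributedRunManager API (which is C++); it only shows the idea of a server handing per-run parameter tickets to workers, here with Python's multiprocessing pool standing in for cluster nodes and with hypothetical ticket contents:

    from multiprocessing import Pool

    def make_tickets():
        # e.g., one ticket per tomography angle, as in an NSECT-style scan
        return [{"run_id": i, "angle_deg": i * 3.6, "events": 100000}
                for i in range(100)]

    def run_simulation(ticket):
        # placeholder for launching one GEANT4 run with these parameters
        return (ticket["run_id"],
                f"simulated {ticket['events']} events "
                f"at {ticket['angle_deg']:.1f} deg")

    if __name__ == "__main__":
        with Pool(processes=8) as pool:        # stands in for cluster nodes
            for run_id, result in pool.imap_unordered(run_simulation,
                                                      make_tickets()):
                print(run_id, result)

Because each ticket fully specifies one independent run, the same pattern scales from a single SMP machine to a cluster simply by changing how workers are launched.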
Vercellesi, L
1999-01-01
Introduction: In 1998 a pharmaceutical company published its Web site to provide an institutional presence, multifunctional information to primary customers and the general public, a new way of access to the company, a link to existing company-sponsored sites, and a platform for future projects. Since the publication, some significant integrations have been added; in particular, one is a primary interactive service addressed to a selected audience. The need has been felt to foster new projects and establish the idea of routinely considering the site as a potential tool in the marketing mix, to provide advanced services to customers. Methods: Re-assessment of the site against its objectives; assessment of its perception among the company's potential suppliers. Results: The issue of Web use was discussed in various management meetings; the trend of Internet use among the primary customers was known; the major concerns expressed were about staffing and return on investment for activities run on the Web. These perceptions are being addressed by making the company more comfortable through: running the site through a detailed process and clear procedures; defining a new process for maintenance of the site, involving representatives of all the functions; procedures and guidelines; a master file of approved answers and company contacts; categories of activities (information, promotion, education, information to investors, general services, target-specific services); and measures for all the activities run on the Web site. Specifically for the Web site, a concise periodical report is being assessed, covering: 1. statistics about hits and mails, compared to the corporate data; 2. indication of new items published; 3. description by the "supplier" of new or ongoing innovative projects, to transfer best practice; 4. basic figures on the Italian trend in Internet use, specifically in the pharmaceutical and medical fields; 5. comments on a few competitor sites; 6. examples of potential uses deriving from other Web sites. Discussion: The comparatively low use of the Internet in Italy has affected the systematic professional exploitation of the company site. The definition of "anarchic" commonly linked to the Web by local media has led to an attempt to "master" and "normalize" the site with a stricter approach than usual: most procedures and guidelines have been designed from scratch, as none were available for similar activities run traditionally. A short set of information has been requested for inclusion in the report: its wide coverage will help to convey a flavour of the global parallel new world developing on the net. Hopefully this approach will help to create a comfortable attitude towards the medium in the whole organisation and to acquire working experience with the net.
NASA Technical Reports Server (NTRS)
Johnson, Charles S.
1986-01-01
The embedded systems running real-time applications, for which Ada was designed, require their own mechanisms for the management of dynamically allocated storage. Due to the performance implications of garbage collection by the KAPSE, there is a need for packages which manage their own internal structures to control their deallocation as well. This places a requirement upon the design of generic packages which manage generically structured private types built up from application-defined input types. These kinds of generic packages should figure greatly in the development of lower-level software such as operating systems, schedulers, controllers, and device drivers, and will manage structures such as queues, stacks, linked lists, files, and binary and multiway (hierarchical) trees. A study was made of the use of limited private types, which are controlled to prevent the inadvertent de-designation of dynamic elements implicit in the assignment operation, in solving the problems of controlling the accumulation of anonymous, detached objects in running systems. The use of deallocator procedures for run-down of application-defined input types during deallocation operations was also examined.
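The idea of a package that controls deallocation of its own internal structures can be illustrated outside Ada. Below is a minimal Python sketch of a fixed pool with an explicit free list (all names hypothetical); an Ada generic package would achieve the same effect with limited private types and deallocator procedures:

    # Cross-language sketch of a package that manages its own storage:
    # a fixed pool with an explicit free list, so deallocation is under
    # the package's control rather than a collector's (hypothetical API).
    class NodePool:
        def __init__(self, size):
            self._slots = [None] * size
            self._free = list(range(size))      # indices of free slots

        def allocate(self, value):
            if not self._free:
                raise MemoryError("pool exhausted")
            idx = self._free.pop()
            self._slots[idx] = value
            return idx                          # a handle, not a raw pointer

        def deallocate(self, idx):
            self._slots[idx] = None             # run-down of the element
            self._free.append(idx)

    pool = NodePool(4)
    h = pool.allocate("queue element")
    pool.deallocate(h)

Returning opaque handles rather than references is what prevents the inadvertent aliasing (de-designation) that plain assignment would otherwise allow.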
2017-12-01
fastened to the deck surface, with spaces approximately every 6 ft (1.8 m) to allow water to run off... run the length of the bridge, touching edge to edge. The girders are through-bolted to the pile caps. Decking is affixed to the girders with deck...
Using Pilots to Assess the Value and Approach of CMMI Implementation
NASA Technical Reports Server (NTRS)
Godfrey, Sara; Andary, James; Rosenberg, Linda
2002-01-01
At Goddard Space Flight Center (GSFC), we have chosen to use the Capability Maturity Model Integrated (CMMI) to guide our process improvement program. Projects at GSFC consist of complex systems of software and hardware that control satellites, operate ground systems, run instruments, manage databases and data, and support scientific research. It is a challenge to launch a process improvement program that encompasses our diverse systems yet is manageable in terms of cost effectiveness. In order to establish the best approach for improvement, our process improvement effort was divided into three phases: 1) Pilot projects; 2) Staged implementation; and 3) Sustainment and continual improvement. During Phase 1 the activities focused on baselining, using pre-appraisals to obtain a baseline for making a better cost and effort estimate for the improvement effort. Pilot pre-appraisals were conducted from different perspectives so that different approaches to process implementation could be evaluated. Phase 1 also concentrated on establishing an improvement infrastructure and training the improvement teams. At the time of this paper, three pilot appraisals had been completed. Our initial appraisal was performed in a flight software area, considering the flight software organization as the organization. The second appraisal was done from a project perspective, focusing on systems engineering and acquisition, and using GSFC as the organization. The final appraisal was in a ground support software area, again using GSFC as the organization. This paper will present our initial approach, lessons learned from all three pilots, and the changes in our approach based on those lessons.
Umatilla Basin Natural Production Monitoring and Evaluation; 2003-2004 Annual Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, Jesse D.M.; Contor, Craig C.; Hoverson, Eric
2005-10-01
The Umatilla Basin Natural Production Monitoring and Evaluation Project (UBNPMEP) is funded by Bonneville Power Administration (BPA) as directed by section 4(h) of the Pacific Northwest Electric Power Planning and Conservation Act of 1980 (P.L. 96-501). This project is in accordance with and pursuant to measures 4.2A, 4.3C.1, 7.1A.2, 7.1C.3, 7.1C.4 and 7.1D.2 of the Northwest Power Planning Council's (NPPC) Columbia River Basin Fish and Wildlife Program (NPPC 1994). Work was conducted by the Fisheries Program of the Confederated Tribes of the Umatilla Indian Reservation (CTUIR). UBNPMEP is coordinated with two ODFW research projects that also monitor and evaluate the success of the Umatilla Fisheries Restoration Plan. Our project deals with the natural production component of the plan, and the ODFW projects evaluate hatchery operations (project No. 19000500, Umatilla Hatchery M & E) and smolt outmigration (project No. 198902401, Evaluation of Juvenile Salmonid Outmigration and Survival in the Lower Umatilla River). Collectively these three projects comprehensively monitor and evaluate natural and hatchery salmonid production in the Umatilla River Basin. Table 1 outlines relationships with other BPA-supported projects. The need for natural production monitoring has been identified in multiple planning documents including Wy-Kan-Ush-Mi Wa-Kish-Wit Volume I, 5b-13 (CRITFC 1996), the Umatilla Hatchery Master Plan (CTUIR & ODFW 1990), the Umatilla Basin Annual Operation Plan (ODFW and CTUIR 2004), the Umatilla Subbasin Summary (CTUIR & ODFW 2001), the Subbasin Plan (CTUIR & ODFW 2004), and the Comprehensive Research, Monitoring, and Evaluation Plan (Schwartz & Cameron, under revision). Natural production monitoring and evaluation is also consistent with Section III, Basinwide Provisions, Strategy 9 of the 2000 Columbia River Basin Fish and Wildlife Program (NPPC 1994, NPPC 2004). The need for monitoring the natural production of salmonids in the Umatilla River Basin developed with the efforts to restore natural populations of spring and fall Chinook salmon (Oncorhynchus tshawytscha) and coho salmon (O. kisutch) and to enhance summer steelhead (O. mykiss). The need for restoration began with agricultural development in the early 1900s that extirpated salmon and reduced steelhead runs (BOR 1988). The most notable development was the construction and operation of Three-Mile Falls Dam (3MD) and other irrigation projects that dewatered the Umatilla River during salmon migrations. The Confederated Tribes of the Umatilla Indian Reservation (CTUIR) and the Oregon Department of Fish and Wildlife (ODFW) developed the Umatilla Hatchery Master Plan to restore the historical fisheries in the basin. The plan was completed in 1990 and included the following objectives: (1) Establish hatchery and natural runs of Chinook and coho salmon. (2) Enhance existing summer steelhead populations through a hatchery program. (3) Provide sustainable tribal and non-tribal harvest of salmon and steelhead. (4) Maintain the genetic characteristics of salmonids in the Umatilla River Basin. (5) Produce almost 48,000 adult returns to Three-Mile Falls Dam. The goals were reviewed in 1999 and were changed to 31,500 adult salmon and steelhead returns (Table 2). We conduct core long-term monitoring activities each year as well as two- and three-year projects that address special needs for adaptive management. Examples of these projects include adult passage evaluations (Contor et al. 1995, Contor et al. 1996, Contor et al. 1997, Contor et al. 1998), genetic monitoring (Currens & Schreck 1995, Narum et al. 2004), and habitat assessment surveys (Contor et al. 1995, Contor et al. 1996, Contor et al. 1997, Contor et al. 1998). Our project goal is to provide quality information to managers and researchers working to restore anadromous salmonids to the Umatilla River Basin. This is the only project that monitors the restoration of naturally producing salmon and steelhead in the basin.
Harborne, A R; Afzal, D C; Andrews, M J
2001-12-01
The coast of Honduras, Central America, represents the southern end of the Mesoamerican Barrier Reef System, although its marine resources are less extensive and studied than nearby Belize and Mexico. However, the coastal zone contains mainland reef formations, mangroves, wetlands, seagrass beds and extensive fringing reefs around its offshore islands, and has a key role in the economy of the country. Like most tropical areas, this complex of benthic habitats experiences limited annual variation in climatic and oceanographic conditions but seasonal and occasional conditions, particularly coral bleaching and hurricanes, are important influences. The effects of stochastic factors on the country's coral reefs were clearly demonstrated during 1998 when Honduras experienced a major hurricane and bleaching event. Any natural or anthropogenic impacts on reef health will inevitably affect other countries in Latin America, and vice versa, since the marine resources are linked via currents and the functioning of the system transcends political boundaries. Much further work on, for example, movement of larvae and transfer of pollutants is required to delineate the full extent of these links. Anthropogenic impacts, largely driven by the increasing population and proportion of people living in coastal areas, are numerous and include key factors such as agricultural run-off, over-fishing, urban and industrial pollution (particularly sewage) and infrastructure development. Many of these threats act synergistically and, for example, poor watershed management via shifting cultivation, increases sedimentation and pesticide run-off onto coral reefs, which increases stress to corals already affected by decreasing water quality and coral bleaching. Threats from agriculture and fishing are particularly significant because of the size of both industries. The desire to generate urgently required revenue within Honduras has also led to increased tourism which provides an overarching stress to marine resources since most tourists spend time in the coastal zone. Hence the last decade has seen a dramatic increase in coastal development, a greater requirement for sewage treatment and more demand for freshwater, particularly in the Bay Islands. Although coastal zone management is relatively recent in Honduras, it is gaining momentum from both large-scale initiatives, such as the Ministry of Tourism's 'Bay Islands Environmental Management Project', and national and international NGO projects. For example, a series of marine protected areas and legislative regulations have been established, but management capacity, enforcement and monitoring are limited by funding, expertise and training. Existing and future initiatives, supported by increased political will and environmental awareness of stakeholders, are vital for the long-term economic development of the country.
A Core Plug and Play Architecture for Reusable Flight Software Systems
NASA Technical Reports Server (NTRS)
Wilmot, Jonathan
2006-01-01
The Flight Software Branch, at Goddard Space Flight Center (GSFC), has been working on a run-time approach to facilitate a formal software reuse process. The reuse process is designed to enable rapid development and integration of high-quality software systems and to more accurately predict development costs and schedule. Previous reuse practices have been somewhat successful when the same teams are moved from project to project. But this typically requires taking the software system in an all-or-nothing approach where useful components cannot be easily extracted from the whole. As a result, the system is less flexible and scalable with limited applicability to new projects. This paper will focus on the rationale behind, and implementation of the run-time executive. This executive is the core for the component-based flight software commonality and reuse process adopted at Goddard.
Two Eyes, 3D: A New Project to Study Stereoscopy in Astronomy Education
NASA Astrophysics Data System (ADS)
Price, Aaron; SubbaRao, M.; Wyatt, R.
2012-01-01
"Two Eyes, 3D" is a 3-year NSF funded research project to study the educational impacts of using stereoscopic representations in informal settings. The project funds two experimental studies. The first is focused on how children perceive various spatial qualities of scientific objects displayed in static 2D and 3D formats. The second is focused on how adults perceive various spatial qualities of scientific objects and processes displayed in 2D and 3D movie formats. As part of the project, two brief high-definition films about variable stars will be developed. Both studies will be mixed-method and look at prior spatial ability and other demographic variables as covariates. The project is run by the American Association of Variable Star Observers, Boston Museum of Science and the Adler Planetarium and Astronomy Museum with consulting from the California Academy of Sciences. Early pilot results will be presented. All films will be released into the public domain, as will the assessment software designed to run on tablet computers (iOS or Android).
NASA Astrophysics Data System (ADS)
Valentic, T. A.
2012-12-01
The Data Transport Network is designed for the delivery of data from scientific instruments located at remote field sites with limited or unreliable communications. Originally deployed at the Sondrestrom Research Facility in Greenland over a decade ago, the system supports the real-time collection and processing of data from large instruments such as incoherent scatter radars and lidars. In recent years, the Data Transport Network has been adapted to small, low-power embedded systems controlling remote instrumentation platforms deployed throughout the Arctic. These projects include multiple buoys from the O-Buoy, IceLander and IceGoat programs, renewable energy monitoring at the Imnavait Creek and Ivotuk field sites in Alaska, and remote weather observation stations in Alaska and Greenland. This presentation will discuss the common communications controller developed for these projects. Although varied in their application, each of these systems shares a number of common features. Multiple instruments are attached, each of which needs to be power-controlled, sampled, and have its files transmitted offsite. In addition, the power usage of the overall system must be minimized to handle the limited energy available from sources such as solar, wind and fuel cells. The communications links are satellite-based. The buoys and weather stations utilize Iridium, requiring the system to handle the common drop-outs and the high-latency, low-bandwidth nature of the link. The communications controller is an off-the-shelf, low-power, single-board computer running a customized version of the Linux operating system. The Data Transport Network provides a Python-based software framework for writing individual data collection programs and supplies a number of common services for configuration, scheduling, logging, data transmission and resource management. Adding a new instrument involves writing only the code necessary for interfacing to the hardware. Individual programs communicate with the system services using XML-RPC. The scheduling algorithms have access to the current position and power levels, allowing instruments such as cameras to be run only during daylight hours or when sufficient power is available. The resource manager monitors the use of common devices such as the USB bus or Ethernet ports, and can power them down when they are not being used. This management lets us drop the power consumption from an average of 1 W to 250 mW.
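As an illustration of the XML-RPC pattern mentioned above, the following self-contained Python sketch registers a toy power-monitor service and has an instrument program query it before sampling. The service name, method, port, and threshold are assumptions, not the actual Data Transport Network interfaces:

    from xmlrpc.server import SimpleXMLRPCServer
    import threading
    import xmlrpc.client

    def battery_voltage():
        return 12.6          # placeholder for a real power-monitor reading

    # System service side: expose the reading over XML-RPC.
    server = SimpleXMLRPCServer(("localhost", 8001), allow_none=True,
                                logRequests=False)
    server.register_function(battery_voltage, "power.battery_voltage")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Instrument program side: ask the power service before sampling.
    proxy = xmlrpc.client.ServerProxy("http://localhost:8001")
    if proxy.power.battery_voltage() > 11.5:    # enough energy to run?
        print("powering camera and sampling")
    else:
        print("skipping sample to conserve power")

Keeping the instrument code on the client side of a small RPC boundary is what lets a new instrument be added by writing only its hardware interface.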
Fredriksen, Per Morten; Mamen, Asgeir; Gammelsrud, Heidi; Lindberg, Morten; Hjelle, Ole Petter
2018-05-01
The purpose of this study was to examine factors affecting running performance in children. A cross-sectional study explored the relationships between height, weight, waist circumference, muscle mass, body fat percentage, relevant biomarkers, and the Andersen intermittent running test in 2272 children aged 6 to 12 years. Parental education level was used as a non-physiological explanatory variable. Mean values (SD) and percentiles are presented as reference values. Height (β = 6.4, p < .0001), high values of haemoglobin (β = 18, p = .013) and a low percentage of body fat (β = -7.5, p < .0001) showed an association with results from the running test. In addition, a high parental education level showed a positive association with the running test. Boys display better running performance than girls at all ages except 7 years old, probably because of additional muscle mass and less fatty tissue. Height and an increased level of haemoglobin positively affected running performance. A lower body fat percentage and a high parental education level correlated with better running performance.
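For readers who want to see the shape of such an analysis, the sketch below fits the same kind of multiple linear regression by ordinary least squares on synthetic data; the variable names and effect sizes are invented to mirror the reported coefficients and are not the study's data:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    height_cm = rng.normal(140, 10, n)
    haemoglobin = rng.normal(13, 1, n)
    body_fat_pct = rng.normal(20, 5, n)
    # synthetic outcome: Andersen test distance (m), built from assumed betas
    distance_m = (6.4 * height_cm + 18 * haemoglobin
                  - 7.5 * body_fat_pct + rng.normal(0, 40, n))

    # design matrix with an intercept column, solved by least squares
    X = np.column_stack([np.ones(n), height_cm, haemoglobin, body_fat_pct])
    beta, *_ = np.linalg.lstsq(X, distance_m, rcond=None)
    print(dict(zip(["intercept", "height", "haemoglobin", "body_fat"], beta)))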
ERIC Educational Resources Information Center
Greer, Leslie
1977-01-01
The Sociedade Brasileira de Cultura Inglesa of Sao Paolo, Brazil, is an English teaching center which also runs an introductory course to train teachers of English. This article describes some of the projects completed by prospective teachers; they include language games, pictures, cartoons, role-playing and writing creative dialogue. (CHK)
Physics Parameterization for Seasonal Prediction
2013-09-30
particularly the Madden-Julian Oscillation (MJO). We are continuing our participation in the project "Vertical Structure and Diabatic Processes of the MJO". Results are shown for: a) TRMM rainfall, b) the NAVGEM 20-year run submitted for the YOTC/GEWEX project "Vertical Structure and Diabatic Processes of the MJO".
Stream Restoration to Manage Nutrients in Degraded Watersheds
Historic land-use change can reduce water quality by impairing the ability of stream ecosystems to efficiently process nutrients such as nitrogen. Study results of two streams (Minebank Run and Big Spring Run) affected by urbanization, quarrying, agriculture, and impoundments in...
Opening Educational Practices in Scotland (OEPS)
ERIC Educational Resources Information Center
Cannell, Pete; Page, Anna; Macintyre, Ronald
2016-01-01
OEPS is a cross-sector project led by the Open University in Scotland (OUiS) and funded by the Scottish Funding Council. The project began in late spring 2014 and runs until the end of July 2017. It has its origins in OER projects carried out by the OUiS over the preceding four years. In most cases these involved close partnership between the…
Project ELaNa and NASA's CubeSat Initiative
NASA Technical Reports Server (NTRS)
Skrobot, Garrett Lee
2010-01-01
This slide presentation reviews the NASA program to use expendable launch vehicles (ELVs) to launch nanosatellites for the purpose of enhancing educational research. The Educational Launch of Nanosatellites (ELaNa) project, run out of the Launch Services Program, is requesting proposals for CubeSat-type payloads to provide information that will aid or verify NASA project designs while supporting higher-education research.
ERIC Educational Resources Information Center
Wu, Chengqing; Chanda, Emmanuel; Willison, John
2014-01-01
Honours research projects in the School of Civil, Environmental and Mining Engineering at the University of Adelaide are run with small groups of students working with an academic supervisor in a chosen area for one year. The research project is mainly self-directed study, which makes it very difficult to fairly assess the contribution of…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-20
... end of the headrace where it runs diagonally across the main channel of the river approximately 4,970... not used under normal run-of-river operation. The normal water surface elevation of the project...-3 are vertical-shaft, fixed-blade, Kaplan turbines; unit 4 is a vertical-shaft, manually adjustable...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-06
... it runs diagonally across the main channel of the river approximately 4,970 feet to the west shore of... normal run-of-river operation. The normal water surface elevation of the project impoundment is 276.5... appurtenant equipment. The hydraulic equipment for units 1-3 are vertical-shaft, fixed-blade, Kaplan turbines...
The Scylla Multi-Code Comparison Project
NASA Astrophysics Data System (ADS)
Maller, Ariyeh; Stewart, Kyle; Bullock, James; Oñorbe, Jose; Scylla Team
2016-01-01
Cosmological hydrodynamical simulations are one of the main techniques used to understand galaxy formation and evolution. However, it is far from clear to what extent different numerical techniques and different implementations of feedback yield different results. The Scylla Multi-Code Comparison Project seeks to address this issue by running identical initial condition simulations with different popular hydrodynamic galaxy formation codes. Here we compare simulations of a Milky Way mass halo using the codes enzo, ramses, art, arepo and gizmo-psph. The different runs produce galaxies with a variety of properties. There are many differences, but also many similarities. For example, we find that cold flow disks exist in all runs: extended gas structures, far beyond the galactic disk, that show signs of rotation. Also, the angular momentum of warm gas in the halo is much larger than the angular momentum of the dark matter. We also find notable differences between runs. The temperature and density distribution of hot gas can differ by over an order of magnitude between codes, and the stellar mass to halo mass relation also varies widely. These results suggest that observations of galaxy gas halos and the stellar mass to halo mass relation can be used to constrain the correct model of feedback.
Evaluation and treatment of biking and running injuries.
Oser, Sean M; Oser, Tamara K; Silvis, Matthew L
2013-12-01
Exercise is universally recognized as a key feature for maintaining good health. Likewise, lack of physical activity is a major risk factor for chronic disease and disability, an especially important fact considering our rapidly aging population. Biking and running are frequently recommended as forms of exercise. As more individuals participate in running-related and cycling-related activities, physicians must be increasingly aware of the common injuries encountered in these pursuits. This review focuses on the evaluation and management of common running-related and cycling-related injuries. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Vondran, Gary; Chao, Hui; Lin, Xiaofan; Beyer, Dirk; Joshi, Parag; Atkins, Brian; Obrador, Pere
2006-02-01
Running a targeted campaign involves coordination and management across numerous organizations and complex process flows. Everything from market analytics on customer databases, acquiring content and images, composing the materials, meeting the sponsoring enterprise's brand standards, and driving through production and fulfillment, to evaluating results is currently performed by experienced, highly trained staff. Presented is a developed solution that not only brings together technologies that automate each process, but also automates the entire flow so that a novice user could easily run a successful campaign from their desktop. This paper presents the technologies, structure, and process flows used to bring this system together. Highlighted will be how the complexity of running a targeted campaign is hidden from the user through technologies, all while providing the benefits of a professionally managed campaign.
An enhanced Ada run-time system for real-time embedded processors
NASA Technical Reports Server (NTRS)
Sims, J. T.
1991-01-01
An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.
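The key property described, precise periodic task execution without accumulated drift, is language-independent and easy to sketch. The following Python illustration (not the Ada run-time itself) schedules each release at an absolute time on the task's own timeline, so execution-time jitter does not accumulate across periods:

    import time

    def run_periodic(task, period_s, releases):
        """Release `task` every `period_s` seconds, `releases` times."""
        next_release = time.monotonic()
        for _ in range(releases):
            task()
            next_release += period_s          # absolute, drift-free deadline
            delay = next_release - time.monotonic()
            if delay > 0:
                time.sleep(delay)             # else: overrun, start at once

    run_periodic(lambda: print("control step", time.monotonic()), 0.1, 5)

Sleeping until an absolute deadline, rather than for a fixed delta after each run, is the standard way to keep a control loop's long-term rate exact.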
Aozan: an automated post-sequencing data-processing pipeline.
Perrin, Sandrine; Firmo, Cyril; Lemoine, Sophie; Le Crom, Stéphane; Jourdren, Laurent
2017-07-15
Data management and quality control of output from Illumina sequencers is a disk space- and time-consuming task. Thus, we developed Aozan to automatically handle data transfer, demultiplexing, conversion and quality control once a run has finished. This software greatly improves run data management and the monitoring of run statistics via automatic emails and HTML web reports. Aozan is implemented in Java and Python, supported on Linux systems, and distributed under the GPLv3 License at: http://www.outils.genomique.biologie.ens.fr/aozan/ . Aozan source code is available on GitHub: https://github.com/GenomicParisCentre/aozan . aozan@biologie.ens.fr. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
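The general shape of such post-run automation is easy to sketch: watch the sequencer output directory and, when a run's end-of-run marker appears, launch the processing chain once. The sketch below is generic Python, not Aozan's actual implementation; the directory layout and the RTAComplete.txt sentinel are assumptions about a typical Illumina setup:

    import pathlib
    import time

    RUNS = pathlib.Path("sequencer_output")   # assumed watch directory
    DONE = set()

    def process(run_dir):
        # stand-ins for the real steps: transfer, demultiplex, QC, report
        print(f"demultiplexing {run_dir.name}")
        print(f"running QC and emailing report for {run_dir.name}")

    while True:
        for run_dir in RUNS.glob("*"):
            sentinel = run_dir / "RTAComplete.txt"  # assumed run-end marker
            if sentinel.exists() and run_dir.name not in DONE:
                process(run_dir)
                DONE.add(run_dir.name)
        time.sleep(60)                             # poll once a minute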
49 CFR 633.27 - Implementation of a project management plan.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 7 2010-10-01 2010-10-01 false Implementation of a project management plan. 633... TRANSIT ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROJECT MANAGEMENT OVERSIGHT Project Management Plans § 633.27 Implementation of a project management plan. (a) Upon approval of a project management plan by...
Agile Project Management for e-Learning Developments
ERIC Educational Resources Information Center
Doherty, Iain
2010-01-01
We outline the project management tactics that we developed in praxis in order to manage elearning projects and show how our tactics were enhanced through implementing project management techniques from a formal project management methodology. Two key factors have contributed to our project management success. The first is maintaining a clear…
49 CFR 633.25 - Contents of a project management plan.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 7 2010-10-01 2010-10-01 false Contents of a project management plan. 633.25... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROJECT MANAGEMENT OVERSIGHT Project Management Plans § 633.25 Contents of a project management plan. At a minimum, a recipient's project management plan shall include...
49 CFR 633.25 - Contents of a project management plan.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 7 2012-10-01 2012-10-01 false Contents of a project management plan. 633.25... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROJECT MANAGEMENT OVERSIGHT Project Management Plans § 633.25 Contents of a project management plan. At a minimum, a recipient's project management plan shall include...
49 CFR 633.25 - Contents of a project management plan.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 7 2014-10-01 2014-10-01 false Contents of a project management plan. 633.25... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROJECT MANAGEMENT OVERSIGHT Project Management Plans § 633.25 Contents of a project management plan. At a minimum, a recipient's project management plan shall include...
49 CFR 633.27 - Implementation of a project management plan.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 7 2011-10-01 2011-10-01 false Implementation of a project management plan. 633... TRANSIT ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROJECT MANAGEMENT OVERSIGHT Project Management Plans § 633.27 Implementation of a project management plan. (a) Upon approval of a project management plan by...
49 CFR 633.27 - Implementation of a project management plan.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 7 2012-10-01 2012-10-01 false Implementation of a project management plan. 633... TRANSIT ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROJECT MANAGEMENT OVERSIGHT Project Management Plans § 633.27 Implementation of a project management plan. (a) Upon approval of a project management plan by...
49 CFR 633.27 - Implementation of a project management plan.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 7 2014-10-01 2014-10-01 false Implementation of a project management plan. 633... TRANSIT ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROJECT MANAGEMENT OVERSIGHT Project Management Plans § 633.27 Implementation of a project management plan. (a) Upon approval of a project management plan by...
49 CFR 633.25 - Contents of a project management plan.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 7 2013-10-01 2013-10-01 false Contents of a project management plan. 633.25... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROJECT MANAGEMENT OVERSIGHT Project Management Plans § 633.25 Contents of a project management plan. At a minimum, a recipient's project management plan shall include...
49 CFR 633.27 - Implementation of a project management plan.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 7 2013-10-01 2013-10-01 false Implementation of a project management plan. 633... TRANSIT ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROJECT MANAGEMENT OVERSIGHT Project Management Plans § 633.27 Implementation of a project management plan. (a) Upon approval of a project management plan by...
A local scale assessment of the climate change sensitivity of snow in Pyrenean ski resorts
NASA Astrophysics Data System (ADS)
Pesado, Cristina; Pons, Marc; Vilella, Marc; López-Moreno, Juan Ignacio
2016-04-01
The Pyrenees host one of the largest ski areas in Europe after the Alps, encompassing the mountain areas of the south of France, the north of Spain and the small country of Andorra. In this region, winter tourism is one of the main sources of income and a driving force of local development in these mountain communities. However, this activity has been identified as one of the most vulnerable to future climate change due to the projected decrease in natural snow and snowmaking capacity. Moreover, different areas within the same ski resort can have very different vulnerability depending on the geographic features of the area, such as aspect, steepness or elevation, and on the technical management of the slopes. Technical management practices such as snowmaking and grooming have been identified as having a significant impact on the response of the snowpack in a warmer climate. In this line, two ski resorts were analyzed in depth, taking into account both local geographical features and the effect of the technical management of the runs. Principal Component Analysis was used to classify the main areas of each resort based on the geographic features (elevation, aspect and steepness) and to identify the main representative areas with different local features. Snow energy and mass balance was simulated in the different representative areas using the Cold Regions Hydrological Model (CRHM), assuming different magnitudes of climate warming (increases of 2°C and 4°C in the mean winter temperature), both in natural conditions and assuming technical management of the slopes. These first results showed the different sensitivity and vulnerability to climate change depending on the local geography of the resort and the management of the ski runs, demonstrating the importance of including these variables when analyzing the local vulnerability of a ski resort and the potential adaptation measures in each particular case.
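A minimal version of the classification step, reducing slope descriptors with PCA and grouping runs into representative areas, could look like the sketch below; the feature values are synthetic, and scikit-learn's KMeans stands in for whatever grouping procedure the study actually used:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    # columns: elevation (m), aspect (deg from N), steepness (deg); synthetic
    slopes = np.column_stack([rng.uniform(1500, 2600, 120),
                              rng.uniform(0, 360, 120),
                              rng.uniform(5, 40, 120)])

    # standardize, project onto the leading components, then group
    X = StandardScaler().fit_transform(slopes)
    scores = PCA(n_components=2).fit_transform(X)
    areas = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
    print("representative area per run:", areas[:10])

Each resulting cluster centroid can then serve as one "representative area" in which the snow energy and mass balance is simulated.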
Development and validation of the European Cluster Assimilation Techniques run libraries
NASA Astrophysics Data System (ADS)
Facskó, G.; Gordeev, E.; Palmroth, M.; Honkonen, I.; Janhunen, P.; Sergeev, V.; Kauristie, K.; Milan, S.
2012-04-01
The European Commission funded the European Cluster Assimilation Techniques (ECLAT) project as a collaboration of five leading European universities and research institutes. A main contribution of the Finnish Meteorological Institute (FMI) is to provide a wide range of global MHD runs with the Grand Unified Magnetosphere Ionosphere Coupling simulation (GUMICS). The runs are divided into two categories: synthetic runs investigating the extent to which solar wind drivers can influence magnetospheric dynamics, and dynamic runs using measured solar wind data as input. Here we consider the first set of runs, with synthetic solar wind input. The solar wind density, velocity and the interplanetary magnetic field had different magnitudes and orientations; furthermore, two F10.7 flux values were selected, for solar radiation minimum and maximum. The solar wind parameter values were held constant so that a constant, stable solution was achieved. All configurations were run several times with three different tilt angles (-15°, 0°, +15°) in the GSE X-Z plane. The results of the 192 simulations, the so-called "synthetic run library", were visualized and uploaded to the homepage of the FMI after validation. Here we present details of these runs.
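The structure of such a run library is just a Cartesian product of driver levels. The sketch below enumerates 192 combinations in Python; the specific parameter values are placeholders chosen only so that the product matches the 192 runs mentioned above (32 driver combinations x 2 F10.7 values x 3 tilt angles), not the actual ECLAT values:

    from itertools import product

    densities = [1, 7]                    # cm^-3 (assumed levels)
    velocities = [400, 700]               # km/s (assumed)
    imf_magnitudes = [5, 10]              # nT (assumed)
    imf_clock_angles = [0, 90, 180, 270]  # deg (assumed orientations)
    f107 = [70, 200]                      # solar minimum / maximum
    tilt_deg = [-15, 0, 15]               # tilt in the GSE X-Z plane

    runs = list(product(densities, velocities, imf_magnitudes,
                        imf_clock_angles, f107, tilt_deg))
    print(len(runs), "GUMICS runs")       # 2*2*2*4 * 2 * 3 = 192

Each tuple then becomes one steady-driver GUMICS configuration, run until a stable solution is reached.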
Marques, M; Hogland, W
2001-02-01
Stormwater run-off from twelve different areas and roads was characterized at a modern waste disposal site where several waste management activities are carried out. Using nonparametric statistics, medians and confidence intervals of the medians were calculated for 22 stormwater quality parameters. Suspended solids, chemical oxygen demand, biochemical oxygen demand, total nitrogen and total phosphorus, as well as run-off from several areas, showed measured values above standard limits for discharge into recipient waters--even higher than those of leachate from covered landfill cells. Of the heavy metals analyzed, copper, zinc and nickel were the most prevalent, being detected in every sample. Higher concentrations of metals such as zinc, nickel, cobalt, iron and cadmium were found in run-off from composting areas than in run-off from areas containing stored and exposed scrap metal. This suggests that factors other than the total amount of exposed material affect the concentration of metals in run-off, such as binding to organic compounds and hydrological transport efficiency. The pollutants transported by stormwater represent a significant environmental threat, comparable to leachate. Careful design, monitoring and maintenance of stormwater run-off drainage systems and infiltration elements are needed if infiltration is to be used as an on-site treatment strategy.
A New Tool for Effective and Efficient Project Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willett, Jesse A
2011-12-01
Organizations routinely handle thousands of projects per year, and it is difficult to manage all these projects concurrently. Too often, projects do not get the attention they need when they need it. Management inattention can lead to late projects or projects with less than desirable content and/or deliverables. This paper discusses the application of Visual Project Management (VPM) as a method to track and manage projects. The VPM approach proved to be a powerful management tool without the overhead and restrictions of traditional management methods.
Management in English Language Teaching.
ERIC Educational Resources Information Center
White, Ron; And Others
Teachers making the transition from the classroom to management can find many guides on management, but none on management in English language teaching (ELT) schools in the United Kingdom and elsewhere in the world. This book offers that guidance to new managers and administrators interested in running an effective teaching organization. Because…
The role of reservoir storage in large-scale surface water availability analysis for Europe
NASA Astrophysics Data System (ADS)
Garrote, L. M.; Granados, A.; Martin-Carrasco, F.; Iglesias, A.
2017-12-01
A regional assessment of current and future water availability in Europe is presented in this study. The assessment was made using the Water Availability and Adaptation Policy Analysis (WAAPA) model. The model was built on the river network derived from the Hydro1K digital elevation maps, including all major river basins of Europe. Reservoir storage volume was taken from the World Register of Dams of ICOLD, including all dams with storage capacity over 5 hm3. Potential Water Availability is defined as the maximum amount of water that could be supplied at a certain point of the river network to satisfy a regular demand under pre-specified reliability requirements. Water availability is the combined result of hydrological processes, which determine streamflow in natural conditions, and human intervention, which determines the available hydraulic infrastructure to manage water and establishes water supply conditions through operating rules. The WAAPA algorithm estimates the maximum demand that can be supplied at every node of the river network accounting for the regulation capacity of reservoirs under different management scenarios. The model was run for a set of hydrologic scenarios taken from the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP), where the PCRGLOBWB hydrological model was forced with results from five global climate models. Model results allow the estimation of potential water stress by comparing water availability to projections of water abstractions along the river network under different management alternatives. The set of sensitivity analyses performed showed the effect of policy alternatives on water availability and highlighted the large uncertainties linked to hydrological and anthropological processes.
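The availability computation at a single node can be sketched as a bisection over a reservoir mass-balance simulation: the largest constant demand that never empties storage over the inflow record. This is an illustrative Python sketch with made-up inflows and a strict (100%) reliability criterion, whereas WAAPA itself works with pre-specified reliability requirements over a whole river network:

    def reliable(demand, inflows, capacity):
        """Monthly mass balance; True if the demand is always met."""
        storage = capacity
        for q in inflows:
            storage = min(capacity, storage + q) - demand
            if storage < 0:
                return False                # failure: demand unmet this month
        return True

    def water_availability(inflows, capacity, tol=0.01):
        lo, hi = 0.0, max(inflows)          # demand cannot exceed peak inflow
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if reliable(mid, inflows, capacity):
                lo = mid                    # feasible: try a larger demand
            else:
                hi = mid                    # infeasible: back off
        return lo

    inflows = [8, 2, 1, 0.5, 3, 9, 12, 10, 6, 4, 2, 1]   # hm^3/month, made up
    print(round(water_availability(inflows, capacity=20), 2), "hm^3/month")

Bisection works here because reliability is monotone in demand: any demand below a feasible one is also feasible.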
NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (AMDAHL VERSION)
NASA Technical Reports Server (NTRS)
Rogers, J. E.
1994-01-01
The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold). NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition. NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III). NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001). The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).
NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Rogers, J. E.
1994-01-01
The NASA Device Independent Graphics Library, NASADIG, is described in the Amdahl version entry above; the feature list, platform support, and distribution details are identical. The UNIX version is NASADIG 5.1 (MSC-22001), distributed on a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format, and has been implemented on Sun4 (SunOS), SGI IRIS (IRIX), Hewlett Packard 9000 (HP-UX), and Convex (Convex OS) systems.
Bussery, Justin; Denis, Leslie-Alexandre; Guillon, Benjamin; Liu, Pengfeï; Marchetti, Gino; Rahal, Ghita
2018-04-01
We describe the genesis, design and evolution of a computing platform designed and built to improve the success rate of biomedical translational research. The eTRIKS project platform was developed to securely host heterogeneous types of data and to provide an optimal environment for running tranSMART analytical applications. Many types of data can now be hosted, among them multi-OMICS data, preclinical laboratory data and clinical information, including longitudinal data sets. During the last two years, the platform has matured into a robust translational research knowledge management system that is able to host other data mining applications and support the development of new analytical tools. Copyright © 2018 Elsevier Ltd. All rights reserved.
Probabilistic load simulation: Code development status
NASA Astrophysics Data System (ADS)
Newell, J. F.; Ho, H.
1991-05-01
The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are induced in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system, which also includes probabilistic structural analyses and reliability and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to couple a knowledge-based system with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads, with load information supplied by the CLS knowledge base.
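The abstract sketches an architecture in which a knowledge base supplies load information and a simulation module generates the probabilistic loads numerically. As a hedged illustration of that second step only, the following Python sketch superposes randomly drawn component loads into a composite spectrum and summarizes it with percentiles; the component models, distributions, and parameter values are invented for illustration and do not come from CLS.

```python
# Illustrative Monte Carlo sketch of composite load spectrum generation:
# draw uncertain component load parameters and superpose the components.
# All distributions and values below are assumptions, not CLS data.
import numpy as np

rng = np.random.default_rng(42)
N_RUNS = 10_000
t = np.linspace(0.0, 1.0, 256)  # normalized mission time

def engine_thrust_load(rng):
    """Quasi-static thrust component with uncertain amplitude (assumed normal)."""
    amp = rng.normal(loc=100.0, scale=5.0)  # kN
    return amp * np.ones_like(t)

def pump_vibration_load(rng):
    """Vibratory component with uncertain amplitude and frequency (assumed)."""
    amp = rng.lognormal(mean=1.0, sigma=0.3)  # kN
    freq = rng.uniform(40.0, 60.0)            # cycles per unit mission time
    return amp * np.sin(2.0 * np.pi * freq * t)

composite = np.empty((N_RUNS, t.size))
for i in range(N_RUNS):
    composite[i] = engine_thrust_load(rng) + pump_vibration_load(rng)

# Probabilistic summary of the composite load over time -- the kind of
# output a downstream probabilistic structural analysis would consume.
p50, p99 = np.percentile(composite, [50, 99], axis=0)
print(f"median peak load: {p50.max():.1f} kN, 99th-percentile peak: {p99.max():.1f} kN")
```

In the actual CLS architecture, the knowledge base would supply these distributions and set up the runs rather than the hard-coded constants used here.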
NASA Technical Reports Server (NTRS)
2008-01-01
This video shows the propulsion system on an engineering model of NASA's Phoenix Mars Lander being successfully tested. Instead of fuel, water is run through the propulsion system to make sure that the spacecraft holds up to vibrations caused by pressure oscillations. The test was performed very early in the development of the mission, in 2005, at Lockheed Martin Space Systems, Denver. Early testing was possible because Phoenix's main structure was already in place from the 2001 Mars Surveyor program. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
2014-01-21
CAPE CANAVERAL, Fla. – Technicians monitor the progress as a crane lowers the Project Morpheus prototype for positioning on a launch pad at the north end of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. The prototype lander is being prepared for its fourth free flight test at Kennedy. Morpheus will launch from the ground over a flame trench and then descend and land on a dedicated pad inside the autonomous landing and hazard avoidance technology, or ALHAT, hazard field. Project Morpheus integrates NASA’s ALHAT and an engine that runs on liquid oxygen and methane, or green propellants, into a fully operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://www.nasa.gov/centers/johnson/exploration/morpheus. Photo credit: NASA/Cory Huston
2014-01-21
CAPE CANAVERAL, Fla. – Technicians and engineers monitor the progress as the Project Morpheus prototype lander is lifted by crane for positioning on a launch pad at the north end of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. The prototype lander is being prepared for its fourth free flight test at Kennedy. Morpheus will launch from the ground over a flame trench and then descend and land on a dedicated pad inside the autonomous landing and hazard avoidance technology, or ALHAT, hazard field. Project Morpheus integrates NASA’s ALHAT and an engine that runs on liquid oxygen and methane, or green propellants, into a fully operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://www.nasa.gov/centers/johnson/exploration/morpheus. Photo credit: NASA/Cory Huston
2014-01-21
CAPE CANAVERAL, Fla. – Technicians monitor the progress as the Project Morpheus prototype lander is lifted by crane for positioning on a launch pad at the north end of the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida. The prototype lander is being prepared for its fourth free flight test at Kennedy. Morpheus will launch from the ground over a flame trench and then descend and land on a dedicated pad inside the autonomous landing and hazard avoidance technology, or ALHAT, hazard field. Project Morpheus integrates NASA’s ALHAT and an engine that runs on liquid oxygen and methane, or green propellants, into a fully operational lander that could deliver cargo to other planetary surfaces. The landing facility provides the lander with the kind of field necessary for realistic testing, complete with rocks, craters and hazards to avoid. Morpheus’ ALHAT payload allows it to navigate to clear landing sites amidst rocks, craters and other hazards during its descent. Project Morpheus is being managed under the Advanced Exploration Systems, or AES, Division in NASA’s Human Exploration and Operations Mission Directorate. The efforts in AES pioneer new approaches for rapidly developing prototype systems, demonstrating key capabilities and validating operational concepts for future human missions beyond Earth orbit. For more information on Project Morpheus, visit http://www.nasa.gov/centers/johnson/exploration/morpheus. Photo credit: NASA/Cory Huston
The new Waste Law: Challenging opportunity for future landfill operation in Indonesia.
Meidiana, Christia; Gamse, Thomas
2011-01-01
The Waste Law No. 18/2008, Articles 22 and 44, requires local governments to run environmentally sound landfills. Given the widespread poor quality of waste management in Indonesia, this study aimed to characterize the current situation by evaluating three selected landfills against the ideal conditions of landfill practice, in order to appraise the capability of local governments to comply with the law. The results indicated that the local governments face insufficient budgets, inadequate equipment, uncollected waste and unplanned future landfill locations. All of the selected landfills were partially controlled landfills with open dumping practices predominating. Under such conditions the implementation of sanitary landfills is not necessarily appropriate. The controlled landfill is a more suitable solution, as it offers lower investment and operational costs, makes the selection of a new landfill site unnecessary and can operate with a minimum standard of infrastructure and equipment. The sustainability of future landfill capacity can be maintained by operating the old landfill as a profit-oriented facility through a landfill gas management project or a clean development mechanism project. A collection fee system based on the pay-as-you-throw principle could increase waste income, thereby helping to finance municipal solid waste management.
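The closing sentence proposes a pay-as-you-throw collection fee. A toy Python calculation of how such a volume-based fee compares with a flat monthly household fee is sketched below; all rates and set-out quantities are invented assumptions, not figures from the study.

```python
# Toy pay-as-you-throw (PAYT) comparison: a volume-based fee versus a
# flat monthly household fee. Every number here is an invented assumption.
FLAT_FEE_PER_MONTH = 2.0   # assumed flat household fee, USD/month
PAYT_RATE_PER_BAG = 0.50   # assumed charge per standard waste bag, USD

def monthly_payt_fee(bags_set_out: int) -> float:
    """Fee scales with the waste a household actually sets out."""
    return PAYT_RATE_PER_BAG * bags_set_out

# Assumed set-out rates for three household profiles (bags per month)
households = {"low-waste": 3, "average": 6, "high-waste": 12}
for profile, bags in households.items():
    print(f"{profile}: flat ${FLAT_FEE_PER_MONTH:.2f} vs PAYT ${monthly_payt_fee(bags):.2f}")
```

Under these assumptions, heavier disposers pay more than under the flat fee, which is the mechanism by which PAYT can both raise revenue and discourage waste generation.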
Coetzee, Maureen; Dippenaar, Ansie; Frean, John; Hunt, Richard H
2017-06-30
This article describes the clinical progression, over a period of 5 days, of symptoms following a bite inflicted by a Philodromus sp. spider. Commonly known as 'running spiders', these are not considered to be harmful to humans. This report, however, is the first description of an actual bite by a member of this group of spiders showing cytotoxic envenomation. Management of the bites should be as recommended for other cytotoxic spider bites.
An Ecosystem Service Evaluation Tool to Support Ridge-to-Reef Management and Conservation in Hawaii
NASA Astrophysics Data System (ADS)
Oleson, K.; Callender, T.; Delevaux, J. M. S.; Falinski, K. A.; Htun, H.; Jin, G.
2014-12-01
Faced with increasing anthropogenic stressors and diverse stakeholders, local managers are adopting a ridge-to-reef, multi-objective management approach to restore declining coral reef health. An ecosystem services framework, which integrates ecological indicators and stakeholder values, can foster more applied and integrated research, data collection, and modeling, and thus better inform the decision-making process and realize decision outcomes grounded in stakeholders' values. Here, we describe a research program that (i) leverages remotely sensed and empirical data to build an ecosystem services-based decision-support tool geared towards ridge-to-reef management; and (ii) applies it as part of a structured, value-based decision-making process to inform management in west Maui, a NOAA coral reef conservation priority site. The tool links terrestrial and marine biophysical models in a spatially explicit manner to quantify and map changes in ecosystem services delivery resulting from management actions, projected climate change impacts, and adaptive responses. We couple model outputs with localized valuation studies to translate ecosystem service outcomes into benefits and their associated socio-cultural and/or economic values. Managers can use this tool to run scenarios during their deliberations to evaluate trade-offs, cost-effectiveness, and equity implications of proposed policies. Ultimately, this research program aims to improve the effectiveness, efficiency, and equity outcomes of ecosystem-based management. This presentation will describe our approach, summarize initial results from the terrestrial modeling and economic valuations for west Maui, and highlight how this decision-support tool benefits managers in west Maui.
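The abstract describes running management scenarios through linked biophysical and valuation models to compare trade-offs. The Python sketch below illustrates that scenario-evaluation loop in drastically simplified form; the linear sediment-to-value response, the coefficients, scenario names, and costs are all invented placeholders rather than outputs of the west Maui models.

```python
# Hedged sketch of a scenario-evaluation loop: each management scenario
# reduces modeled sediment export, which recovers some reef-linked service
# value. All names, forms, and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    sediment_reduction: float  # fraction of baseline sediment export avoided (assumed)
    cost_usd: float            # implementation cost (assumed)

BASELINE_REEF_VALUE = 2_000_000.0  # assumed annual value of reef-linked services, USD
SEDIMENT_IMPACT = 0.4              # assumed fraction of that value lost to sediment stress

def evaluate(s: Scenario) -> dict:
    """Translate a land-management scenario into an annual benefit and benefit-cost ratio."""
    recovered = BASELINE_REEF_VALUE * SEDIMENT_IMPACT * s.sediment_reduction
    return {
        "scenario": s.name,
        "annual_benefit_usd": round(recovered),
        "benefit_cost_ratio": round(recovered / s.cost_usd, 2),
    }

scenarios = [
    Scenario("road rehabilitation", sediment_reduction=0.25, cost_usd=150_000),
    Scenario("agricultural buffers", sediment_reduction=0.40, cost_usd=300_000),
]
for s in scenarios:
    print(evaluate(s))
```

A real implementation would replace the linear response with the coupled terrestrial and marine biophysical models, and the fixed dollar coefficients with the localized valuation studies the abstract mentions.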
Feasibility of Using Video Cameras for Automated Enforcement on Red-Light Running and Managed Lanes.
DOT National Transportation Integrated Search
2009-12-01
The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and high occupancy vehicle (HOV) occupancy requirement using video cameras in Nev...
ERIC Educational Resources Information Center
Jones, Makeba; Yonezawa, Susan
2009-01-01
For the past two years, the Center for Research on Educational Equity, Assessment and Teaching Excellence at the University of California, San Diego, has designed and run Student-Created Research projects in eight racially diverse urban and low-income San Diego high schools. Students undertake research projects intended to guide school improvement…
Project Management for International Development.
ERIC Educational Resources Information Center
Axelrod, Valija M.; Magisos, Joel H.
A project developed a content model for international project management training. It also compiled a bibliography of project management references, identified specific project management training needs based upon a survey of international sponsors and contractor personnel, and documented the training needs of international project managers. Data…
Harper, S L; Edge, V L; Cunsolo Willox, A
2012-03-01
Global climate change and its impact on public health exemplify the challenge of managing complexity and uncertainty in health research. The Canadian North is currently experiencing dramatic shifts in climate, resulting in environmental changes which impact Inuit livelihoods, cultural practices, and health. For researchers investigating potential climate change impacts on Inuit health, it has become clear that comprehensive and meaningful research outcomes depend on taking a systemic and transdisciplinary approach that engages local citizens in project design, data collection, and analysis. While it is increasingly recognised that using approaches that embrace complexity is a necessity in public health, mobilizing such approaches from theory into practice can be challenging. In 2009, the Rigolet Inuit Community Government in Rigolet, Nunatsiavut, Canada partnered with a transdisciplinary team of researchers, health practitioners, and community storytelling facilitators to create the Changing Climate, Changing Health, Changing Stories project, aimed at developing a multi-media participatory, community-run methodological strategy to gather locally appropriate and meaningful data to explore climate-health relationships. The goal of this profile paper is to describe how an EcoHealth approach guided by principles of transdisciplinarity, community participation, and social equity was used to plan and implement this climate-health research project. An overview of the project, including project development, research methods, project outcomes to date, and challenges encountered, is presented. Though introduced in this one case study, the processes, methods, and lessons learned are broadly applicable to researchers and communities interested in implementing EcoHealth approaches in community-based research.
Weidenhammer, Wolfgang; Lewith, George; Falkenberg, Torkel; Fønnebø, Vinjar; Johannessen, Helle; Reiter, Bettina; Uehleke, Bernhard; von Ammon, Klaus; Baumhöfener, Franziska; Brinkhaus, Benno
2011-01-01
The status of complementary and alternative medicine (CAM) within the EU needs clarification. The definition and terminology of CAM is heterogeneous. The therapies, legal status, regulations and approaches used vary from country to country but there is widespread use by EU citizens. A coordination project funded by the EU has been launched to improve the knowledge about CAM in Europe. The project aims to evaluate the conditions surrounding CAM use and provision in Europe and to develop a roadmap for European CAM research. Specific objectives are to establish an EU network involving centres of research excellence for collaborative projects, to develop consensus-based terminology to describe CAM interventions, to create a knowledge base that facilitates the understanding of patient demand for CAM and its prevalence, to review the current legal status and policies governing CAM provision, and to explore the needs and attitudes of EU citizens with respect to CAM. Based on this information a roadmap will be created that will enable sustainable and prioritised future European research in CAM. CAMbrella encompasses 16 academic research groups from 12 European countries and will run for 36 months starting from January 2010. The project will be delivered in 9 work packages coordinated by a Management Board and directed by a Scientific Steering Committee with support of an Advisory Board. The outcomes generated will be disseminated through the project's website, peer review open access publications and a final conference, with emphasis on current and future EU policies, addressing different target audiences. Copyright © 2011 S. Karger AG, Basel.