Laboratory Computing Resource Center
Shared-resource computing for small research labs.
Ackerman, M J
1982-04-01
A real-time laboratory computer network is described. The network comprises four real-time laboratory minicomputers, one in each of four division laboratories, and a larger minicomputer in a centrally located computer room. Off-the-shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11M multi-user real-time operating system. The cost effectiveness of the shared-resource network and of multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.
ANL site response for the DOE FY1994 information resources management long-range plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boxberger, L.M.
1992-03-01
Argonne National Laboratory's ANL Site Response for the DOE FY1994 Information Resources Management (IRM) Long-Range Plan (ANL/TM 500) is one of many contributions to the DOE information resources management long-range planning process and, as such, is an integral part of the DOE policy and program planning system. The Laboratory has constructed this response according to instructions in a Call issued in September 1991 by the DOE Office of IRM Policy, Plans and Oversight. As one of a continuing series, this Site Response is an update and extension of the Laboratory's previous submissions. The response contains both narrative and tabular material. It covers an eight-year period consisting of the base year (FY1991), the current year (FY1992), the budget year (FY1993), the plan year (FY1994), and the out years (FY1995-FY1998). This Site Response was compiled by Argonne National Laboratory's Computing and Telecommunications Division (CTD), which has the responsibility to provide leadership in optimizing computing and information services and disseminating computer-related technologies throughout the Laboratory. The Site Response consists of five parts: (1) a site overview, which describes the ANL mission, overall organizational structure, the strategic approach to meeting information resource needs, the planning process, major issues, and points of contact; (2) a software plan, including a software plan for DOE contractors and an FMS plan for DOE organizations; (3) a computing resources plan; (4) a telecommunications plan; and (5) printing and publishing.
A computational model of the human hand 93-ERI-053
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollerbach, K.; Axelrod, T.
1996-03-01
The objective of the Computational Hand Modeling project was to prove the feasibility of applying the Laboratory's NIKE3D finite element code to orthopaedic problems. Because of the great complexity of anatomical structures and the nonlinearity of their behavior, we have focused on a subset of joints of the hand and lower extremity and have developed algorithms to model their behavior. The algorithms developed here solve fundamental problems in computational biomechanics and can be expanded to describe any other joints of the human body. This kind of computational modeling has never successfully been attempted before, due in part to a lack of biomaterials data and a lack of computational resources. With the computational resources available at the National Laboratories and the collaborative relationships we have established with experimental and other modeling laboratories, we have been in a position to pursue our innovative approach to biomechanical and orthopedic modeling.
ERIC Educational Resources Information Center
Richardson, Jeffrey J.; Adamo-Villani, Nicoletta
2010-01-01
Laboratory instruction is a major component of the engineering and technology undergraduate curricula. Traditional laboratory instruction is hampered by several factors including limited access to resources by students and high laboratory maintenance cost. A photorealistic 3D computer-simulated laboratory for undergraduate instruction in…
Appropriate Use Policy | High-Performance Computing | NREL
Describes the terms of use for National Renewable Energy Laboratory (NREL) High-Performance Computing (HPC) resources, including intellectual property terms for government agency, national laboratory, university, and private-entity users, and multifactor authentication via a physical or virtual one-time-password token.
Utilization of Educationally Oriented Microcomputer Based Laboratories
ERIC Educational Resources Information Center
Fitzpatrick, Michael J.; Howard, James A.
1977-01-01
Describes one approach to supplying engineering and computer science educators with an economical portable digital systems laboratory centered around microprocessors. Expansion of the microcomputer based laboratory concept to include Learning Resource Aided Instruction (LRAI) systems is explored. (Author)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael Pernice
2010-09-01
INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clouse, C. J.; Edwards, M. J.; McCoy, M. G.
2015-07-07
Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high performance computing platforms on which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure that numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.
ANL statement of site strategy for computing workstations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenske, K.R.; Boxberger, L.M.; Amiot, L.W.
1991-11-01
This Statement of Site Strategy describes the procedure at Argonne National Laboratory for defining, acquiring, using, and evaluating scientific and office workstations and related equipment and software in accord with DOE Order 1360.1A (5-30-85) and Laboratory policy. It is Laboratory policy to promote the installation and use of computing workstations to improve productivity and communications for both programmatic and support personnel, to ensure that computing workstation acquisitions meet the expressed need in a cost-effective manner, and to ensure that acquisitions of computing workstations are in accord with Laboratory and DOE policies. The overall computing site strategy at ANL is to develop a hierarchy of integrated computing system resources to address the current and future computing needs of the Laboratory. The major system components of this hierarchical strategy are: supercomputers, parallel computers, centralized general-purpose computers, distributed multipurpose minicomputers, and computing workstations and office automation support systems. Computing workstations include personal computers, scientific and engineering workstations, computer terminals, microcomputers, word processing and office automation electronic workstations, and associated software and peripheral devices costing less than $25,000 per item.
Mathematics and Computer Science | Argonne National Laboratory
Genomics and Systems Biology | LCRC: Laboratory Computing Resource Center | MCSG: Midwest Center for Structural Genomics | NAISE: Northwestern-Argonne Institute of Science & Engineering | SBC: Structural Biology Center
GeoBrain Computational Cyber-laboratory for Earth Science Studies
NASA Astrophysics Data System (ADS)
Deng, M.; di, L.
2009-12-01
Computational approaches (e.g., computer-based data visualization, analysis, and modeling) are critical for conducting increasingly data-intensive Earth science (ES) studies to understand functions and changes of the Earth system. However, Earth scientists, educators, and students currently face two major barriers that prevent them from effectively using computational approaches in their learning, research, and application activities: 1) difficulties in finding, obtaining, and using multi-source ES data; and 2) a lack of analytic functions and computing resources (e.g., analysis software, computing models, and high performance computing systems) to analyze the data. Taking advantage of recent advances in cyberinfrastructure, Web service, and geospatial interoperability technologies, GeoBrain, a project funded by NASA, has developed a prototype computational cyber-laboratory to effectively remove these two barriers. The cyber-laboratory makes ES data and computational resources at large organizations in distributed locations available to, and easily usable by, the Earth science community by 1) enabling seamless discovery, access, and retrieval of distributed data; 2) federating and enhancing data discovery with a catalogue federation service and a semantically-augmented catalogue service; 3) customizing data access and retrieval at user request with interoperable, personalized, and on-demand data access and services; 4) automating or semi-automating multi-source geospatial data integration; 5) developing a large number of analytic functions as value-added, interoperable, and dynamically chainable geospatial Web services and deploying them in high-performance computing facilities; 6) enabling online geospatial process modeling and execution; and 7) building a user-friendly, extensible web portal for users to access the cyber-laboratory resources.
Users can interactively discover the needed data and perform on-demand data analysis and modeling through the web portal. The GeoBrain cyber-laboratory provides solutions to meet common needs of ES research and education, such as, distributed data access and analysis services, easy access to and use of ES data, and enhanced geoprocessing and geospatial modeling capability. It greatly facilitates ES research, education, and applications. The development of the cyber-laboratory provides insights, lessons-learned, and technology readiness to build more capable computing infrastructure for ES studies, which can meet wide-range needs of current and future generations of scientists, researchers, educators, and students for their formal or informal educational training, research projects, career development, and lifelong learning.
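The "dynamically chainable" geospatial Web services described above can be pictured as composable transformations: each service consumes and produces data, and a chain is simply their composition. A minimal sketch under stated assumptions — the service names (`subset`, `reproject`), the point-list data model, and the scale-based "reprojection" are all illustrative inventions, not GeoBrain's actual API:

```python
# Toy sketch of chainable geospatial services: each service is a small
# data transformation, and a processing chain is their composition.

def subset(data, bbox):
    """Keep only points inside a bounding box (minx, miny, maxx, maxy)."""
    minx, miny, maxx, maxy = bbox
    return [(x, y, v) for (x, y, v) in data
            if minx <= x <= maxx and miny <= y <= maxy]

def reproject(data, scale):
    """Stand-in for a coordinate transform: scale the coordinates."""
    return [(x * scale, y * scale, v) for (x, y, v) in data]

def chain(*services):
    """Compose services left to right into a single callable pipeline."""
    def run(data):
        for service in services:
            data = service(data)
        return data
    return run

# Build a two-step chain: spatial subset, then a coordinate transform.
pipeline = chain(
    lambda d: subset(d, (0, 0, 10, 10)),
    lambda d: reproject(d, 2.0),
)
points = [(1, 2, 0.5), (20, 30, 0.9)]
print(pipeline(points))
```

In a real service chain the elements would be remote Web service calls rather than local functions, but the composition pattern is the same.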
ERIC Educational Resources Information Center
Texas State Technical Coll. System, Waco.
This package consists of course syllabi, an instructor's handbook, and a student laboratory manual for a 1-year vocational training program to prepare students for entry-level positions as advanced computer numerical control (CNC) and computer-assisted manufacturing (CAM) technicians. The program was developed through a modification of the DACUM…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikolic, R J
This month's issue has the following articles: (1) Dawn of a New Era of Scientific Discovery - Commentary by Edward I. Moses; (2) At the Frontiers of Fundamental Science Research - Collaborators from national laboratories, universities, and international organizations are using the National Ignition Facility to probe key fundamental science questions; (3) Livermore Responds to Crisis in Post-Earthquake Japan - More than 70 Laboratory scientists provided round-the-clock expertise in radionuclide analysis and atmospheric dispersion modeling as part of the nation's support to Japan following the March 2011 earthquake and nuclear accident; (4) A Comprehensive Resource for Modeling, Simulation, and Experiments - A new Web-based resource called MIDAS is a central repository for material properties, experimental data, and computer models; and (5) Finding Data Needles in Gigabit Haystacks - Livermore computer scientists have developed a novel computer architecture based on 'persistent' memory to ease data-intensive computations.
Atmospheric transmission computer program CP
NASA Technical Reports Server (NTRS)
Pitts, D. E.; Barnett, T. L.; Korb, C. L.; Hanby, W.; Dillinger, A. E.
1974-01-01
A computer program is described which allows for calculation of the effects of carbon dioxide, water vapor, methane, ozone, carbon monoxide, and nitrous oxide on earth resources remote sensing techniques. A flow chart of the program and operating instructions are provided. Comparisons are made between the atmospheric transmission obtained from laboratory and spacecraft spectrometer data and that obtained from a computer prediction using a model atmosphere and radiosonde data. Limitations of the model atmosphere are discussed. The computer program listings, input card formats, and sample runs for both radiosonde data and laboratory data are included.
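The per-gas treatment described above can be illustrated with a minimal sketch of the underlying Beer-Lambert relationship: each absorber contributes an optical depth, the optical depths add, and the total transmittance is the exponential of their negative sum. The gas list mirrors the abstract, but the optical-depth values below are invented for demonstration and do not come from the program:

```python
import math

# Illustrative (made-up) optical depths, dimensionless, for each
# absorbing gas along a single atmospheric path.
OPTICAL_DEPTHS = {
    "CO2": 0.10,
    "H2O": 0.30,
    "CH4": 0.02,
    "O3": 0.05,
    "CO": 0.01,
    "N2O": 0.01,
}

def transmittance(optical_depths):
    """Total transmittance under the Beer-Lambert law: each gas
    attenuates independently, so per-gas transmittances multiply,
    which is equivalent to T = exp(-sum of optical depths)."""
    total_tau = sum(optical_depths.values())
    return math.exp(-total_tau)

print(round(transmittance(OPTICAL_DEPTHS), 4))
```

A production model such as the one in the report would compute these optical depths from a model atmosphere or radiosonde profile, wavelength by wavelength, rather than take them as constants.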
ERIC Educational Resources Information Center
Texas State Technical Coll. System, Waco.
This package consists of course syllabi, an instructor's handbook, and a student laboratory manual for a 2-year vocational training program to prepare students for entry-level employment in computer-aided drafting and design in the machine tool industry. The program was developed through a modification of the DACUM (Developing a Curriculum)…
Incorporating computational resources in a cancer research program
Woods, Nicholas T.; Jhuraney, Ankita; Monteiro, Alvaro N.A.
2015-01-01
Recent technological advances have transformed cancer genetics research. These advances have served as the basis for the generation of a number of richly annotated datasets relevant to the cancer geneticist. In addition, many of these technologies are now within reach of smaller laboratories to answer specific biological questions. Thus, one of the most pressing issues facing an experimental cancer biology research program in genetics is incorporating data from multiple sources to annotate, visualize, and analyze the system under study. Fortunately, there are several computational resources to aid in this process. However, a significant effort is required to adapt a molecular biology-based research program to take advantage of these datasets. Here, we discuss the lessons learned in our laboratory and share several recommendations to make this transition effectively. This article is not meant to be a comprehensive evaluation of all the available resources, but rather to highlight those that we have incorporated into our laboratory and how to choose the most appropriate ones for your research program. PMID:25324189
New design for interfacing computers to the Octopus network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sloan, L.J.
1977-03-14
The Lawrence Livermore Laboratory has several large-scale computers which are connected to the Octopus network. Several difficulties arise in providing adequate resources along with reliable performance. To alleviate some of these problems a new method of bringing large computers into the Octopus environment is proposed.
Mixing HTC and HPC Workloads with HTCondor and Slurm
NASA Astrophysics Data System (ADS)
Hollowell, C.; Barnett, J.; Caramarcu, C.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, A.
2017-10-01
Traditionally, the RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has only maintained High Throughput Computing (HTC) resources for our HEP/NP user community. We've been using HTCondor as our batch system for many years, as this software is particularly well suited for managing HTC processor farm resources. Recently, the RACF has also begun to design and administer some High Performance Computing (HPC) systems for a multidisciplinary user community at BNL. In this paper, we discuss our experiences using HTCondor and Slurm in an HPC context, and our facility's attempts to allow our HTC and HPC processing farms/clusters to make opportunistic use of each other's computing resources.
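The opportunistic sharing idea can be sketched abstractly: whole-node HPC jobs are placed first, and any leftover nodes are backfilled with single-core HTC work until an HPC job needs them back. This is a toy scheduling policy in plain Python under stated assumptions; it is not HTCondor or Slurm configuration, and the job/node structures are invented for illustration:

```python
# Toy sketch of opportunistic sharing between an HTC pool and an HPC
# cluster: HPC jobs claim whole nodes first; idle nodes are then
# backfilled with HTC jobs.

class Node:
    def __init__(self, name):
        self.name = name
        self.running = None  # name of the current job, or None if idle

def schedule(nodes, hpc_jobs, htc_jobs):
    """Place HPC jobs first (each needs a fixed count of whole nodes),
    then backfill still-idle nodes with one HTC job apiece."""
    placements = {}
    idle = [n for n in nodes if n.running is None]
    for job in hpc_jobs:
        if len(idle) >= job["nodes"]:
            claimed, idle = idle[:job["nodes"]], idle[job["nodes"]:]
            for n in claimed:
                n.running = job["name"]
            placements[job["name"]] = [n.name for n in claimed]
    # Opportunistic backfill: HTC jobs soak up leftover capacity.
    for job, node in zip(htc_jobs, idle):
        node.running = job
        placements[job] = [node.name]
    return placements

nodes = [Node(f"hpc{i}") for i in range(4)]
p = schedule(nodes, [{"name": "mpi-run", "nodes": 3}], ["htc-a", "htc-b"])
print(p)
```

In practice the two batch systems negotiate this dynamically (with preemption when HPC demand returns); the sketch only captures the placement priority.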
Lawrence Berkeley Laboratory, Institutional Plan FY 1994--1999
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1993-09-01
The Institutional Plan provides an overview of the Lawrence Berkeley Laboratory mission, strategic plan, scientific initiatives, research programs, environment and safety program plans, educational and technology transfer efforts, human resources, and facilities needs. For FY 1994-1999 the Institutional Plan reflects significant revisions based on the Laboratory's strategic planning process. The Strategic Plan section identifies long-range conditions that will influence the Laboratory, as well as potential research trends and management implications. The Initiatives section identifies potential new research programs that represent major long-term opportunities for the Laboratory, and the resources required for their implementation. The Scientific and Technical Programs section summarizes current programs and potential changes in research program activity. The Environment, Safety, and Health section describes the management systems and programs underway at the Laboratory to protect the environment, the public, and the employees. The Technology Transfer and Education programs section describes current and planned programs to enhance the nation's scientific literacy and human infrastructure and to improve economic competitiveness. The Human Resources section identifies LBL staff diversity and development programs. The section on Site and Facilities discusses resources required to sustain and improve the physical plant and its equipment. The new section on Information Resources reflects the importance of computing and communication resources to the Laboratory. The Resource Projections are estimates of required budgetary authority for the Laboratory's ongoing research programs. The Institutional Plan is a management report for integration with the Department of Energy's strategic planning activities, developed through an annual planning process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Computing and Communications (C) Division is responsible for the Laboratory's Integrated Computing Network (ICN) as well as Laboratory-wide communications. Our computing network, used by 8,000 people distributed throughout the nation, constitutes one of the most powerful scientific computing facilities in the world. In addition to the stable production environment of the ICN, we have taken a leadership role in high-performance computing and have established the Advanced Computing Laboratory (ACL), the site of research on experimental, massively parallel computers; high-speed communication networks; distributed computing; and a broad variety of advanced applications. The computational resources available in the ACL are of the type needed to solve problems critical to national needs, the so-called "Grand Challenge" problems. The purpose of this publication is to inform our clients of our strategic and operating plans in these important areas. We review major accomplishments since late 1990 and describe our strategic planning goals and specific projects that will guide our operations over the next few years. Our mission statement, planning considerations, and management policies and practices are also included.
Jaschob, Daniel; Riffle, Michael
2012-07-30
Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
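JobCenter's client-driven design can be sketched as a polling loop: the worker initiates every exchange with the server, so it can run behind a firewall with no inbound connections, and each worker only asks for job types it is configured to handle. The in-memory "server", job format, and method names below are illustrative assumptions for the sketch, not JobCenter's actual protocol or API:

```python
from collections import deque

class Server:
    """Stands in for the job server: hands out queued jobs and records
    results that workers report back."""
    def __init__(self, jobs):
        self.queue = deque(jobs)
        self.results = {}

    def next_job(self, job_types):
        # Hand out the first queued job this worker can run; skip (and
        # requeue) jobs of other types.
        for _ in range(len(self.queue)):
            job = self.queue.popleft()
            if job["type"] in job_types:
                return job
            self.queue.append(job)
        return None

    def report(self, job_id, result):
        self.results[job_id] = result

def worker_loop(server, job_types, handlers):
    """A worker repeatedly pulls jobs it can handle and reports results.
    Every call is initiated by the worker (client-driven), which is what
    lets workers run behind firewalls or in the cloud."""
    while (job := server.next_job(job_types)) is not None:
        server.report(job["id"], handlers[job["type"]](job["args"]))

server = Server([
    {"id": 1, "type": "blast", "args": "seq1"},
    {"id": 2, "type": "align", "args": "seq2"},
])
# This worker is configured to run only "align" jobs.
worker_loop(server, {"align"}, {"align": lambda a: f"aligned:{a}"})
print(server.results)
```

Because each worker pulls only what it can process, load balancing falls out of the pattern for free: faster or more numerous workers simply poll more often.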
Globus Quick Start Guide. Globus Software Version 1.1
NASA Technical Reports Server (NTRS)
1999-01-01
The Globus Project is a community effort, led by Argonne National Laboratory and the University of Southern California's Information Sciences Institute. Globus is developing the basic software infrastructure for computations that integrate geographically distributed computational and information resources.
Final Report National Laboratory Professional Development Workshop for Underrepresented Participants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Valerie
The 2013 CMD-IT National Laboratories Professional Development Workshop for Underrepresented Participants (CMD-IT NLPDev 2013) was held at the Oak Ridge National Laboratory campus in Oak Ridge, TN, from June 13-14, 2013. Sponsored by the Department of Energy (DOE) Advanced Scientific Computing Research Program, the primary goal of these workshops is to provide information about career opportunities in computational science at the various national laboratories and to mentor the underrepresented participants through community building and expert presentations focused on career success. This second annual workshop offered sessions to facilitate career advancement and, in particular, the strategies and resources needed to be successful at the national laboratories.
Cyber-workstation for computational neuroscience.
Digiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C; Fortes, Jose; Sanchez, Justin C
2010-01-01
A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.
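The block-diagram specification described in this abstract can be sketched as a small dependency-driven executor: the scientist declares named blocks and their input connections, and the framework runs each block once its inputs are available. The block names and the stand-in "decoder" below are illustrative assumptions, not the CW's actual middleware interface:

```python
# Toy sketch of block-diagram execution: blocks declare named inputs,
# and the runner executes them in dependency order.

class Block:
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, list(inputs)

def run_diagram(blocks, sources):
    """Execute each block once all of its inputs exist (a simple
    repeated-pass topological evaluation of the diagram)."""
    values = dict(sources)  # seed with external source signals
    pending = list(blocks)
    while pending:
        ready = [b for b in pending if all(i in values for i in b.inputs)]
        if not ready:
            raise ValueError("cycle or missing input in diagram")
        for b in ready:
            values[b.name] = b.func(*(values[i] for i in b.inputs))
            pending.remove(b)
    return values

# Wire a two-block diagram: a filter feeding a stand-in "decoder".
diagram = [
    Block("filtered", lambda x: [v * 0.5 for v in x], ["spikes"]),
    Block("decoded", lambda f: sum(f), ["filtered"]),
]
out = run_diagram(diagram, {"spikes": [2.0, 4.0]})
print(out["decoded"])
```

In the CW, the middleware additionally dispatches each block to remote grid-computing hardware under real-time constraints; the sketch only captures the wiring-and-ordering idea.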
Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud
Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew
2015-01-01
Background Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. Results We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. Conclusions This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. 
We discuss scope, design considerations and technical and logistical constraints, and explore the value added to the research community through the suite of services and resources provided by our implementation. PMID:26501966
JSC earth resources data analysis capabilities available to EOD, revision B
NASA Technical Reports Server (NTRS)
1974-01-01
A list and summary description of all Johnson Space Center electronic laboratory and photographic laboratory capabilities available to Earth Resources Division personnel for processing earth resources data are provided. The electronic capabilities pertain to those facilities and systems that use electronic and/or photographic products as output. The photographic capabilities pertain to equipment that uses photographic images as input and produces electronic and/or photographic products as output; a table summarizes the processing steps. A general hardware description is presented for each of the data processing systems, and the titles of computer programs are used to identify the capabilities and data flow.
The Technology Information Environment with Industry{trademark} system description
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detry, R.; Machin, G.
The Technology Information Environment with Industry (TIE-In{trademark}) provides users with controlled access to distributed laboratory resources that are packaged in intelligent user interfaces. These interfaces help users access resources without requiring the user to have technical or computer expertise. TIE-In utilizes existing, proven technologies such as the Kerberos authentication system, X-Windows, and UNIX sockets. A Front End System (FES) authenticates users and allows them to register for resources and subsequently access them. The FES also stores status and accounting information, and provides an automated method for the resource owners to recover costs from users. The resources available through TIE-In are typically laboratory-developed applications that are used to help design, analyze, and test components in the nation's nuclear stockpile. Many of these applications can also be used by US companies for non-weapons-related work. TIE-In allows these industry partners to obtain laboratory-developed technical solutions without requiring them to duplicate the technical resources (people, hardware, and software) at Sandia.
2012-01-01
Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423
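The client-driven design described in the JobCenter abstract (workers issue outbound requests to pull jobs, so they can run behind firewalls or "in the cloud", and load balancing falls out naturally) can be sketched in a few lines. This is an illustrative Python sketch, not JobCenter's actual Java API; all class and function names here are hypothetical.

```python
import time

class JobServer:
    """Toy in-memory job queue; stands in for the real JobCenter server."""
    def __init__(self, jobs):
        self.pending = list(jobs)   # (job_id, payload) pairs
        self.results = {}

    def next_job(self):
        # Client-driven: the server never contacts workers directly;
        # it only answers their pull requests.
        return self.pending.pop(0) if self.pending else None

    def report(self, job_id, result):
        self.results[job_id] = result

def worker_loop(server, run, polls=10, delay=0.0):
    """Poll for work; outbound-only requests let a worker sit behind a firewall.
    Adding capacity is just starting another worker_loop elsewhere."""
    for _ in range(polls):
        job = server.next_job()
        if job is None:
            time.sleep(delay)
            continue
        job_id, payload = job
        server.report(job_id, run(payload))

server = JobServer([(1, 3), (2, 4)])
worker_loop(server, run=lambda x: x * x)
print(server.results)  # {1: 9, 2: 16}
```

Because each worker pulls at its own pace, a slow node simply takes fewer jobs; that is the inherent load balancing the abstract refers to.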
Evaluation of hydrothermal resources of North Dakota. Phase II. Final technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, K.L.; Howell, F.L.; Winczewski, L.M.
1981-06-01
This evaluation of the hydrothermal resources of North Dakota is based on existing data on file with the North Dakota Geological Survey (NDGS) and other state and federal agencies, and on field and laboratory studies. The principal sources of data used during the Phase II study were WELLFILE, the computer library of oil and gas well data developed during the Phase I study, and WATERCAT, a computer library system of water well data assembled during the Phase II study. A field survey of the shallow geothermal gradients present in selected groundwater observation holes was conducted. Laboratory determinations of the thermal conductivity of core samples are being done to facilitate heat-flow calculations on the cased holes-of-convenience.
Evaluation of hydrothermal resources of North Dakota. Phase III final technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, K.L.; Howell, F.L.; Wartman, B.L.
1982-08-01
The hydrothermal resources of North Dakota were evaluated. This evaluation was based on existing data on file with the North Dakota Geological Survey (NDGS) and other state and federal agencies, and on field and laboratory studies. The principal sources of data used during the study were WELLFILE, the computer library of oil and gas well data developed during the Phase I study, and WATERCAT, a computer library system of water well data assembled during the Phase II study. A field survey of the shallow geothermal gradients present in selected groundwater observation holes was conducted. Laboratory determinations of the thermal conductivity of core samples were done to facilitate heat-flow calculations on the cased holes-of-convenience.
ERIC Educational Resources Information Center
Rodrigues, Ricardo P.; Andrade, Saulo F.; Mantoani, Susimaire P.; Eifler-Lima, Vera L.; Silva, Vinicius B.; Kawano, Daniel F.
2015-01-01
Advances in, and dissemination of, computer technologies in the field of drug research now enable the use of molecular modeling tools to teach important concepts of drug design to chemistry and pharmacy students. A series of computer laboratories is described to introduce undergraduate students to commonly adopted "in silico" drug design…
ERIC Educational Resources Information Center
Caminero, Agustín C.; Ros, Salvador; Hernández, Roberto; Robles-Gómez, Antonio; Tobarra, Llanos; Tolbaños Granjo, Pedro J.
2016-01-01
The use of practical laboratories is a key in engineering education in order to provide our students with the resources needed to acquire practical skills. This is specially true in the case of distance education, where no physical interactions between lecturers and students take place, so virtual or remote laboratories must be used. UNED has…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Computing and Communications (C) Division is responsible for the Laboratory's Integrated Computing Network (ICN) as well as Laboratory-wide communications. Our computing network, used by 8,000 people distributed throughout the nation, constitutes one of the most powerful scientific computing facilities in the world. In addition to the stable production environment of the ICN, we have taken a leadership role in high-performance computing and have established the Advanced Computing Laboratory (ACL), the site of research on experimental, massively parallel computers; high-speed communication networks; distributed computing; and a broad variety of advanced applications. The computational resources available in the ACL are of the type needed to solve problems critical to national needs, the so-called "Grand Challenge" problems. The purpose of this publication is to inform our clients of our strategic and operating plans in these important areas. We review major accomplishments since late 1990 and describe our strategic planning goals and specific projects that will guide our operations over the next few years. Our mission statement, planning considerations, and management policies and practices are also included.
"TIS": An Intelligent Gateway Computer for Information and Modeling Networks. Overview.
ERIC Educational Resources Information Center
Hampel, Viktor E.; And Others
TIS (Technology Information System) is being used at the Lawrence Livermore National Laboratory (LLNL) to develop software for Intelligent Gateway Computers (IGC) suitable for the prototyping of advanced, integrated information networks. Dedicated to information management, TIS leads the user to available information resources, on TIS or…
SOCR: Statistics Online Computational Resource
ERIC Educational Resources Information Center
Dinov, Ivo D.
2006-01-01
The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an…
NASA Tech Briefs, November/December 1986, Special Edition
NASA Technical Reports Server (NTRS)
1986-01-01
Topics: Computing: The View from NASA Headquarters; Earth Resources Laboratory Applications Software: Versatile Tool for Data Analysis; The Hypercube: Cost-Effective Supercomputing; Artificial Intelligence: Rendezvous with NASA; NASA's Ada Connection; COSMIC: NASA's Software Treasurehouse; Golden Oldies: Tried and True NASA Software; Computer Technical Briefs; NASA TU Services; Digital Fly-by-Wire.
Halligan, Brian D.; Geiger, Joey F.; Vallejos, Andrew K.; Greene, Andrew S.; Twigger, Simon N.
2009-01-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of currently available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center website (http://proteomics.mcw.edu/vipdac). PMID:19358578
Strategies for combining physics videos and virtual laboratories in the training of physics teachers
NASA Astrophysics Data System (ADS)
Dickman, Adriana; Vertchenko, Lev; Martins, Maria Inés
2007-03-01
Among the multimedia resources used in physics education, the most prominent are virtual laboratories and videos. On one hand, computer simulations and applets have very attractive graphic interfaces, showing an incredible amount of detail and movement. On the other hand, videos offer the possibility of displaying high-quality images and are becoming more feasible with the increasing availability of digital resources. We believe it is important to discuss, throughout the teacher training program, both the functionality of information and communication technology (ICT) in physics education and the varied applications of these resources. In our work we suggest introducing ICT resources in a sequence that integrates these important tools into the teacher training program, as opposed to the traditional approach, in which virtual laboratories and videos are introduced separately. In this perspective, when we introduce and utilize virtual laboratory techniques we also provide for their use in videos, taking advantage of graphic interfaces. Thus the students in our program learn to use instructional software in the production of videos for classroom use.
NASA Tech Briefs, May 1995. Volume 19, No. 5
NASA Technical Reports Server (NTRS)
1995-01-01
This issue features a resource report on the Jet Propulsion Laboratory and a special focus on advanced composites and plastics. It also contains articles on electronic components and circuits, electronic systems, physical sciences, computer programs, mechanics, machinery, manufacturing and fabrication, mathematics and information sciences, and life sciences. This issue also contains a supplement on federal laboratory test and measurement.
Eppig, Janan T
2017-07-01
The Mouse Genome Informatics (MGI) Resource supports basic, translational, and computational research by providing high-quality, integrated data on the genetics, genomics, and biology of the laboratory mouse. MGI serves a strategic role for the scientific community in facilitating biomedical, experimental, and computational studies investigating the genetics and processes of diseases and enabling the development and testing of new disease models and therapeutic interventions. This review describes the nexus of the body of growing genetic and biological data and the advances in computer technology in the late 1980s, including the World Wide Web, that together launched the beginnings of MGI. MGI develops and maintains a gold-standard resource that reflects the current state of knowledge, provides semantic and contextual data integration that fosters hypothesis testing, continually develops new and improved tools for searching and analysis, and partners with the scientific community to assure research data needs are met. Here we describe one slice of MGI relating to the development of community-wide large-scale mutagenesis and phenotyping projects and introduce ways to access and use these MGI data. References and links to additional MGI aspects are provided. © The Author 2017. Published by Oxford University Press.
Merging the Machines of Modern Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Laura; Collins, Jim
Two recent projects have harnessed supercomputing resources at the US Department of Energy’s Argonne National Laboratory in a novel way to support major fusion science and particle collider experiments. Using leadership computing resources, one team ran fine-grid analysis of real-time data to make near-real-time adjustments to an ongoing experiment, while a second team is working to integrate Argonne’s supercomputers into the Large Hadron Collider/ATLAS workflow. Together these efforts represent a new paradigm of the high-performance computing center as a partner in experimental science.
Computer Software Management and Information Center
NASA Technical Reports Server (NTRS)
1983-01-01
Computer programs for passive anti-roll tank, earth resources laboratory applications, the NIMBUS-7 coastal zone color scanner derived products, transportable applications executive, plastic and failure analysis of composites, velocity gradient method for calculating velocities in an axisymmetric annular duct, an integrated procurement management system, Data I/O PROM for the Motorola EXORciser, aerodynamic shock-layer shape, kinematic modeling, hardware library for a graphics computer, and a file archival system are documented.
The use of computers to teach human anatomy and physiology to allied health and nursing students
NASA Astrophysics Data System (ADS)
Bergeron, Valerie J.
Educational institutions are under tremendous pressure to adopt the newest technologies in order to prepare their students to meet the challenges of the twenty-first century. For the last twenty years huge amounts of money have been spent on computers, printers, software, multimedia projection equipment, and so forth. A reasonable question is, "Has it worked?" Has this infusion of resources, financial as well as human, resulted in improved learning? Are the students meeting the intended learning goals? Any attempt to develop answers to these questions should include examining the intended goals and exploring the effects of the changes on students and faculty. This project investigated the impact of a specific application of a computer program in a community college setting on students' attitudes toward, and understanding of, human anatomy and physiology. In this investigation two sites of the same community college, seven miles apart and with seemingly similar student populations, used different laboratory activities to teach human anatomy and physiology. At one site nursing students were taught using traditional dissections and laboratory activities; at the other site two of the dissections, specifically cat and sheep pluck, were replaced with the A.D.A.M.RTM (Animated Dissection of Anatomy for Medicine) computer program. Analysis of the attitude data indicated that students at both sites were extremely positive about their laboratory experiences. Analysis of the content data indicated a statistically significant difference in performance between the two sites in two of the eight content areas that were studied. For both topics the students using the computer program scored higher. A detailed analysis of the surveys, interviews with faculty and students, examination of laboratory materials, observations of laboratory facilities at both sites, and a cost-benefit analysis led to the development of seven recommendations.
The recommendations call for action at the level of the institution requiring investment in additional resources, and at the level of the faculty requiring a commitment to exploration and reflective practice.
NASA Astrophysics Data System (ADS)
Lehman, Donald Clifford
Today's medical laboratories are dealing with cost-containment health care policies and unfilled laboratory positions. Because there may be fewer experienced clinical laboratory scientists, students graduating from clinical laboratory science (CLS) programs are expected by their employers to perform accurately in entry-level positions with minimal training. Information in the CLS field is increasing at a dramatic rate, and instructors are expected to teach more content in the same amount of time with the same resources. With this increase in teaching obligations, instructors could use a tool to facilitate grading. The research question was, "Can computer-assisted assessment evaluate students in an accurate and time-efficient way?" A computer program was developed to assess CLS students' ability to evaluate peripheral blood smears. Automated grading lets students get results more quickly and allows the laboratory instructor to devote less time to grading. This computer program could improve instruction by providing more time to students and instructors for other activities. To be valuable, the program should provide the same quality of grading as the instructor. These benefits must outweigh potential problems such as the time necessary to develop and maintain the program, monitoring of student progress by the instructor, and the financial cost of the computer software and hardware. In this study, surveys of students and an interview with the laboratory instructor were performed to provide a formative evaluation of the computer program. In addition, the grading accuracy of the computer program was examined. These results will be used to improve the program for use in future courses.
LINCS: Livermore's network architecture. [Octopus computing network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fletcher, J.G.
1982-01-01
Octopus, a local computing network that has been evolving at the Lawrence Livermore National Laboratory for over fifteen years, is currently undergoing a major revision. The primary purpose of the revision is to consolidate and redefine the variety of conventions and formats, which have grown up over the years, into a single standard family of protocols, the Livermore Interactive Network Communication Standard (LINCS). This standard treats the entire network as a single distributed operating system such that access to a computing resource is obtained in a single way, whether that resource is local (on the same computer as the accessing process) or remote (on another computer). LINCS encompasses not only communication but also such issues as the relationship of customer to server processes and the structure, naming, and protection of resources. The discussion includes an overview of the Livermore user community and computing hardware; the functions and structure of each of the seven layers of LINCS protocol; and the reasons why we have designed our own protocols and why we are dissatisfied with the directions that current protocol standards are taking.
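The location transparency that LINCS describes (a resource is accessed the same way whether it lives on the local machine or on another computer in the network) can be illustrated with a toy dispatcher. The class and registry names below are hypothetical Python stand-ins, not part of LINCS or Octopus.

```python
class LocalResource:
    """A resource on the same computer as the accessing process."""
    def read(self):
        return "local data"

class RemoteResource:
    """On a real network this would speak the wire protocol; stubbed here."""
    def __init__(self, host):
        self.host = host
    def read(self):
        return f"data from {self.host}"

def open_resource(name, registry):
    # The caller never distinguishes local from remote:
    # one naming scheme, one access path.
    return registry[name]

registry = {
    "tape0": LocalResource(),
    "cray1:/scratch": RemoteResource("cray1"),
}
print(open_resource("tape0", registry).read())           # local data
print(open_resource("cray1:/scratch", registry).read())  # data from cray1
```

The point of the sketch is that only the registry knows where a resource lives; the accessing process sees a single interface, which is the "single distributed operating system" view the abstract describes.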
Computational Science at the Argonne Leadership Computing Facility
NASA Astrophysics Data System (ADS)
Romero, Nichols
2014-03-01
The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.
A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.
Moretti, Loris; Sartori, Luca
2016-10-01
Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed, but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed; all aspects are covered, from general layout to technical details. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Computing Properties of Hadrons, Nuclei and Nuclear Matter from Quantum Chromodynamics (LQCD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Negele, John W.
Building on the success of two preceding generations of Scientific Discovery through Advanced Computing (SciDAC) projects, this grant supported the MIT component (P.I. John Negele) of a multi-institutional SciDAC-3 project that also included Brookhaven National Laboratory, the lead laboratory, with P.I. Frithjof Karsch serving as Project Director; Thomas Jefferson National Accelerator Facility, with P.I. David Richards serving as Co-director; University of Washington, with P.I. Martin Savage; University of North Carolina, with P.I. Rob Fowler; and College of William and Mary, with P.I. Andreas Stathopoulos. Nationally, this multi-institutional project coordinated the software development effort that the nuclear physics lattice QCD community needs to ensure that lattice calculations can make optimal use of forthcoming leadership-class and dedicated hardware, including that at the national laboratories, and to exploit future computational resources in the Exascale era.
Coal and Open-pit surface mining impacts on American Lands (COAL)
NASA Astrophysics Data System (ADS)
Brown, T. A.; McGibbney, L. J.
2017-12-01
Mining is known to cause environmental degradation, but software tools to identify its impacts are lacking. However, remote sensing, spectral reflectance, and geographic data are readily available, and high-performance cloud computing resources exist for scientific research. Coal and Open-pit surface mining impacts on American Lands (COAL) provides a suite of algorithms and documentation to leverage these data and resources to identify evidence of mining and correlate it with environmental impacts over time. COAL was originally developed as a 2016-2017 senior capstone collaboration between scientists at the NASA Jet Propulsion Laboratory (JPL) and computer science students at Oregon State University (OSU). The COAL team implemented a free and open-source software library called "pycoal" in the Python programming language, which facilitated a case study of the effects of coal mining on water resources. Evidence of acid mine drainage associated with an open-pit coal mine in New Mexico was derived by correlating imaging spectrometer data from the JPL Airborne Visible/InfraRed Imaging Spectrometer - Next Generation (AVIRIS-NG), spectral reflectance data published by the USGS Spectroscopy Laboratory in the USGS Digital Spectral Library 06, and GIS hydrography data published by the USGS National Geospatial Program in The National Map. This case study indicated that the spectral and geospatial algorithms developed by COAL can be used successfully to analyze the environmental impacts of mining activities. Continued development of COAL has been promoted by a Startup allocation award of high-performance computing resources from the Extreme Science and Engineering Discovery Environment (XSEDE). These resources allow the team to undertake further benchmarking, evaluation, and experimentation using multiple XSEDE resources.
The opportunity to use computational infrastructure of this caliber will further enable the development of a science gateway to continue foundational COAL research. This work documents the original design and development of COAL and provides insight into continuing research efforts, which have potential applications beyond the project to environmental data science and other fields.
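One common way to match a measured pixel spectrum against library reflectance spectra, as in the COAL case study correlating AVIRIS-NG data with the USGS spectral library, is the spectral angle: the smaller the angle between two spectra, the better the match. The sketch below assumes plain Python lists of band reflectances and toy two-band spectra; it is not pycoal's actual API, and the mineral names are illustrative.

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two reflectance spectra; smaller = closer match.
    Using the angle makes the comparison insensitive to overall brightness."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def classify(pixel, library):
    """Return the library entry whose spectrum is closest in angle to the pixel."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Toy two-band "spectra"; real AVIRIS-NG pixels have hundreds of bands.
library = {"jarosite": [0.2, 0.6], "kaolinite": [0.7, 0.3]}
print(classify([0.25, 0.7], library))  # jarosite
```

A per-pixel classification of this kind over a whole scene, followed by a spatial join against hydrography layers, is the shape of the correlation the abstract describes.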
CFD in design - A government perspective
NASA Technical Reports Server (NTRS)
Kutler, Paul; Gross, Anthony R.
1989-01-01
Some of the research programs involving the use of CFD in the aerodynamic design process at government laboratories around the United States are presented. Technology transfer issues and future directions in the discipline of CFD are addressed. The major challenges in the aerosciences, as well as in other disciplines that will require high-performance computing resources such as massively parallel computers, are examined.
Publications of the Jet Propulsion Laboratory 1982
NASA Technical Reports Server (NTRS)
1983-01-01
A bibliography of articles concerning topics on the deep space network, data acquisition, telecommunication, and related aerospace studies is presented. A sample of the diverse subjects includes solar energy, remote sensing, computer science, Earth resources, astronomy, and satellite communication.
Editorial comment on Malkin and Keane (2010).
Voigt, Herbert F; Krishnan, Shankar M
2010-07-01
Malkin and Keane (Med Biol Eng Comput, 2010) take an innovative approach to determine if unused, broken medical and laboratory equipment could be repaired by volunteers with limited resources. Their positive results led them to suggest that resource-poor countries might benefit from an on-the-job educational program for local high school graduates. The program would train biomedical technician assistants (BTAs) who would repair medical devices and instrumentation and return them to service. This is a program worth pursuing in resource-poor countries.
NASA Astrophysics Data System (ADS)
Knosp, B.; Neely, S.; Zimdars, P.; Mills, B.; Vance, N.
2007-12-01
The Microwave Limb Sounder (MLS) Science Computing Facility (SCF) stores over 50 terabytes of data, has over 240 computer processing hosts, and 64 users from around the world. These resources are spread over three primary geographical locations - the Jet Propulsion Laboratory (JPL), Raytheon RIS, and New Mexico Institute of Mining and Technology (NMT). A need for a grid network system was identified and defined to solve the problem of users competing for finite, and increasingly scarce, MLS SCF computing resources. Using Sun's Grid Engine software, a grid network was successfully created in a development environment that connected the JPL and Raytheon sites, established master and slave hosts, and demonstrated that transfer queues for jobs can work among multiple clusters in the same grid network. This poster will first describe MLS SCF resources and the lessons that were learned in the design and development phase of this project. It will then go on to discuss the test environment and plans for deployment by highlighting benchmarks and user experiences.
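A transfer queue of the kind established between the JPL and Raytheon clusters forwards jobs that cannot start locally to a peer cluster in the same grid. A greedy stand-in for that behavior, with hypothetical cluster names and slot counts (not Grid Engine's actual interface), might look like:

```python
def dispatch(jobs, clusters):
    """Greedy stand-in for a transfer queue: fill local slots first,
    then forward overflow jobs to the next cluster in the grid.
    `clusters` maps cluster name -> free slots, in preference order."""
    placement = {}
    free = dict(clusters)
    for job in jobs:
        target = next((c for c in free if free[c] > 0), None)
        if target is None:
            placement[job] = "queued"   # no slots anywhere: wait
        else:
            free[target] -= 1
            placement[job] = target
    return placement

# JPL hosts fill first; Raytheon takes the overflow.
print(dispatch(["j1", "j2", "j3"], {"jpl": 2, "raytheon": 1}))
# {'j1': 'jpl', 'j2': 'jpl', 'j3': 'raytheon'}
```

In a real Grid Engine deployment the master host makes this decision using load reports from execution hosts rather than static slot counts, but the overflow-to-peer logic is the essence of a transfer queue.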
Advanced Technology System Scheduling Governance Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ang, Jim; Carnes, Brian; Hoang, Thuc
In the fall of 2005, the Advanced Simulation and Computing (ASC) Program appointed a team to formulate a governance model for allocating resources and scheduling the stockpile stewardship workload on ASC capability systems. This update to the original document takes into account the new technical challenges and roles for advanced technology (AT) systems and the new ASC Program workload categories that must be supported. The goal of this updated model is to effectively allocate and schedule AT computing resources among all three National Nuclear Security Administration (NNSA) laboratories for weapons deliverables that merit priority on this class of resource. The process outlined below describes how proposed work can be evaluated and approved for resource allocations while preserving high effective utilization of the systems. This approach will provide the broadest possible benefit to the Stockpile Stewardship Program (SSP).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, Felicia Angelica; Waymire, Russell L.
2013-10-01
Sandia National Laboratories (SNL) is providing training and consultation activities on security planning and design for the Korea Hydro and Nuclear Power Central Research Institute (KHNP-CRI). As part of this effort, SNL performed a literature review on computer security requirements, guidance, and best practices that are applicable to an advanced nuclear power plant. This report documents the review of reports generated by SNL and other organizations [U.S. Nuclear Regulatory Commission, Nuclear Energy Institute, and International Atomic Energy Agency] related to protection of information technology resources, primarily digital controls and computer resources and their data networks. Copies of the key documents have also been provided to KHNP-CRI.
Computational analysis of Ebolavirus data: prospects, promises and challenges.
Michaelis, Martin; Rossman, Jeremy S; Wass, Mark N
2016-08-15
The ongoing Ebola virus (also known as Zaire ebolavirus, a member of the genus Ebolavirus) outbreak in West Africa has so far resulted in >28,000 confirmed cases, compared with previous Ebolavirus outbreaks that affected a maximum of a few hundred individuals. Hence, Ebolaviruses impose a much greater threat than we may have expected (or hoped). An improved understanding of the virus biology is essential to develop therapeutic and preventive measures and to be better prepared for future outbreaks by members of the genus Ebolavirus. Computational investigations can complement wet laboratory research for biosafety level 4 pathogens such as Ebolaviruses, for which the wet experimental capacities are limited due to a small number of appropriate containment laboratories. During the current West Africa outbreak, sequence data from many Ebola virus genomes became available, providing a rich resource for computational analysis. Here, we consider the studies that have already reported on the computational analysis of these data. A range of properties have been investigated, including Ebolavirus evolution and pathogenicity, prediction of microRNAs, and identification of Ebolavirus-specific signatures. However, the accuracy of the results remains to be confirmed by wet laboratory experiments. Therefore, communication and exchange between computational and wet laboratory researchers is necessary to make maximum use of computational analyses and to iteratively improve these approaches. © 2016 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.
Studying the Earth's Environment from Space: Computer Laboratory Exercises and Instructor Resources
NASA Technical Reports Server (NTRS)
Smith, Elizabeth A.; Alfultis, Michael
1998-01-01
Studying the Earth's Environment From Space is a two-year project to develop a suite of CD-ROMs containing Earth System Science curriculum modules for introductory undergraduate science classes. Lecture notes, slides, and computer laboratory exercises, including actual satellite data and software, are being developed in close collaboration with Carla Evans of NASA GSFC Earth Sciences Directorate Scientific and Educational Endeavors (SEE) project. Smith and Alfultis are responsible for the Oceanography and Sea Ice Processes Modules. The GSFC SEE project is responsible for Ozone and Land Vegetation Modules. This document constitutes a report on the first year of activities of Smith and Alfultis' project.
Researchers Mine Information from Next-Generation Subsurface Flow Simulations
Gedenk, Eric D.
2015-12-01
A research team based at Virginia Tech University leveraged computing resources at the US Department of Energy's (DOE's) Oak Ridge National Laboratory to explore subsurface multiphase flow phenomena that can't be experimentally observed. Using the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility, the team took Micro-CT images of subsurface geologic systems and created two-phase flow simulations. The team's model development has implications for computational research pertaining to carbon sequestration, oil recovery, and contaminant transport.
Known structure, unknown function: An inquiry-based undergraduate biochemistry laboratory course.
Gray, Cynthia; Price, Carol W; Lee, Christopher T; Dewald, Alison H; Cline, Matthew A; McAnany, Charles E; Columbus, Linda; Mura, Cameron
2015-01-01
Undergraduate biochemistry laboratory courses often do not provide students with an authentic research experience, particularly when the express purpose of the laboratory is purely instructional. However, an instructional laboratory course that is inquiry- and research-based could simultaneously impart scientific knowledge and foster a student's research expertise and confidence. We have developed a year-long undergraduate biochemistry laboratory curriculum wherein students determine, via experiment and computation, the function of a protein of known three-dimensional structure. The first half of the course is inquiry-based and modular in design; students learn general biochemical techniques while gaining preparation for research experiments in the second semester. Having learned standard biochemical methods in the first semester, students independently pursue their own (original) research projects in the second semester. This new curriculum has yielded an improvement in student performance and confidence as assessed by various metrics. To disseminate teaching resources to students and instructors alike, a freely accessible Biochemistry Laboratory Education resource is available at http://biochemlab.org. © 2015 The Authors Biochemistry and Molecular Biology Education published by Wiley Periodicals, Inc. on behalf of International Union of Biochemistry and Molecular Biology.
A remote laboratory for USRP-based software defined radio
NASA Astrophysics Data System (ADS)
Gandhinagar Ekanthappa, Rudresh; Escobar, Rodrigo; Matevossian, Achot; Akopian, David
2014-02-01
Electrical and computer engineering graduates need practical working skills with real-world electronic devices, which are addressed to some extent by hands-on laboratories. Deployment capacity of hands-on laboratories is typically constrained by insufficient equipment availability, facility shortages, and a lack of human resources for in-class support and maintenance. At the same time, at many sites, existing experimental systems are underutilized due to class scheduling bottlenecks. Nowadays, online education is gaining popularity, and remote laboratories have been suggested to broaden access to experimentation resources. Remote laboratories resolve many problems: various costs can be shared, and student access to instrumentation is facilitated in terms of access times and locations. Labs are converted to homework assignments that can be done without physical presence in a laboratory. Even though they do not provide the full hands-on experience, remote labs are a viable alternative for underserved educational sites. This paper studies the remote modality of USRP-based radio-communication labs offered by National Instruments (NI). The labs are offered to graduate and undergraduate students, and tentative assessments support the feasibility of remote deployments.
Polynomial approximation of non-Gaussian unitaries by counting one photon at a time
NASA Astrophysics Data System (ADS)
Arzani, Francesco; Treps, Nicolas; Ferrini, Giulia
2017-05-01
In quantum computation with continuous-variable systems, quantum advantage can only be achieved if some non-Gaussian resource is available. Yet, non-Gaussian unitary evolutions and measurements suited for computation are challenging to realize in the laboratory. We propose and analyze two methods to apply a polynomial approximation of any unitary operator diagonal in the amplitude quadrature representation, including non-Gaussian operators, to an unknown input state. Our protocols use as a primary non-Gaussian resource a single-photon counter. We use the fidelity of the transformation with the target one on Fock and coherent states to assess the quality of the approximate gate.
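Schematically, and not in the authors' exact notation: a unitary that is diagonal in the amplitude quadrature acts on quadrature eigenstates as a pure phase, and the protocols apply a truncated polynomial approximation of that phase profile:

```latex
% U is diagonal in the amplitude-quadrature basis:
U \,|x\rangle = e^{i\phi(x)}\,|x\rangle ,
\qquad
e^{i\phi(\hat{x})} \;\approx\; \sum_{k=0}^{N} c_k\, \hat{x}^{\,k},
```

with the coefficients $c_k$ fixed by the polynomial approximation of $e^{i\phi(x)}$ over the region where the input state has appreciable support.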
A Distributed Laboratory for Event-Driven Coastal Prediction and Hazard Planning
NASA Astrophysics Data System (ADS)
Bogden, P.; Allen, G.; MacLaren, J.; Creager, G. J.; Flournoy, L.; Sheng, Y. P.; Graber, H.; Graves, S.; Conover, H.; Luettich, R.; Perrie, W.; Ramakrishnan, L.; Reed, D. A.; Wang, H. V.
2006-12-01
The 2005 Atlantic hurricane season was the most active in recorded history. Collectively, 2005 hurricanes caused more than 2,280 deaths and record damages of over 100 billion dollars. Of the storms that made landfall, Dennis, Emily, Katrina, Rita, and Wilma caused most of the destruction. Accurate predictions of storm-driven surge, wave height, and inundation can save lives and help keep recovery costs down, provided the information gets to emergency response managers in time. The information must be available well in advance of landfall so that responders can weigh the costs of unnecessary evacuation against the costs of inadequate preparation. The SURA Coastal Ocean Observing and Prediction (SCOOP) Program is a multi-institution collaboration implementing a modular, distributed service-oriented architecture for real time prediction and visualization of the impacts of extreme atmospheric events. The modular infrastructure enables real-time prediction of multi-scale, multi-model, dynamic, data-driven applications. SURA institutions are working together to create a virtual and distributed laboratory integrating coastal models, simulation data, and observations with computational resources and high speed networks. The loosely coupled architecture allows teams of computer and coastal scientists at multiple institutions to innovate complex system components that are interconnected with relatively stable interfaces. The operational system standardizes at the interface level to enable substantial innovation by complementary communities of coastal and computer scientists. This architectural philosophy solves a long-standing problem associated with the transition from research to operations. The SCOOP Program thereby implements a prototype laboratory consistent with the vision of a national, multi-agency initiative called the Integrated Ocean Observing System (IOOS).
Several service- oriented components of the SCOOP enterprise architecture have already been designed and implemented, including data archive and transport services, metadata registry and retrieval (catalog), resource management, and portal interfaces. SCOOP partners are integrating these at the service level and implementing reconfigurable workflows for several kinds of user scenarios, and are working with resource providers to prototype new policies and technologies for on-demand computing.
U.S. hydropower resource assessment for Idaho
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conner, A.M.; Francfort, J.E.
1998-08-01
The US Department of Energy is developing an estimate of the undeveloped hydropower potential in the US. The Hydropower Evaluation Software (HES) is a computer model that was developed by the Idaho National Engineering and Environmental Laboratory for this purpose. HES measures the undeveloped hydropower resources available in the US, using uniform criteria for measurement. The software was developed and tested using hydropower information and data provided by the Southwestern Power Administration. It is a menu-driven program that allows the personal computer user to assign environmental attributes to potential hydropower sites, calculate development suitability factors for each site based on the environmental attributes present, and generate reports based on these suitability factors. This report describes the resource assessment results for the State of Idaho.
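The attribute-to-suitability idea the abstract describes can be sketched as follows; the attribute names and factor values below are invented for illustration, not HES's actual tables:

```python
# Illustrative sketch (not the actual HES algorithm): each environmental
# attribute present at a site contributes a suitability factor in (0, 1],
# and the site's overall development suitability is taken here to be the
# product of the factors for its attributes.
ATTRIBUTE_FACTORS = {            # hypothetical weights
    "wild_scenic_river": 0.25,
    "threatened_species": 0.50,
    "cultural_site": 0.75,
}

def suitability(site_attributes):
    """Overall suitability factor for a site; 1.0 means unconstrained."""
    factor = 1.0
    for attr in site_attributes:
        factor *= ATTRIBUTE_FACTORS.get(attr, 1.0)
    return factor

print(suitability(["threatened_species", "cultural_site"]))  # 0.375
```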
Resource Aware Intelligent Network Services (RAINS) Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehman, Tom; Yang, Xi
The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyber infrastructure resource modeling and computation. The goal of this work was to provide a foundation to enable intelligent, software defined services which spanned the network AND the resources which connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves from a topology and service availability perspective within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results which can include resources across the spectrum of compute, storage, and networks. The MRSP developed by the RAINS project includes the following key components: i) Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), ii) Resource Computation Engine (RCE), iii) Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: Resources, Services, and Relationships between the elements. This modeling framework defines a common method for the transformation of cyber infrastructure resources into data in the form of MRML models. In order to realize this infrastructure datification, the RAINS project developed a model-based computation system, the RAINS Computation Engine (RCE). The RCE has the ability to ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with the resources through system drivers which are specific to the type of external network or resource controller.
The RAINS project developed a modular and pluggable driver system which facilitates a variety of resource controllers to automatically generate, maintain, and distribute MRML-based resource descriptions. Once all of the resource topologies are absorbed by the RCE, a connected graph of the full distributed system topology is constructed, which forms the basis for computation and workflow processing. The RCE includes a Modular Computation Element (MCE) framework which allows for tailoring of the computation process to the specific set of resources under control, and the services desired. The input and output of an MCE are both model data based on the MRS/MRML ontology and schema. Some of the RAINS project accomplishments include: development of a general and extensible multi-resource modeling framework; design of a Resource Computation Engine (RCE) system with the following key capabilities: absorbing a variety of multi-resource model types and building integrated models; a novel architecture which uses model-based communications across the full stack; flexible provision of abstract or intent-based user-facing interfaces; workflow processing based on model descriptions; release of the RCE as open source software; deployment of the RCE in the University of Maryland/Mid-Atlantic Crossroads ScienceDMZ in prototype mode with a plan under way to transition to production; deployment at the Argonne National Laboratory DTN Facility in prototype mode; and selection of the RCE by the DOE SENSE (SDN for End-to-end Networked Science at the Exascale) project as the basis for their orchestration service.
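The three MRML element categories described above (Resources, Services, and Relationships between them) can be sketched as a toy model builder; the class layout and all field, resource, and relationship names here are invented for illustration, and the real MRS/MRML ontology is far richer:

```python
# Minimal sketch of a multi-resource model: named elements of two kinds,
# plus typed relationships linking them, forming a graph the computation
# engine could traverse.
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    kind: str  # "Resource" or "Service"

@dataclass
class Model:
    elements: dict = field(default_factory=dict)
    relationships: list = field(default_factory=list)

    def add(self, name, kind):
        self.elements[name] = Element(name, kind)

    def relate(self, src, rel, dst):
        self.relationships.append((src, rel, dst))

m = Model()
m.add("dtn-1", "Resource")         # a data transfer node
m.add("100G-circuit", "Service")   # a provisioned network service
m.relate("dtn-1", "hasService", "100G-circuit")
print(len(m.elements), len(m.relationships))  # 2 1
```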
The AMTEX Partnership™ mid-year report, fiscal year 1997
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-03-01
The AMTEX Partnership™ is a collaborative research and development program among the US Integrated Textile Complex (ITC), the US Department of Energy (DOE), the DOE national laboratories, other federal agencies and laboratories, and universities. The goal of AMTEX is to strengthen the competitiveness of this vital industry, thereby preserving and creating US jobs. Three AMTEX projects funded in FY 1997 are Demand Activated Manufacturing Architecture (DAMA), Computer-Aided Fabric Evaluation (CAFE), and Textile Resource Conservation (TReC). The five sites involved in AMTEX work are Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), the Oak Ridge Y-12 Plant, and the Oak Ridge National Laboratory (ORNL) (the latter is funded through Y-12).
TEACHING "MATH-LITE" CONSERVATION (BOOK REVIEW OF CONSERVATION BIOLOGY WITH RAMAS ECOLAB)
This book is designed to serve as a laboratory workbook for an undergraduate course in conservation biology, environmental science, or natural resource management. By integrating with RAMAS EcoLab software, the book provides instructors with hands-on computer exercises that can ...
SOCR: Statistics Online Computational Resource
Dinov, Ivo D.
2011-01-01
The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning. PMID:21451741
Final Report for ALCC Allocation: Predictive Simulation of Complex Flow in Wind Farms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, Matthew F.; Ananthan, Shreyas; Churchfield, Matt
This report documents work performed using ALCC computing resources granted under a proposal submitted in February 2016, with the resource allocation period spanning July 2016 through June 2017. The award allocation was 10.7 million processor-hours at the National Energy Research Scientific Computing Center. The simulations performed were in support of two projects: the Atmosphere to Electrons (A2e) project, supported by the DOE EERE office; and the Exascale Computing Project (ECP), supported by the DOE Office of Science. The project team for both efforts consists of staff scientists and postdocs from Sandia National Laboratories and the National Renewable Energy Laboratory. At the heart of these projects is the open-source computational-fluid-dynamics (CFD) code, Nalu. Nalu solves the low-Mach-number Navier-Stokes equations using an unstructured-grid discretization. Nalu leverages the open-source Trilinos solver library and the Sierra Toolkit (STK) for parallelization and I/O. This report documents baseline computational performance of the Nalu code on problems of direct relevance to the wind plant physics application - namely, Large Eddy Simulation (LES) of an atmospheric boundary layer (ABL) flow and wall-modeled LES of a flow past a static wind turbine rotor blade. Parallel performance of Nalu and its constituent solver routines residing in the Trilinos library has been assessed previously under various campaigns. However, both Nalu and Trilinos have been, and remain, in active development and resources have not been available previously to rigorously track code performance over time. With the initiation of the ECP, it is important to establish and document baseline code performance on the problems of interest. This will allow the project team to identify and target any deficiencies in performance, as well as highlight any performance bottlenecks as we exercise the code on a greater variety of platforms and at larger scales.
The current study is rather modest in scale, examining performance on problem sizes of O(100 million) elements and core counts up to 8k. This will be expanded as more computational resources become available to the projects.
Final report and recommendations of the ESnet Authentication Pilot Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, G.R.; Moore, J.P.; Athey, C.L.
1995-01-01
To conduct their work, U.S. Department of Energy (DOE) researchers require access to a wide range of computing systems and information resources outside of their respective laboratories. Electronically communicating with peers using the global Internet has become a necessity to effective collaboration with university, industrial, and other government partners. DOE's Energy Sciences Network (ESnet) needs to be engineered to facilitate this "collaboratory" while ensuring the protection of government computing resources from unauthorized use. Sensitive information and intellectual properties must be protected from unauthorized disclosure, modification, or destruction. In August 1993, DOE funded four ESnet sites (Argonne National Laboratory, Lawrence Livermore National Laboratory, the National Energy Research Supercomputer Center, and Pacific Northwest Laboratory) to begin implementing and evaluating authenticated ESnet services using the advanced Kerberos Version 5. The purpose of this project was to identify, understand, and resolve the technical, procedural, cultural, and policy issues surrounding peer-to-peer authentication in an inter-organization internet. The investigators have concluded that, with certain conditions, Kerberos Version 5 is a suitable technology to enable ESnet users to freely share resources and information without compromising the integrity of their systems and data. The pilot project has demonstrated that Kerberos Version 5 is capable of supporting trusted third-party authentication across an inter-organization internet and that Kerberos Version 5 would be practical to implement across the ESnet community within the U.S. The investigators made several modifications to the Kerberos Version 5 system that are necessary for operation in the current Internet environment and have documented other technical shortcomings that must be addressed before large-scale deployment is attempted.
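The trusted third-party pattern that Kerberos implements can be illustrated with a toy sketch. Real Kerberos Version 5 uses encrypted tickets, session keys, and timestamps, none of which are modeled here, and the principal names are made up; the only idea shown is that a central authority can vouch for a user to a service using a key only those two parties share:

```python
# Toy trusted third-party authentication: the KDC holds every
# principal's long-term key, so it can issue a "ticket" the service can
# verify without ever talking to the user's home site directly.
import hashlib
import hmac

KDC_KEYS = {"alice@ANL": b"alice-secret", "fileserver@LLNL": b"fs-secret"}

def issue_ticket(principal, service):
    """KDC vouches for `principal` by MACing a ticket with the
    service's long-term key (shared only by the KDC and the service)."""
    body = f"{principal}->{service}".encode()
    mac = hmac.new(KDC_KEYS[service], body, hashlib.sha256).hexdigest()
    return body, mac

def service_accepts(service, body, mac):
    expect = hmac.new(KDC_KEYS[service], body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, mac)

body, mac = issue_ticket("alice@ANL", "fileserver@LLNL")
print(service_accepts("fileserver@LLNL", body, mac))  # True
```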
Infrastructure Systems for Advanced Computing in E-science applications
NASA Astrophysics Data System (ADS)
Terzo, Olivier
2013-04-01
In the e-science field there is a growing need for computing infrastructure that is more dynamic and customizable, with an on-demand model of use that follows the exact request in terms of resources and storage capacity. Integrating grid and cloud infrastructure solutions allows us to offer services that adapt their availability by scaling resources up and down. The main challenge for e-science domains is to implement infrastructure solutions for scientific computing that dynamically adapt to the demand for computing resources, with a strong emphasis on optimizing resource use to reduce investment costs. Instrumentation, data volumes, algorithms, and analysis all increase the complexity of applications that require high processing power and storage for a limited time and often exceed the computational resources that equip most laboratories and research units in an organization. Very often it is necessary to adapt, or even rethink, tools and algorithms, and to consolidate existing applications through a phase of reverse engineering, in order to deploy them on cloud infrastructure. For example, in areas such as rainfall monitoring, meteorological analysis, hydrometeorology, climatology, bioinformatics, next-generation sequencing, computational electromagnetics, and radio occultation, the complexity of the analysis raises several issues, such as processing time, the scheduling of processing tasks, storage of results, and a multi-user environment. For these reasons, it is necessary to rethink how e-science applications are written so that they are ready to exploit the potential of cloud computing services through the IaaS, PaaS, and SaaS layers.
Another important focus is creating and using hybrid infrastructure, typically a federation between private and public clouds: when all resources owned by the organization are in use, a federated cloud infrastructure makes it easy to add resources from the public cloud to meet computational and storage needs, and to release them when processing is finished. Under the hybrid model, the scheduling approach is important for managing both cloud types. With this infrastructure model, resources are always available for additional requests for IT capacity and can be used on demand for a limited time without having to purchase additional servers.
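The private-first, burst-to-public scheduling idea described above can be sketched as follows; the slot counts and job names are illustrative, not taken from any particular deployment:

```python
# Hybrid-cloud placement sketch: fill the private cloud first, then
# burst overflow jobs to the public cloud, and queue the rest.
def schedule(jobs, private_slots, public_slots):
    placement = {}
    for job in jobs:
        if private_slots > 0:
            placement[job], private_slots = "private", private_slots - 1
        elif public_slots > 0:
            placement[job], public_slots = "public", public_slots - 1
        else:
            placement[job] = "queued"
    return placement

print(schedule(["j1", "j2", "j3"], private_slots=2, public_slots=4))
# {'j1': 'private', 'j2': 'private', 'j3': 'public'}
```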
Ultra-fast Object Recognition from Few Spikes
2005-07-06
Computer Science and Artificial Intelligence Laboratory. Ultra-fast Object Recognition from Few Spikes. Chou Hung, Gabriel Kreiman, Tomaso Poggio. ... neural code for different kinds of object-related information. The authors, Chou Hung and Gabriel Kreiman, contributed equally to this work. Supplementary Material is available at http://ramonycajal.mit.edu/kreiman/resources/ultrafast
An Antibiotic Resource Program for Students of the Health Professions.
ERIC Educational Resources Information Center
Tritz, Gerald J.
1986-01-01
Provides a description of a computer program developed to supplement instruction in testing of antibiotics on clinical isolates of microorganisms. The program is a simulation and database for interpretation of experimental data designed to enhance laboratory learning and prepare future physicians to use computerized diagnostic instrumentation and…
Computer Simulations Improve University Instructional Laboratories
2004-01-01
Laboratory classes are commonplace and essential in biology departments but can sometimes be cumbersome, unreliable, and a drain on time and resources. As university intakes increase, pressure on budgets and staff time can often lead to reduction in practical class provision. Frequently, the ability to use laboratory equipment, mix solutions, and manipulate test animals are essential learning outcomes, and “wet” laboratory classes are thus appropriate. In others, however, interpretation and manipulation of the data are the primary learning outcomes, and here, computer-based simulations can provide a cheaper, easier, and less time- and labor-intensive alternative. We report the evaluation of two computer-based simulations of practical exercises: the first in chromosome analysis, the second in bioinformatics. Simulations can provide significant time savings to students (by a factor of four in our first case study) without affecting learning, as measured by performance in assessment. Moreover, under certain circumstances, performance can be improved by the use of simulations (by 7% in our second case study). We concluded that the introduction of these simulations can significantly enhance student learning where consideration of the learning outcomes indicates that it might be appropriate. In addition, they can offer significant benefits to teaching staff. PMID:15592599
The AMTEX Partnership™. First quarter report, fiscal year 1996
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-12-01
The AMTEX Partnership is a collaborative research and development program among the US Integrated Textile Industry, DOE, the National Laboratories, other federal agencies and laboratories, and universities. The goal of AMTEX is to strengthen the competitiveness of this vital industry, thereby preserving and creating US jobs. Topics in this quarter's report include: computer-aided fabric evaluation, cotton biotechnology, demand activated manufacturing architecture, electronic embedded fingerprints, on-line process control in flexible fiber manufacturing, rapid cutting, sensors for agile manufacturing, and textile resource conservation.
Quality assurance for health and environmental chemistry: 1990
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gautier, M.A.; Gladney, E.S.; Koski, N.L.
1991-10-01
This report documents the continuing quality assurance efforts of the Health and Environmental Chemistry Group (HSE-9) at the Los Alamos National Laboratory. The philosophy, methodology, computing resources, and laboratory information management system used by the quality assurance program to encompass the diversity of analytical chemistry practiced in the group are described. Included in the report are all quality assurance reference materials used, along with their certified or consensus concentrations, and all analytical chemistry quality assurance measurements made by HSE-9 during 1990.
Emulating a million machines to investigate botnets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudish, Donald W.
2010-06-01
Researchers at Sandia National Laboratories in Livermore, California, are creating what is in effect a vast digital petri dish able to hold one million operating systems at once, in an effort to study the behavior of rogue programs known as botnets. Botnets are used extensively by malicious computer hackers to steal computing power from Internet-connected computers. The hackers harness the stolen resources into a scattered but powerful computer that can be used to send spam, execute phishing scams, or steal digital information. These remote-controlled 'distributed computers' are difficult to observe and track. Botnets may take over parts of tens of thousands or in some cases even millions of computers, making them among the world's most powerful computers for some applications.
Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline*
Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W.; Moritz, Robert L.
2015-01-01
Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by processing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. PMID:25418363
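The abstract mentions tools for estimating computing resources and costs given the scale of a job. A back-of-the-envelope sketch of that kind of estimator (the per-file runtime, vCPU counts, and hourly rate below are hypothetical parameters, not the actual TPP tool or real EC2 pricing):

```python
import math

def estimate_cost(n_files, minutes_per_file, vcpus_per_instance,
                  hourly_rate, n_instances):
    """Rough cloud-cost estimate for a batch search job, assuming one
    file is searched per vCPU and instances are billed per whole hour."""
    slots = vcpus_per_instance * n_instances       # concurrent searches
    waves = math.ceil(n_files / slots)             # sequential batches
    hours = math.ceil(waves * minutes_per_file / 60)
    return hours * n_instances * hourly_rate
```

For instance, 1100 files at 6 minutes each on ten 16-vCPU instances fits in a single billed hour under these assumptions.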
NASA Technical Reports Server (NTRS)
1972-01-01
A user's manual is provided for the environmental computer model proposed for the Richmond-Cape Henry Environmental Laboratory (RICHEL) application project for coastal zone land use investigations and marine resources management. The model was developed around the hydrologic cycle and includes two data bases consisting of climate and land use variables. The main program is described, along with control parameters to be set and pertinent subroutines.
Fermilab computing at the Intensity Frontier
Group, Craig; Fuess, S.; Gutsche, O.; ...
2015-12-23
The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper we focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. The experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.
Computational Science in Armenia (Invited Talk)
NASA Astrophysics Data System (ADS)
Marandjian, H.; Shoukourian, Yu.
This survey is devoted to the development of informatics and computer science in Armenia. The results in theoretical computer science (algebraic models, solutions to systems of general form recursive equations, the methods of coding theory, pattern recognition and image processing) constitute the theoretical basis for developing problem-solving-oriented environments. Examples include a synthesizer of optimized distributed recursive programs, software tools for cluster-oriented implementations of two-dimensional cellular automata, and a grid-aware web interface with advanced service trading for linear algebra calculations. In the direction of solving scientific problems that require high-performance computing resources, completed projects include examples in physics (parallel computing of complex quantum systems), astrophysics (the Armenian virtual laboratory), biology (a molecular dynamics study of the human red blood cell membrane), and meteorology (implementing and evaluating the Weather Research and Forecasting Model for the territory of Armenia). The overview also notes that the Institute for Informatics and Automation Problems of the National Academy of Sciences of Armenia has established a scientific and educational infrastructure uniting the computing clusters of scientific and educational institutions of the country and providing the scientific community with access to local and international computational resources, a strong support for computational science in Armenia.
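Among the tools the survey cites are cluster-oriented implementations of two-dimensional cellular automata. A minimal single-node sketch of one synchronous CA update on a toroidal grid (the function and rule signature are illustrative, not the Armenian software):

```python
def ca_step(grid, rule):
    """One synchronous update of a 2-D cellular automaton with periodic
    boundaries; `rule(cell, live_neighbours)` returns the new state."""
    n, m = len(grid), len(grid[0])

    def neighbours(i, j):
        # Count the 8 Moore neighbours, wrapping at the edges.
        return sum(grid[(i + di) % n][(j + dj) % m]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0))

    return [[rule(grid[i][j], neighbours(i, j)) for j in range(m)]
            for i in range(n)]
```

With Conway's Game of Life rule, a three-cell "blinker" oscillates with period two, which makes a handy correctness check.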
The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications
NASA Technical Reports Server (NTRS)
Johnston, William E.
2002-01-01
With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.
Production Experiences with the Cray-Enabled TORQUE Resource Manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezell, Matthew A; Maxwell, Don E; Beer, David
High performance computing resources utilize batch systems to manage the user workload. Cray systems are uniquely different from typical clusters due to Cray's Application Level Placement Scheduler (ALPS). ALPS manages binary transfer, job launch and monitoring, and error handling. Batch systems require special support to integrate with ALPS using an XML protocol called BASIL. Previous versions of Adaptive Computing's TORQUE and Moab batch suite integrated with ALPS from within Moab, using Perl scripts to interface with BASIL. This would occasionally lead to problems when the components became unsynchronized. Version 4.1 of the TORQUE Resource Manager introduced new features that allow it to directly integrate with ALPS using BASIL. This paper describes production experiences at Oak Ridge National Laboratory using the new TORQUE software versions, as well as ongoing and future work to improve TORQUE.
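BASIL, as the abstract notes, is an XML protocol through which a batch system asks ALPS to reserve compute nodes. A sketch of building a BASIL-style reservation request and parsing a response (the element and attribute names here are illustrative stand-ins, not the exact BASIL schema):

```python
import xml.etree.ElementTree as ET

def make_reserve_request(user, width, depth):
    """Build a BASIL-style XML reservation request: `width` processing
    elements, `depth` threads per element (names are illustrative)."""
    req = ET.Element("BasilRequest", method="RESERVE")
    ET.SubElement(req, "ReserveParam",
                  user_name=user, width=str(width), depth=str(depth))
    return ET.tostring(req, encoding="unicode")

def parse_reserve_response(xml_text):
    """Extract the reservation id from a BASIL-style response."""
    root = ET.fromstring(xml_text)
    return int(root.find(".//Reserved").get("reservation_id"))
```

Doing this with a real XML library, rather than string-pasting in Perl scripts, is one way to avoid the desynchronization problems the paper describes.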
Systems Engineering 2010 Workshop | Wind | NREL
An Assessment of Remote Laboratory Experiments in Radio Communication
ERIC Educational Resources Information Center
Gampe, Andreas; Melkonyan, Arsen; Pontual, Murillo; Akopian, David
2014-01-01
Today's electrical and computer engineering graduates need marketable skills to work with electronic devices. Hands-on experiments prepare students to deal with real-world problems and help them to comprehend theoretical concepts and relate these to practical tasks. However, shortage of equipment, high costs, and a lack of human resources for…
The use of ARL trajectories for the evaluation of precipitation chemistry data
John M. Miller; James N. Galloway; Gene E. Likens
1976-01-01
One of the major problems in interpreting precipitation chemistry data is determining the possible source areas of the materials found in the precipitation. To investigate this problem, the trajectory program developed at Air Resources Laboratories (NOAA) was used to compute five-day backward air trajectories from Ithaca, New York.
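The abstract above describes computing five-day backward air trajectories to find source regions. A toy sketch of the underlying idea, stepping a parcel backward in time through a wind field (the `wind` callable and degree-per-hour units are simplifying assumptions, not the NOAA ARL trajectory model):

```python
def backward_trajectory(start, wind, hours, dt=1.0):
    """Trace an air parcel backward in time.
    `wind(lon, lat, t)` returns (u, v) in degrees per hour; positions
    are updated with simple backward Euler steps of `dt` hours."""
    lon, lat = start
    path = [(lon, lat)]
    t = 0.0
    for _ in range(int(hours / dt)):
        u, v = wind(lon, lat, t)
        lon -= u * dt   # subtract: we are stepping *backward* in time
        lat -= v * dt
        t -= dt
        path.append((lon, lat))
    return path
```

Under a constant eastward wind, the reconstructed path correctly points back toward the west, i.e. toward the upwind source region.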
UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.
Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L
2012-03-01
The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available on the Internet today represents a big challenge for biologists, as concerns their management and visualization, and for bioinformaticians, as concerns the possibility of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory must tackle, and possibly handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype following the above objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in such a direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows, and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.
Lockheed Martin Idaho Technologies Company information management technology architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, M.J.; Lau, P.K.S.
1996-05-01
The Information Management Technology Architecture (TA) is being driven by the business objectives of reducing costs and improving effectiveness. The strategy is to reduce the cost of computing through standardization. The Lockheed Martin Idaho Technologies Company (LMITCO) TA is a set of standards and products for use at the Idaho National Engineering Laboratory (INEL). The TA will provide direction for information management resource acquisitions, development of information systems, formulation of plans, and resolution of issues involving LMITCO computing resources. Exceptions to the preferred products may be granted by the Information Management Executive Council (IMEC). Certain implementation and deployment strategies are inherent in the design and structure of the LMITCO TA. These include: migration from centralized toward distributed computing; deployment of the networks, servers, and other information technology infrastructure components necessary for a more integrated information technology support environment; increased emphasis on standards to make it easier to link systems and to share information; and improved use of the company's investment in desktop computing resources. The intent is for the LMITCO TA to be a living document, constantly reviewed to take advantage of industry directions to reduce costs while balancing technological diversity with business flexibility.
Radiochemistry, PET Imaging, and the Internet of Chemical Things.
Thompson, Stephen; Kilbourn, Michael R; Scott, Peter J H
2016-08-24
The Internet of Chemical Things (IoCT), a growing network of computers, mobile devices, online resources, software suites, laboratory equipment, synthesis apparatus, analytical devices, and a host of other machines, all interconnected to users, manufacturers, and others through the infrastructure of the Internet, is changing how we do chemistry. While in its infancy across many chemistry laboratories and departments, it became apparent when considering our own work synthesizing radiopharmaceuticals for positron emission tomography (PET) that a more mature incarnation of the IoCT already exists. How does the IoCT impact our lives today, and what does it hold for the smart (radio)chemical laboratories of the future?
The November 1, 2017 issue of Cancer Research is dedicated to a collection of computational resource papers in genomics, proteomics, animal models, imaging, and clinical subjects for non-bioinformaticists looking to incorporate computing tools into their work. Scientists at Pacific Northwest National Laboratory have developed P-MartCancer, an open, web-based interactive software tool that enables statistical analyses of peptide or protein data generated from mass-spectrometry (MS)-based global proteomics experiments.
Catalog of Research Abstracts, 1993: Partnership opportunities at Lawrence Berkeley Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1993-09-01
The 1993 edition of Lawrence Berkeley Laboratory's Catalog of Research Abstracts is a comprehensive listing of ongoing research projects in LBL's ten research divisions. Lawrence Berkeley Laboratory (LBL) is a major multi-program national laboratory managed by the University of California for the US Department of Energy (DOE). LBL has more than 3000 employees, including over 1000 scientists and engineers. With an annual budget of approximately $250 million, LBL conducts a wide range of research activities, many of which address the long-term needs of American industry and have the potential for a positive impact on US competitiveness. LBL actively seeks to share its expertise with the private sector to increase US competitiveness in world markets. LBL has transferable expertise in conservation and renewable energy, environmental remediation, materials sciences, computing sciences, and biotechnology, which includes fundamental genetic research and nuclear medicine. This catalog gives an excellent overview of LBL's expertise, and is a good resource for those seeking partnerships with national laboratories. Such partnerships allow private enterprise access to the exceptional scientific and engineering capabilities of the federal laboratory systems. Such arrangements also leverage the research and development resources of the private partner. Most importantly, they are a means of accessing the cutting-edge technologies and innovations being discovered every day in our federal laboratories.
A Review of Enhanced Sampling Approaches for Accelerated Molecular Dynamics
NASA Astrophysics Data System (ADS)
Tiwary, Pratyush; van de Walle, Axel
Molecular dynamics (MD) simulations have become a tool of immense use and popularity for simulating a variety of systems. With the advent of massively parallel computer resources, one now routinely sees applications of MD to systems as large as hundreds of thousands to even several million atoms, which is almost the size of most nanomaterials. However, it is not yet possible to reach laboratory timescales of milliseconds and beyond with MD simulations. Due to the essentially sequential nature of time, parallel computers have been of limited use in solving this so-called timescale problem. Instead, over the years a large range of statistical mechanics based enhanced sampling approaches have been proposed for accelerating molecular dynamics, and accessing timescales that are well beyond the reach of the fastest computers. In this review we provide an overview of these approaches, including the underlying theory, typical applications, and publicly available software resources to implement them.
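One family of enhanced-sampling methods covered by such reviews is metadynamics, which accelerates MD by depositing repulsive Gaussians along a collective variable so the system is pushed out of free-energy minima. A toy 1-D sketch of the bias construction (parameter values and the closure-based API are illustrative, not any particular MD package):

```python
import math

def metadynamics_bias(cv_history, sigma=0.1, height=1.0):
    """Return a bias potential built from Gaussians of width `sigma`
    and height `height` deposited at previously visited values of a
    1-D collective variable -- the core idea of metadynamics."""
    def bias(s):
        return sum(height * math.exp(-(s - s0) ** 2 / (2 * sigma ** 2))
                   for s0 in cv_history)
    return bias
```

As the simulation revisits a region, the accumulated bias there grows, discouraging the system from lingering and thereby accelerating barrier crossings.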
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1980-01-01
A generalized three dimensional perspective software capability was developed within the framework of a low cost computer oriented geographically based information system using the Earth Resources Laboratory Applications Software (ELAS) operating subsystem. This perspective software capability, developed primarily to support data display requirements at the NASA/NSTL Earth Resources Laboratory, provides a means of displaying three dimensional feature space object data in two dimensional picture plane coordinates and makes it possible to overlay different types of information on perspective drawings to better understand the relationship of physical features. An example topographic data base is constructed and is used as the basic input to the plotting module. Examples are shown which illustrate oblique viewing angles that convey spatial concepts and relationships represented by the topographic data planes.
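The core of the capability described above is mapping three-dimensional feature-space points to two-dimensional picture-plane coordinates. A textbook perspective transform sketch (this is the standard pinhole projection, an assumption for illustration, not the actual ELAS plotting module):

```python
def project_point(x, y, z, d=1.0):
    """Project a 3-D point onto a picture plane at distance `d` from
    the eye, with the viewer looking down the +z axis: each coordinate
    is scaled by d/z, so distant objects appear smaller."""
    if z <= 0:
        raise ValueError("point must be in front of the viewer")
    return (d * x / z, d * y / z)
```

Applying this to every vertex of a terrain grid, then drawing the connecting lines in picture-plane coordinates, yields the oblique perspective views of topographic data the abstract mentions.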
Extensible Computational Chemistry Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-08-09
ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world-class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-its-kind end-to-end problem solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct-manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis, including creating publication-quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.
DOE pushes for useful quantum computing
NASA Astrophysics Data System (ADS)
Cho, Adrian
2018-01-01
The U.S. Department of Energy (DOE) is joining the quest to develop quantum computers, devices that would exploit quantum mechanics to crack problems that overwhelm conventional computers. The initiative comes as Google and other companies race to build a quantum computer that can demonstrate "quantum supremacy" by beating classical computers on a test problem. But reaching that milestone will not mean practical uses are at hand, and the new $40 million DOE effort is intended to spur the development of useful quantum computing algorithms for its work in chemistry, materials science, nuclear physics, and particle physics. With the resources at its 17 national laboratories, DOE could play a key role in developing the machines, researchers say, although finding problems with which quantum computers can help isn't so easy.
Ground data systems resource allocation process
NASA Technical Reports Server (NTRS)
Berner, Carol A.; Durham, Ralph; Reilly, Norman B.
1989-01-01
The Ground Data Systems Resource Allocation Process at the Jet Propulsion Laboratory provides medium- and long-range planning for the use of Deep Space Network and Mission Control and Computing Center resources in support of NASA's deep space missions and Earth-based science. Resources consist of radio antenna complexes and associated data processing and control computer networks. A semi-automated system was developed that allows operations personnel to interactively generate, edit, and revise allocation plans spanning periods of up to ten years (as opposed to only two or three weeks under the manual system) based on the relative merit of mission events. It also enhances scientific data return. A software system known as the Resource Allocation and Planning Helper (RALPH) merges the conventional methods of operations research, rule-based knowledge engineering, and advanced data base structures. RALPH employs a generic, highly modular architecture capable of solving a wide variety of scheduling and resource sequencing problems. The rule-based RALPH system has saved significant labor in resource allocation. Its successful use affirms the importance of establishing and applying event priorities based on scientific merit, and the benefit of continuity in planning provided by knowledge-based engineering. The RALPH system exhibits a strong potential for minimizing development cycles of resource and payload planning systems throughout NASA and the private sector.
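The abstract above emphasizes allocating antenna time by the relative scientific merit of mission events. A greedy sketch of that idea for a single resource (the tuple layout and greedy policy are a toy model for illustration, not the RALPH algorithm):

```python
def allocate(requests):
    """Greedy merit-based allocation of one antenna: consider requests
    in descending priority and grant each whose (start, end) window
    does not overlap an already-granted window.
    Each request is a (name, start, end, priority) tuple."""
    granted = []
    for name, start, end, prio in sorted(requests, key=lambda r: -r[3]):
        if all(end <= s or start >= e for _, s, e in granted):
            granted.append((name, start, end))
    return sorted(granted, key=lambda g: g[1])
```

With three competing requests, the highest-priority one wins its slot and a lower-priority overlapping request is rejected, mirroring the "event priorities based on scientific merit" principle the paper credits for its success.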
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurie, Carol
2017-02-01
This book takes readers inside the places where daily discoveries shape the next generation of wind power systems. Energy Department laboratory facilities span the United States and offer wind research capabilities to meet industry needs. The facilities described in this book make it possible for industry players to increase reliability, improve efficiency, and reduce the cost of wind energy -- one discovery at a time. Whether you require blade testing or resource characterization, grid integration or high-performance computing, Department of Energy laboratory facilities offer a variety of capabilities to meet your wind research needs.
Overview of DOE Oil and Gas Field Laboratory Projects
NASA Astrophysics Data System (ADS)
Bromhal, G.; Ciferno, J.; Covatch, G.; Folio, E.; Melchert, E.; Ogunsola, O.; Renk, J., III; Vagnetti, R.
2017-12-01
America's abundant unconventional oil and natural gas (UOG) resources are critical components of our nation's energy portfolio. These resources need to be prudently developed to derive maximum benefits. In spite of the long history of hydraulic fracturing, the optimal number of fracturing stages during multi-stage fracture stimulation in horizontal wells is not known. In addition, there is a pressing need for a comprehensive understanding of ways to improve the recovery of shale gas with little or no impact on the environment. Research that seeks to expand our view of effective and environmentally sustainable ways to develop our nation's oil and natural gas resources can be done in the laboratory or at a computer, but some experiments must be performed in a field setting. The Department of Energy (DOE) Field Lab Observatory projects are designed to address those research questions that must be studied in the field. DOE is developing a suite of "field laboratory" test sites to carry out collaborative research that will help find ways of improving the recovery of energy resources as much as possible, with as little environmental impact as possible, from "unconventional" formations, such as shale and other low permeability rock formations. Currently there are three field laboratories in various stages of development and operation. Work is ongoing at two of the sites: the Hydraulic Fracturing Test Site (HFTS) in the Permian Basin and the Marcellus Shale Energy and Environmental Lab (MSEEL) project in the Marcellus Shale Play. Agreement on the third site, the Utica Shale Energy and Environmental Lab (USEEL) project in the Utica Shale Play, was just recently finalized. Other field site opportunities may be forthcoming. This presentation will give an overview of the three field laboratory projects.
Learning Commons in Academic Libraries: Discussing Themes in the Literature from 2001 to the Present
ERIC Educational Resources Information Center
Blummer, Barbara; Kenton, Jeffrey M.
2017-01-01
Although the term lacks a standard definition, learning commons represent academic library spaces that provide computer and library resources as well as a range of academic services that support learners and learning. Learning commons have been equated to a laboratory for creating knowledge and staffed with librarians that serve as facilitators of…
Virtual Labs (Science Gateways) as platforms for Free and Open Source Science
NASA Astrophysics Data System (ADS)
Lescinsky, David; Car, Nicholas; Fraser, Ryan; Friedrich, Carsten; Kemp, Carina; Squire, Geoffrey
2016-04-01
The Free and Open Source Software (FOSS) movement promotes community engagement in software development, as well as provides access to a range of sophisticated technologies that would be prohibitively expensive if obtained commercially. However, as geoinformatics and eResearch tools and services become more dispersed, it becomes more complicated to identify and interface between the many required components. Virtual Laboratories (VLs, also known as Science Gateways) simplify the management and coordination of these components by providing a platform linking many, if not all, of the steps in particular scientific processes. These enable scientists to focus on their science, rather than the underlying supporting technologies. We describe a modular, open-source VL infrastructure that can be reconfigured to create VLs for a wide range of disciplines. Development of this infrastructure has been led by CSIRO in collaboration with Geoscience Australia and the National Computational Infrastructure (NCI) with support from the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service (ANDS). Initially, the infrastructure was developed to support the Virtual Geophysical Laboratory (VGL), and has subsequently been repurposed to create the Virtual Hazards Impact and Risk Laboratory (VHIRL) and the reconfigured Australian National Virtual Geophysics Laboratory (ANVGL). During each step of development, new capabilities and services have been added and/or enhanced. We plan on continuing to follow this model using a shared, community code base. The VL platform facilitates transparent and reproducible science by providing access to both the data and methodologies used during scientific investigations. This is further enhanced by the ability to set up and run investigations using computational resources accessed through the VL.
Data is accessed using registries pointing to catalogues within public data repositories (notably including the NCI National Environmental Research Data Interoperability Platform), or by uploading data directly from user supplied addresses or files. Similarly, scientific software is accessed through registries pointing to software repositories (e.g., GitHub). Runs are configured by using or modifying default templates designed by subject matter experts. After the appropriate computational resources are identified by the user, Virtual Machines (VMs) are spun up and jobs are submitted to service providers (currently the NeCTAR public cloud or Amazon Web Services). Following completion of the jobs the results can be reviewed and downloaded if desired. By providing a unified platform for science, the VL infrastructure enables sophisticated provenance capture and management. The source of input data (including both collection and queries), user information, software information (version and configuration details) and output information are all captured and managed as a VL resource which can be linked to output data sets. This provenance resource provides a mechanism for publication and citation for Free and Open Source Science.
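The provenance capture described above records data sources, user and software details, and outputs for each VL run. A minimal sketch of such a provenance record (the field names are illustrative assumptions, not the actual VL schema):

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ProvenanceRecord:
    """Minimal sketch of the provenance a Virtual Laboratory run
    might capture so that results can be published and cited."""
    user: str
    data_sources: list          # e.g. catalogue queries or upload URLs
    software: str               # software repository identifier
    software_version: str       # version and configuration details
    parameters: dict = field(default_factory=dict)
    outputs: list = field(default_factory=list)

    def to_dict(self):
        """Serialize for attachment to an output data set."""
        return asdict(self)
```

Linking such a record to each output data set is what makes a VL run reproducible and citable, as the abstract argues.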
Computation Directorate Annual Report 2003
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, D L; McGraw, J R; Ashby, S F
Big computers are icons: symbols of the culture, and of the larger computing infrastructure that exists at Lawrence Livermore. Through the collective effort of Laboratory personnel, they enable scientific discovery and engineering development on an unprecedented scale. For more than three decades, the Computation Directorate has supplied the big computers that enable the science necessary for Laboratory missions and programs. Livermore supercomputing is uniquely mission driven. The high-fidelity weapon simulation capabilities essential to the Stockpile Stewardship Program compel major advances in weapons codes and science, compute power, and computational infrastructure. Computation's activities align with this vital mission of the Department of Energy. Increasingly, non-weapons Laboratory programs also rely on computer simulation. World-class achievements have been accomplished by LLNL specialists working in multi-disciplinary research and development teams. In these teams, Computation personnel employ a wide array of skills, from desktop support expertise, to complex applications development, to advanced research. Computation's skilled professionals make the Directorate the success that it has become. These individuals know the importance of the work they do and the many ways it contributes to Laboratory missions. They make appropriate and timely decisions that move the entire organization forward. They make Computation a leader in helping LLNL achieve its programmatic milestones. I dedicate this inaugural Annual Report to the people of Computation in recognition of their continuing contributions. I am proud that we perform our work securely and safely. Despite increased cyber attacks on our computing infrastructure from the Internet, advanced cyber security practices ensure that our computing environment remains secure. Through Integrated Safety Management (ISM) and diligent oversight, we address safety issues promptly and aggressively.
The safety of our employees, whether at work or at home, is a paramount concern. Even as the Directorate meets today's supercomputing requirements, we are preparing for the future. We are investigating open-source cluster technology, the basis of our highly successful Multiprogrammatic Capability Resource (MCR). Several breakthrough discoveries have resulted from MCR calculations coupled with theory and experiment, prompting Laboratory scientists to demand ever-greater capacity and capability. This demand is being met by a new 23-TF system, Thunder, with architecture modeled on MCR. In preparation for the "after-next" computer, we are researching technology even farther out on the horizon--cell-based computers. Assuming that the funding and the technology hold, we will acquire the cell-based machine BlueGene/L within the next 12 months.
Overview of the LINCS architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fletcher, J.G.; Watson, R.W.
1982-01-13
Computing at the Lawrence Livermore National Laboratory (LLNL) has evolved over the past 15 years with a computer network based resource sharing environment. The increasing use of low cost and high performance micro, mini and midi computers and commercially available local networking systems will accelerate this trend. Further, even the large scale computer systems, on which much of the LLNL scientific computing depends, are evolving into multiprocessor systems. It is our belief that the most cost effective use of this environment will depend on the development of application systems structured into cooperating concurrent program modules (processes) distributed appropriately over different nodes of the environment. A node is defined as one or more processors with a local (shared) high speed memory. Given the latter view, the environment can be characterized as consisting of: multiple nodes communicating over noisy channels with arbitrary delays and throughput, heterogeneous base resources and information encodings, no single administration controlling all resources, distributed system state, and no uniform time base. The system design problem is: how to turn the heterogeneous base hardware/firmware/software resources of this environment into a coherent set of resources that facilitate development of cost effective, reliable, and human engineered applications. We believe the answer lies in developing a layered, communication oriented distributed system architecture; layered and modular to support ease of understanding, reconfiguration, extensibility, and hiding of implementation or nonessential local details; communication oriented because that is a central feature of the environment. The Livermore Interactive Network Communication System (LINCS) is a hierarchical architecture designed to meet the above needs. While having characteristics in common with other architectures, it differs in several respects.
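The layered, communication-oriented design described above can be illustrated with a toy encapsulation function: each layer adds its own header and hides its details from the layers below. The layer names here are generic placeholders, not the actual LINCS layers.

```python
# Toy illustration of layered protocol encapsulation: each layer wraps the
# payload with its own header, innermost (highest) layer first, so the
# lowest layer's header ends up outermost. Layer names are hypothetical.
def encapsulate(payload: bytes, layers) -> bytes:
    """Wrap payload with one header per layer, highest layer first."""
    for name in layers:
        payload = name.encode() + b"|" + payload
    return payload

message = encapsulate(b"data", ["transport", "network", "link"])
print(message)  # b'link|network|transport|data'
```

The point of the layering is the same as in the abstract: each layer can be understood, replaced, or extended without touching the others.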
Russ, Alissa L; Weiner, Michael; Russell, Scott A; Baker, Darrell A; Fahner, W Jeffrey; Saleem, Jason J
2012-12-01
Although the potential benefits of more usable health information technologies (HIT) are substantial (reduced HIT support costs, increased work efficiency, and improved patient safety), human factors methods to improve usability are rarely employed. The US Department of Veterans Affairs (VA) has emerged as an early leader in establishing usability laboratories to inform the design of HIT, including its electronic health record. Experience with a usability laboratory at a VA Medical Center provides insights on how to design, implement, and leverage usability laboratories in the health care setting. The VA Health Services Research and Development Service Human-Computer Interaction & Simulation Laboratory emerged as one of the first VA usability laboratories and was intended to provide research-based findings about HIT designs. This laboratory supports rapid prototyping, formal usability testing, and analysis tools to assess existing technologies, alternative designs, and potential future technologies. RESULTS OF IMPLEMENTATION: Although the laboratory has maintained a research focus, it has become increasingly integrated with VA operations, both within the medical center and on a national VA level. With this resource, data-driven recommendations have been provided for the design of HIT applications before and after implementation. The demand for usability testing of HIT is increasing, and information on how to develop usability laboratories for the health care setting is often needed. This article may assist other health care organizations that want to invest in usability resources to improve HIT. The establishment and utilization of usability laboratories in the health care setting may improve HIT designs and promote safe, high-quality care for patients.
Radiochemistry, PET Imaging, and the Internet of Chemical Things
Thompson, Stephen; Kilbourn, Michael R.; Scott, Peter J. H.
2016-08-16
The Internet of Chemical Things (IoCT), a growing network of computers, mobile devices, online resources, software suites, laboratory equipment, synthesis apparatus, analytical devices, and a host of other machines, all interconnected to users, manufacturers, and others through the infrastructure of the Internet, is changing how we do chemistry. While in its infancy across many chemistry laboratories and departments, it became apparent when considering our own work synthesizing radiopharmaceuticals for positron emission tomography (PET) that a more mature incarnation of the IoCT already exists. Finally, how does the IoCT impact our lives today, and what does it hold for the smart (radio)chemical laboratories of the future?
The Laboratory for Terrestrial Physics
NASA Technical Reports Server (NTRS)
2003-01-01
The Laboratory for Terrestrial Physics is dedicated to the advancement of knowledge in Earth and planetary science, by conducting innovative research using space technology. The Laboratory's mission and activities support the work and new initiatives at NASA's Goddard Space Flight Center (GSFC). The Laboratory's success contributes to the Earth Science Directorate as a national resource for studies of Earth from Space. The Laboratory is part of the Earth Science Directorate based at the GSFC in Greenbelt, MD. The Directorate itself comprises the Global Change Data Center (GCDC), the Space Data and Computing Division (SDCD), and four science Laboratories, including the Laboratory for Terrestrial Physics, Laboratory for Atmospheres, and Laboratory for Hydrospheric Processes, all in Greenbelt, MD. The fourth research organization, the Goddard Institute for Space Studies (GISS), is in New York, NY. Relevant to NASA's Strategic Plan, the Laboratory ensures that all work undertaken and completed is within the vision of GSFC. The philosophy of the Laboratory is to balance the completion of near term goals, while building on the Laboratory's achievements as a foundation for the scientific challenges in the future.
UBioLab: a web-laboratory for ubiquitous in-silico experiments.
Bartocci, Ezio; Cacciagrano, Diletta; Di Berardini, Maria Rita; Merelli, Emanuela; Vito, Leonardo
2012-07-09
The huge and dynamic collection of bioinformatic resources (e.g., data and tools) available on the Internet today poses a major challenge for biologists, who must manage and visualize them, and for bioinformaticians, who need to rapidly create and execute in-silico experiments involving resources and activities spread across the WWW hyperspace. Any framework aiming to integrate such resources as in a physical laboratory must tackle, and ideally handle in a transparent and uniform way, physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (e.g., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects). The UBioLab framework has been designed and developed as a prototype with this objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web, and workflow techniques, reflect this effort. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows, and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.
Warp-X: A new exascale computing platform for beam–plasma simulations
Vay, J. -L.; Almgren, A.; Bell, J.; ...
2018-01-31
Turning the current experimental plasma accelerator state-of-the-art from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the codes such as the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code, in the new WarpX software. Lastly, the code structure, status, early examples of applications and plans are discussed.
How Data Becomes Physics: Inside the RACF
Ernst, Michael; Rind, Ofer; Rajagopalan, Srini; Lauret, Jerome; Pinkenburg, Chris
2018-06-22
The RHIC & ATLAS Computing Facility (RACF) at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory sits at the center of a global computing network. It connects more than 2,500 researchers around the world with the data generated by millions of particle collisions taking place each second at Brookhaven Lab's Relativistic Heavy Ion Collider (RHIC, a DOE Office of Science User Facility for nuclear physics research), and the ATLAS experiment at the Large Hadron Collider in Europe. Watch this video to learn how the people and computing resources of the RACF serve these scientists to turn petabytes of raw data into physics discoveries.
Feltham, R K
1995-01-01
Open tendering for medical informatics systems in the UK has traditionally been lengthy and, therefore, expensive on resources for vendor and purchaser alike. Events in the United Kingdom (UK) and European Community (EC) have led to new Government guidance being published on procuring information systems for the public sector: Procurement of Information Systems Effectively (POISE). This innovative procurement process, launched in 1993, has the support of the Computing Services Association (CSA) and the Federation of the Electronics Industry (FEI). This paper gives an overview of these new UK guidelines on healthcare information system purchasing in the context of a recent procurement project with an NHS Trust Hospital. The aim of the project was to replace three aging, separate, and different laboratory computer systems with a new, integrated turnkey system offering all department modules, an Open modern computer environment, and on-line electronic links to key departmental systems, both within and external to the Trust by the end of 1994. The new system had to complement the Trust's strategy for providing a modern clinical laboratory service to the local population and meet a tight budget.
Scientific Computing Strategic Plan for the Idaho National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiting, Eric Todd
Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory’s (INL’s) challenge and charge, and is central to INL’s ongoing success. Computing is an essential part of INL’s future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.
Computer-generated reminders and quality of pediatric HIV care in a resource-limited setting.
Were, Martin C; Nyandiko, Winstone M; Huang, Kristin T L; Slaven, James E; Shen, Changyu; Tierney, William M; Vreeman, Rachel C
2013-03-01
To evaluate the impact of clinician-targeted computer-generated reminders on compliance with HIV care guidelines in a resource-limited setting. We conducted this randomized, controlled trial in an HIV referral clinic in Kenya caring for HIV-infected and HIV-exposed children (<14 years of age). For children randomly assigned to the intervention group, printed patient summaries containing computer-generated patient-specific reminders for overdue care recommendations were provided to the clinician at the time of the child's clinic visit. For children in the control group, clinicians received the summaries, but no computer-generated reminders. We compared differences between the intervention and control groups in completion of overdue tasks, including HIV testing, laboratory monitoring, initiating antiretroviral therapy, and making referrals. During the 5-month study period, 1611 patients (49% female, 70% HIV-infected) were eligible to receive at least 1 computer-generated reminder (i.e., had an overdue clinical task). We observed a fourfold increase in the completion of overdue clinical tasks when reminders were made available to providers over the course of the study (68% intervention vs 18% control, P < .001). Orders also occurred earlier for the intervention group (77 days, SD 2.4 days) compared with the control group (104 days, SD 1.2 days) (P < .001). Response rates to reminders varied significantly by type of reminder and between clinicians. Clinician-targeted, computer-generated clinical reminders are associated with a significant increase in completion of overdue clinical tasks for HIV-infected and exposed children in a resource-limited setting.
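The core of a reminder system like the one studied above is simple: compare each guideline task's last completion date against its recommended interval at visit time. The task names and intervals below are illustrative assumptions, not the study's actual care-guideline rules.

```python
from datetime import date, timedelta

# Hypothetical guideline intervals; the real rules would come from the
# clinic's HIV care guidelines, not these placeholder values.
GUIDELINE_INTERVALS = {
    "HIV test": timedelta(days=90),
    "CD4 count": timedelta(days=180),
}

def overdue_reminders(last_done: dict, visit_date: date) -> list:
    """Return reminder strings for tasks never done or past their interval."""
    reminders = []
    for task, interval in GUIDELINE_INTERVALS.items():
        done = last_done.get(task)
        if done is None or visit_date - done > interval:
            reminders.append(f"Overdue: {task}")
    return sorted(reminders)

# Example: HIV test done 5 months ago, CD4 count never recorded.
print(overdue_reminders({"HIV test": date(2012, 1, 1)},
                        visit_date=date(2012, 6, 1)))
# ['Overdue: CD4 count', 'Overdue: HIV test']
```

Printing these strings on the patient summary at visit time is the essence of the intervention arm; the control arm received the summary without them.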
Cloudbursting - Solving the 3-body problem
NASA Astrophysics Data System (ADS)
Chang, G.; Heistand, S.; Vakhnin, A.; Huang, T.; Zimdars, P.; Hua, H.; Hood, R.; Koenig, J.; Mehrotra, P.; Little, M. M.; Law, E.
2014-12-01
Many science projects in the future will be accomplished through collaboration among 2 or more NASA centers along with, potentially, external scientists. Science teams will be composed of more geographically dispersed individuals and groups. However, the current computing environment does not make this easy and seamless. By being able to share computing resources among members of a multi-center team working on a science/engineering project, limited pre-competition funds could be more efficiently applied and technical work could be conducted more effectively with less time spent moving data or waiting for computing resources to free up. Based on the work from a NASA CIO IT Labs task, this presentation will highlight our prototype work in identifying the feasibility and identify the obstacles, both technical and management, to perform "Cloudbursting" among private clouds located at three different centers. We will demonstrate the use of private cloud computing infrastructure at the Jet Propulsion Laboratory, Langley Research Center, and Ames Research Center to provide elastic computation to each other to perform parallel Earth Science data imaging. We leverage elastic load balancing and auto-scaling features at each data center so that each location can independently define how many resources to allocate to a particular job that was "bursted" from another data center and demonstrate that compute capacity scales up and down with the job. We will also discuss future work in the area, which could include the use of cloud infrastructure from different cloud framework providers as well as other cloud service providers.
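The bursting policy described above, where each site independently caps what it will lend to a job from another center, can be sketched as a greedy allocator. The site names and capacities are illustrative placeholders, not the actual prototype's configuration.

```python
# Toy sketch of a cloudbursting allocation: spread a job's tasks across
# sites with spare capacity, each site capping its own contribution.
# Site names and slot counts are hypothetical.
def burst(tasks: int, spare_capacity: dict) -> dict:
    """Greedily assign tasks to sites with spare slots; leftover stays queued."""
    plan = {}
    for site, slots in spare_capacity.items():
        take = min(tasks, slots)
        if take:
            plan[site] = take
            tasks -= take
    plan["queued"] = tasks
    return plan

print(burst(10, {"JPL": 4, "LaRC": 3, "ARC": 2}))
# {'JPL': 4, 'LaRC': 3, 'ARC': 2, 'queued': 1}
```

Auto-scaling then grows or shrinks each site's slot count as load changes, so the "queued" remainder drains as capacity frees up.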
National resource for computation in chemistry, phase I: evaluation and recommendations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1980-05-01
The National Resource for Computation in Chemistry (NRCC) was inaugurated at the Lawrence Berkeley Laboratory (LBL) in October 1977, with joint funding by the Department of Energy (DOE) and the National Science Foundation (NSF). The chief activities of the NRCC include: assembling a staff of eight postdoctoral computational chemists, establishing an office complex at LBL, purchasing a midi-computer and graphics display system, administering grants of computer time, conducting nine workshops in selected areas of computational chemistry, compiling a library of computer programs with adaptations and improvements, initiating a software distribution system, providing user assistance and consultation on request. This report presents assessments and recommendations of an Ad Hoc Review Committee appointed by the DOE and NSF in January 1980. The recommendations are that NRCC should: (1) not fund grants for computing time or research but leave that to the relevant agencies, (2) continue the Workshop Program in a mode similar to Phase I, (3) abandon in-house program development and establish instead a competitive external postdoctoral program in chemistry software development administered by the Policy Board and Director, and (4) not attempt a software distribution system (leaving that function to the QCPE). Furthermore, (5) DOE should continue to make its computational facilities available to outside users (at normal cost rates) and should find some way to allow the chemical community to gain occasional access to a CRAY-level computer.
A Virtual Rock Physics Laboratory Through Visualized and Interactive Experiments
NASA Astrophysics Data System (ADS)
Vanorio, T.; Di Bonito, C.; Clark, A. C.
2014-12-01
As new scientific challenges demand more comprehensive and multidisciplinary investigations, laboratory experiments are not expected to become simpler and/or faster. Experimental investigation is an indispensable element of scientific inquiry and must play a central role in the way current and future generations of scientists make decisions. To turn the complexity of laboratory work (and that of rocks!) into dexterity, engagement, and expanded learning opportunities, we are building an interactive, virtual laboratory reproducing in form and function the Stanford Rock Physics Laboratory, at Stanford University. The objective is to combine lectures on laboratory techniques and an online repository of visualized experiments consisting of interactive, 3-D renderings of equipment used to measure properties central to the study of rock physics (e.g., how to saturate rocks, how to measure porosity, permeability, and elastic wave velocity). We use a game creation system together with 3-D computer graphics, and a narrative voice to guide the user through the different phases of the experimental protocol. The main advantage gained in employing computer graphics over video footage is that students can virtually open the instrument, single out its components, and assemble it. Most importantly, it helps describe the processes occurring within the rock. The latter cannot be tracked while simply recording the physical experiment, but computer animation can efficiently illustrate what happens inside rock samples (e.g., describing acoustic waves, and/or fluid flow through a porous rock under pressure within an opaque core-holder - Figure 1). The repository of visualized experiments will complement lectures on laboratory techniques and constitute an on-line course offered through the EdX platform at Stanford.
This will provide a virtual laboratory for anyone, anywhere to facilitate teaching/learning of introductory laboratory classes in Geophysics and expand the number of courses that can be offered for curricula in Earth Sciences. The primary goal is to open up a research laboratory such as the one available at Stanford to promising students worldwide who are currently left out of such educational resources.
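One of the quantities at the heart of the experiments being visualized is porosity, the pore fraction of a rock's bulk volume. A minimal sketch, with illustrative sample numbers rather than actual laboratory data:

```python
# Porosity from bulk and grain volume: phi = (bulk - grain) / bulk.
# The volumes below are hypothetical example values.
def porosity(bulk_volume_cc: float, grain_volume_cc: float) -> float:
    """Fraction of the bulk volume occupied by pore space."""
    return (bulk_volume_cc - grain_volume_cc) / bulk_volume_cc

phi = porosity(bulk_volume_cc=10.0, grain_volume_cc=7.5)
print(round(phi, 2))  # 0.25
```

In the physical experiment the grain volume would come from a pycnometer measurement; the animation can show that step inside the otherwise opaque apparatus.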
ERIC Educational Resources Information Center
Porter, Lon A., Jr.; Chapman, Cole A.; Alaniz, Jacob A.
2017-01-01
In this work, a versatile and user-friendly selection of stereolithography (STL) files and computer-aided design (CAD) models are shared to assist educators and students in the production of simple and inexpensive 3D printed filter fluorometer instruments. These devices are effective resources for supporting active learners in the exploration of…
Software for Planning Scientific Activities on Mars
NASA Technical Reports Server (NTRS)
Ai-Chang, Mitchell; Bresina, John; Jonsson, Ari; Hsu, Jennifer; Kanefsky, Bob; Morris, Paul; Rajan, Kanna; Yglesias, Jeffrey; Charest, Len; Maldague, Pierre
2003-01-01
Mixed-Initiative Activity Plan Generator (MAPGEN) is a ground-based computer program for planning and scheduling the scientific activities of instrumented exploratory robotic vehicles, within the limitations of available resources onboard the vehicle. MAPGEN is a combination of two prior software systems: (1) an activity-planning program, APGEN, developed at NASA's Jet Propulsion Laboratory and (2) the Europa planner/scheduler from NASA Ames Research Center. MAPGEN performs all of the following functions: Automatic generation of plans and schedules for scientific and engineering activities; Testing of hypotheses (or what-if analyses of various scenarios); Editing of plans; Computation and analysis of resources; and Enforcement and maintenance of constraints, including resolution of temporal and resource conflicts among planned activities. MAPGEN can be used in either of two modes: one in which the planner/scheduler is turned off and only the basic APGEN functionality is utilized, or one in which both component programs are used to obtain the full planning, scheduling, and constraint-maintenance functionality.
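The resource-conflict resolution described above can be illustrated with a toy greedy scheduler: place each activity at the earliest time where the total concurrent resource draw stays under a limit. This is a sketch of the general technique only, not MAPGEN's actual algorithm, and the activity names and numbers are hypothetical (it also assumes each activity fits within the limit on its own).

```python
# Greedy earliest-fit scheduling under a shared resource (e.g., power) cap.
# A placed activity occupies [start, start + duration) and draws `power`.
def schedule(activities, power_limit):
    """activities: list of (name, duration, power). Returns name -> start."""
    timeline = []   # placed activities as (start, end, power)
    plan = {}
    for name, duration, power in activities:
        t = 0
        while True:
            overlapping = [(s, e, p) for s, e, p in timeline
                           if s < t + duration and e > t]
            if sum(p for _, _, p in overlapping) + power <= power_limit:
                break
            t = min(e for _, e, _ in overlapping)  # retry after a conflict ends
        timeline.append((t, t + duration, power))
        plan[name] = t
    return plan

print(schedule([("drive", 2, 60), ("imaging", 2, 50), ("comm", 1, 40)], 100))
# {'drive': 0, 'imaging': 2, 'comm': 0}
```

Here "imaging" is pushed later because running it alongside "drive" would exceed the 100-unit cap, while the lighter "comm" activity fits concurrently with "drive".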
Teaching physiology and the World Wide Web: electrochemistry and electrophysiology on the Internet.
Dwyer, T M; Fleming, J; Randall, J E; Coleman, T G
1997-12-01
Students seek active learning experiences that can rapidly impart relevant information in the most convenient way possible. Computer-assisted education can now use the resources of the World Wide Web to convey the important characteristics of events as elemental as the physical properties of osmotically active particles in the cell and as complex as the nerve action potential or the integrative behavior of the intact organism. We have designed laboratory exercises that introduce first-year medical students to membrane and action potentials, as well as the more complex example of integrative physiology, using the dynamic properties of computer simulations. Two specific examples are presented. The first presents the physical laws that apply to osmotic, chemical, and electrical gradients, leading to the development of the concept of membrane potentials; this module concludes with the simulation of the ability of the sodium-potassium pump to establish chemical gradients and maintain cell volume. The second module simulates the action potential according to the Hodgkin-Huxley model, illustrating the concepts of threshold, inactivation, refractory period, and accommodation. Students can access these resources during the scheduled laboratories or on their own time via our Web site on the Internet (http://phys-main.umsmed.edu) by using the World Wide Web protocol. Accurate version control is possible because one valid, but easily edited, copy of the labs exists at the Web site. A common graphical interface is possible through the use of the Hypertext Markup Language. Platform independence is possible through the logical and arithmetic calculations inherent to graphical browsers and the JavaScript language. The initial success of this program indicates that medical education can be very effective both by the use of accurate simulations and by the existence of a universally accessible Internet resource.
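The first module's core relation, linking chemical gradients to membrane potential, is the Nernst equation, E = (RT/zF) ln([ion]out/[ion]in). A minimal sketch, using textbook-typical potassium concentrations for a mammalian cell as illustrative inputs:

```python
import math

R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol

def nernst(z: int, c_out: float, c_in: float, temp_k: float = 310.0) -> float:
    """Equilibrium potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * temp_k) / (z * F) * math.log(c_out / c_in)

# K+ at ~5 mM outside, ~140 mM inside (typical textbook values), 37 C
e_k = nernst(z=1, c_out=5.0, c_in=140.0)
print(round(e_k, 1))  # -89.0
```

The same function applied to Na+ gives a strongly positive potential, which is the gradient the simulated sodium-potassium pump maintains in the module.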
Computing through Scientific Abstractions in SysBioPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, George; Stephan, Eric G.; Gracio, Deborah K.
2004-10-13
Today, biologists and bioinformaticists have a tremendous amount of computational power at their disposal. With the availability of supercomputers, burgeoning scientific databases and digital libraries such as GenBank and PubMed, and pervasive computational environments such as the Grid, biologists have access to a wealth of computational capabilities and scientific data at hand. Yet, the rapid development of computational technologies has far exceeded the typical biologist’s ability to effectively apply the technology in their research. Computational sciences research and development efforts such as the Biology Workbench, BioSPICE (Biological Simulation Program for Intra-Cellular Evaluation), and BioCoRE (Biological Collaborative Research Environment) are important in connecting biologists and their scientific problems to computational infrastructures. On the Computational Cell Environment and Heuristic Entity-Relationship Building Environment projects at the Pacific Northwest National Laboratory, we are jointly developing a new breed of scientific problem solving environment called SysBioPSE that will allow biologists to access and apply computational resources in the scientific research context. In contrast to other computational science environments, SysBioPSE operates as an abstraction layer above a computational infrastructure. The goal of SysBioPSE is to allow biologists to apply computational resources in the context of the scientific problems they are addressing and the scientific perspectives from which they conduct their research. More specifically, SysBioPSE allows biologists to capture and represent scientific concepts and theories and experimental processes, and to link these views to scientific applications, data repositories, and computer systems.
Symmetrically private information retrieval based on blind quantum computing
NASA Astrophysics Data System (ADS)
Sun, Zhiwei; Yu, Jianping; Wang, Ping; Xu, Lingling
2015-05-01
Universal blind quantum computation (UBQC) is a new secure quantum computing protocol which allows a user Alice, who does not have any sophisticated quantum technology, to delegate her computing to a server Bob without leaking any privacy. Using the features of UBQC, we propose a protocol to achieve symmetrically private information retrieval, which allows a quantum-limited Alice to query an item from Bob, who holds a fully fledged quantum computer; meanwhile, the privacy of both parties is preserved. The security of our protocol is based on the assumption that a malicious Alice has no quantum computer, which avoids the impossibility proof of Lo. The honest Alice is almost classical and requires only minimal quantum resources to carry out the proposed protocol. Therefore, she does not need an expensive laboratory capable of maintaining the coherence of complicated quantum experimental setups.
Earth System Grid II (ESG): Turning Climate Model Datasets Into Community Resources
NASA Astrophysics Data System (ADS)
Williams, D.; Middleton, D.; Foster, I.; Nefedova, V.; Kesselman, C.; Chervenak, A.; Bharathi, S.; Drach, B.; Cinquini, L.; Brown, D.; Strand, G.; Fox, P.; Garcia, J.; Bernholdt, D.; Chanchio, K.; Pouchard, L.; Chen, M.; Shoshani, A.; Sim, A.
2003-12-01
High-resolution, long-duration simulations performed with advanced DOE SciDAC/NCAR climate models will produce tens of petabytes of output. To be useful, this output must be made available to global change impacts researchers nationwide, both at national laboratories and at universities, other research laboratories, and other institutions. To this end, we propose to create a new Earth System Grid, ESG-II - a virtual collaborative environment that links distributed centers, users, models, and data. ESG-II will provide scientists with virtual proximity to the distributed data and resources that they require to perform their research. The creation of this environment will significantly increase the scientific productivity of U.S. climate researchers by turning climate datasets into community resources. In creating ESG-II, we will integrate and extend a range of Grid and collaboratory technologies, including the DODS remote access protocols for environmental data, Globus Toolkit technologies for authentication, resource discovery, and resource access, and Data Grid technologies developed in other projects. We will develop new technologies for (1) creating and operating "filtering servers" capable of performing sophisticated analyses, and (2) delivering results to users. In so doing, we will simultaneously contribute to climate science and advance the state of the art in collaboratory technology. We expect our results to be useful to numerous other DOE projects. The three-year R&D program will be undertaken by a talented and experienced team of computer scientists at five laboratories (ANL, LBNL, LLNL, NCAR, ORNL) and one university (ISI), working in close collaboration with climate scientists at several sites.
NASA Astrophysics Data System (ADS)
Perez, G. L.; Larour, E. Y.; Halkides, D. J.; Cheng, D. L. C.
2015-12-01
The Virtual Ice Sheet Laboratory (VISL) is a Cryosphere outreach effort by scientists at the Jet Propulsion Laboratory (JPL) in Pasadena, CA, Earth and Space Research (ESR) in Seattle, WA, and the University of California at Irvine (UCI), with the goal of providing interactive lessons for K-12 and college level students, while conforming to STEM guidelines. At the core of VISL is the Ice Sheet System Model (ISSM), an open-source project developed jointly at JPL and UCI whose main purpose is to model the evolution of the polar ice caps in Greenland and Antarctica. By using ISSM, VISL students have access to state-of-the-art modeling software that is being used to conduct scientific research by users all over the world. However, providing this functionality is by no means simple. The modeling of ice sheets in response to sea and atmospheric temperatures, among many other possible parameters, requires significant computational resources. Furthermore, this service needs to be responsive and capable of handling burst requests produced by classrooms of students. Cloud computing providers represent a burgeoning industry. With major investments by tech giants like Amazon, Google and Microsoft, it has never been easier or more affordable to deploy computational elements on demand. This is exactly what VISL needs and ISSM is capable of. Moreover, this is a promising alternative to investing in expensive and rapidly depreciating hardware.
Research in remote sensing of agriculture, earth resources, and man's environment
NASA Technical Reports Server (NTRS)
Landgrebe, D. A.
1974-01-01
Research performed on NASA and USDA remote sensing projects is reviewed, including: (1) the 1971 Corn Blight Watch Experiment; (2) crop identification; (3) soil mapping; (4) land use inventories; (5) geologic mapping; and (6) forest and water resources data collection. The extent to which ERTS images and airborne data were used is indicated, along with computer implementation. A field and laboratory spectroradiometer system is described together with the LARSYS software system, both of which were widely used during the research. Abstracts are included of 160 technical reports published as a result of the work.
Integrating Information Technologies Into Large Organizations
NASA Technical Reports Server (NTRS)
Gottlich, Gretchen; Meyer, John M.; Nelson, Michael L.; Bianco, David J.
1997-01-01
NASA Langley Research Center's product is aerospace research information. To this end, Langley uses information technology tools in three distinct ways. First, information technology tools are used in the production of information via computation, analysis, data collection and reduction. Second, information technology tools assist in streamlining business processes, particularly those that are primarily communication based. By applying these information tools to administrative activities, Langley spends fewer resources on managing itself and can allocate more resources for research. Third, Langley uses information technology tools to disseminate its aerospace research information, resulting in faster turn around time from the laboratory to the end-customer.
e-Science and data management resources on the Web.
Gore, Sally A
2011-01-01
The way research is conducted has changed over time, from simple experiments to computer modeling and simulation, from individuals working in isolated laboratories to global networks of researchers collaborating on a single topic. Often, this new paradigm results in the generation of staggering amounts of data. The intensive use of data and the existence of networks of researchers characterize e-Science. The role of libraries and librarians in e-Science has been a topic of interest for some time now. This column looks at tools, resources, and projects that demonstrate successful collaborations between libraries and researchers in e-Science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, Kelly K.; Zavala-Araiza, Daniel
Here, we summarize an effort to develop a global oil and gas infrastructure (GOGI) taxonomy and geodatabase, using a combination of big data computing, custom search and data integration algorithms, and expert-driven spatio-temporal analytics to identify, access, and evaluate open oil and gas data resources and uncertainty trends worldwide. This approach leveraged custom National Energy Technology Laboratory (NETL) tools and capabilities, in collaboration with Environmental Defense Fund (EDF) and Carbon Limits subject matter expertise, to identify over 380 datasets and integrate more than 4.8 million features into the GOGI database. In addition to acquisition of open oil and gas infrastructure data, information was collected and analyzed to assess the spatial, temporal, and source quality of these resources, and to estimate their completeness relative to the top 40 hydrocarbon producing and consuming countries.
The Legnaro-Padova distributed Tier-2: challenges and results
NASA Astrophysics Data System (ADS)
Badoer, Simone; Biasotto, Massimo; Costa, Fulvia; Crescente, Alberto; Fantinel, Sergio; Ferrari, Roberto; Gulmini, Michele; Maron, Gaetano; Michelotto, Michele; Sgaravatto, Massimo; Toniolo, Nicola
2014-06-01
The Legnaro-Padova Tier-2 is a computing facility serving the ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and other virtual organizations of different disciplines, which can opportunistically harness idle resources when available. The unique characteristic of this Tier-2 is its topology: the computational resources are spread across two sites about 15 km apart, the INFN Legnaro National Laboratories and the INFN Padova unit, connected through a 10 Gbps network link (soon to be upgraded to 20 Gbps). Nevertheless, these resources are seamlessly integrated and are exposed as a single computing facility. Despite this intrinsic complexity, the Legnaro-Padova Tier-2 ranks among the best Grid sites for reliability and availability. The Tier-2 comprises about 190 worker nodes, providing about 26000 HS06 in total. These computing nodes are managed by the LSF local resource management system and are accessible through a Grid-based interface implemented via multiple CREAM CE front-ends. dCache, xrootd, and Lustre are the storage systems in use at the Tier-2: about 1.5 PB of disk space is available to users in total, through multiple access protocols. A 10 Gbps network link, planned to be doubled in the coming months, connects the Tier-2 to the WAN. This link is used for the LHC Open Network Environment (LHCONE) and for other general purpose traffic. In this paper we discuss the experiences at the Legnaro-Padova Tier-2: the problems that had to be addressed, the lessons learned, and the implementation choices. We also present the tools used for daily management operations. These include DOCET, a Java-based web tool designed, implemented, and maintained at the Legnaro-Padova Tier-2, and also deployed at other sites, such as the Italian LHC Tier-1. DOCET provides a uniform interface for managing all the information about the physical resources of a computing center. It is also used as a documentation repository available to the Tier-2 operations team. Finally, we discuss the foreseen developments of the existing infrastructure, in particular the evolution from a Grid-based resource towards a Cloud-based computing facility.
Navy Manpower Planning and Programming: Basis for Systems Examination
1974-10-01
[Garbled scan of an organization chart and distribution list; recoverable entries include: Research and Development, Chief of Naval Operations; Naval Material Command Headquarters; Compensation Branch; Manpower Programming Branch; Journal/Trade Talk Branch; Assistant for Computer Sciences; Systems Development Branch; Assistant Director, Life Sciences, Air Force Office of Scientific Research; Technical Library, Air Force Human Resources Laboratory, Lackland Air Force Base]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krasheninnikov, Sergei I.; Angus, Justin; Lee, Wonjae
The goal of the Edge Simulation Laboratory (ESL) multi-institutional project is to advance scientific understanding of the edge plasma region of magnetic fusion devices via a coordinated effort utilizing modern computing resources, advanced algorithms, and ongoing theoretical development. The UCSD team was involved in the development of the COGENT code for kinetic studies across a magnetic separatrix. This work included a kinetic treatment of electrons and multiple ion species (impurities) and accurate collision operators.
SECURITY MODELING FOR MARITIME PORT DEFENSE RESOURCE ALLOCATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.; Dunn, D.
2010-09-07
Redeployment of existing law enforcement resources and optimal use of geographic terrain are examined for countering the threat of a maritime-based small-vessel radiological or nuclear attack. The evaluation was based on modeling conducted by the Savannah River National Laboratory that involved the development of options for defensive resource allocation that can reduce the risk of a maritime-based radiological or nuclear threat. A diverse range of potential attack scenarios has been assessed. As a result of identifying vulnerable pathways, effective countermeasures can be deployed using current resources. The modeling involved the use of the Automated Vulnerability Evaluation for Risks of Terrorism (AVERT®) software to conduct computer-based simulation modeling. The models provided estimates of the probability of encountering an adversary based on allocated resources, including response boats, patrol boats, and helicopters, over various environmental conditions including day, night, rough seas, and various traffic flow rates.
Threaded cognition: an integrated theory of concurrent multitasking.
Salvucci, Dario D; Taatgen, Niels A
2008-01-01
The authors propose the idea of threaded cognition, an integrated theory of concurrent multitasking--that is, performing 2 or more tasks at once. Threaded cognition posits that streams of thought can be represented as threads of processing coordinated by a serial procedural resource and executed across other available resources (e.g., perceptual and motor resources). The theory specifies a parsimonious mechanism that allows for concurrent execution, resource acquisition, and resolution of resource conflicts, without the need for specialized executive processes. By instantiating this mechanism as a computational model, threaded cognition provides explicit predictions of how multitasking behavior can result in interference, or lack thereof, for a given set of tasks. The authors illustrate the theory in model simulations of several representative domains ranging from simple laboratory tasks such as dual-choice tasks to complex real-world domains such as driving and driver distraction. (c) 2008 APA, all rights reserved
HOMER Economic Models - US Navy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Jason William; Myers, Kurt Steven
This letter report has been prepared by Idaho National Laboratory for US Navy NAVFAC EXWC to support testing of pre-commercial SIREN (Simulated Integration of Renewable Energy Networks) computer software models. In the logistics mode, SIREN software simulates the combination of renewable power sources (solar arrays, wind turbines, and energy storage systems) in supplying an electrical demand. NAVFAC EXWC will create SIREN software logistics models of existing or planned renewable energy projects at five Navy locations (San Nicolas Island, AUTEC, New London, & China Lake), and INL will deliver additional HOMER computer models for comparative analysis. In the transient mode, SIREN simulates the short time-scale variation of electrical parameters when a power outage or other destabilizing event occurs. In the HOMER model, a variety of inputs are entered, such as location coordinates, generators, PV arrays, wind turbines, batteries, converters, grid costs/usage, solar resources, wind resources, temperatures, fuels, and electric loads. HOMER's optimization and sensitivity analysis algorithms then evaluate the economic and technical feasibility of these technology options and account for variations in technology costs, electric load, and energy resource availability. The Navy can then use HOMER's optimization and sensitivity results to compare to those of the SIREN model. The U.S. Department of Energy (DOE) Idaho National Laboratory (INL) possesses unique expertise and experience in the software, hardware, and systems design for the integration of renewable energy into the electrical grid. NAVFAC EXWC will draw upon this expertise to complete mission requirements.
Choice: 36 band feature selection software with applications to multispectral pattern recognition
NASA Technical Reports Server (NTRS)
Jones, W. C.
1973-01-01
Feature selection software was developed at the Earth Resources Laboratory that is capable of inputting up to 36 channels and selecting channel subsets according to several criteria based on divergence. One of the criteria used is compatible with the table look-up classifier requirements. The software indicates which channel subset best separates (based on average divergence) each class from all other classes. The software employs an exhaustive search technique, and computer time is not prohibitive. A typical task, selecting the best 4 of 22 channels for 12 classes, takes 9 minutes on a Univac 1108 computer.
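The selection procedure described, an exhaustive search for the channel subset that maximizes average pairwise divergence between classes, can be sketched as follows under a Gaussian class-statistics assumption. The function and variable names here are illustrative, not those of the original Earth Resources Laboratory software:

```python
import itertools
import numpy as np

def divergence(m1, s1, m2, s2):
    """Symmetric (Kullback) divergence between two Gaussian class densities
    with means m1, m2 and covariances s1, s2."""
    s1i, s2i = np.linalg.inv(s1), np.linalg.inv(s2)
    dm = (m1 - m2).reshape(-1, 1)
    return (0.5 * np.trace((s1 - s2) @ (s2i - s1i))
            + 0.5 * np.trace((s1i + s2i) @ (dm @ dm.T)))

def best_channels(means, covs, k):
    """Exhaustively score every k-channel subset by the average pairwise
    divergence over all class pairs; return the best subset and its score."""
    n_chan = len(means[0])
    pairs = list(itertools.combinations(range(len(means)), 2))
    best, best_score = None, -np.inf
    for subset in itertools.combinations(range(n_chan), k):
        sel = list(subset)
        ix = np.ix_(sel, sel)  # restrict covariances to the chosen channels
        score = np.mean([divergence(means[i][sel], covs[i][ix],
                                    means[j][sel], covs[j][ix])
                         for i, j in pairs])
        if score > best_score:
            best, best_score = subset, score
    return best, best_score
```

The cost is one divergence evaluation per class pair for each of the C(n, k) subsets; the quoted task of picking the best 4 of 22 channels scores C(22, 4) = 7315 subsets, which explains why exhaustive search was feasible even on 1970s hardware.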
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharya, Papri; Prokopchuk, Demyan E.; Mock, Michael T.
2017-03-01
This review examines the synthesis and acid reactivity of transition metal dinitrogen complexes bearing diphosphine ligands containing pendant amine groups in the second coordination sphere. This manuscript is a review of the work performed in the Center for Molecular Electrocatalysis. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences. EPR studies on Fe were performed using EMSL, a national scientific user facility sponsored by the DOE's Office of Biological and Environmental Research and located at PNNL. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. DOE.
Experiences in Automated Calibration of a Nickel Equation of State
NASA Astrophysics Data System (ADS)
Carpenter, John H.
2017-06-01
Wide availability of large computers has led to increasing incorporation of computational data, such as from density functional theory molecular dynamics, in the development of equation of state (EOS) models. Once a grid of computational data is available, it is usually left to an expert modeler to model the EOS using traditional techniques. One can envision the possibility of using the increasing computing resources to perform black-box calibration of EOS models, with the goal of reducing the workload on the modeler or enabling non-experts to generate good EOSs with such a tool. Progress towards building such a black-box calibration tool will be explored in the context of developing a new, wide-range EOS for nickel. While some details of the model and data will be shared, the focus will be on what was learned by automatically calibrating the model in a black-box method. Model choices and ensuring physicality will also be discussed. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Recent Performance Results of VPIC on Trinity
NASA Astrophysics Data System (ADS)
Nystrom, W. D.; Bergen, B.; Bird, R. F.; Bowers, K. J.; Daughton, W. S.; Guo, F.; Le, A.; Li, H.; Nam, H.; Pang, X.; Stark, D. J.; Rust, W. N., III; Yin, L.; Albright, B. J.
2017-10-01
Trinity is a new DOE compute resource now in production at Los Alamos National Laboratory. Trinity has several new and unique features including two compute partitions, one with dual socket Intel Haswell Xeon compute nodes and one with Intel Knights Landing (KNL) Xeon Phi compute nodes, use of on package high bandwidth memory (HBM) for KNL nodes, ability to configure KNL nodes with respect to HBM model and on die network topology in a variety of operational modes at run time, and use of solid state storage via burst buffer technology to reduce time required to perform I/O. An effort is in progress to optimize VPIC on Trinity by taking advantage of these new architectural features. Results of work will be presented on performance of VPIC on Haswell and KNL partitions for single node runs and runs at scale. Results include use of burst buffers at scale to optimize I/O, comparison of strategies for using MPI and threads, performance benefits using HBM and effectiveness of using intrinsics for vectorization. Work performed under auspices of U.S. Dept. of Energy by Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by LANL LDRD program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shipman, Galen M.
These are the slides for a presentation on programming models in HPC, at the Los Alamos National Laboratory's Parallel Computing Summer School. The following topics are covered: Flynn's taxonomy of computer architectures; single instruction single data; single instruction multiple data; multiple instruction multiple data; address space organization; definition of Trinity (Intel Xeon Phi is a MIMD architecture); single program multiple data; multiple program multiple data; ExMatEx workflow overview; definition of a programming model, programming languages, runtime systems; programming models and environments; MPI (Message Passing Interface); OpenMP; Kokkos (Performance Portable Thread-Parallel Programming Model); Kokkos abstractions, patterns, policies, and spaces; RAJA, a systematic approach to node-level portability and tuning; overview of the Legion Programming Model; mapping tasks and data to hardware resources; interoperability: supporting task-level models; Legion S3D execution and performance details; workflow, integration of external resources into the programming model.
Development of Fuel Shuffling Module for PHISICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allan Mabe; Andrea Alfonsi; Cristian Rabiti
2013-06-01
The PHISICS (Parallel and Highly Innovative Simulation for the INL Code System) [4] code toolkit has been under development at the Idaho National Laboratory. This package is intended to provide a modern analysis tool for reactor physics investigation. It is designed to maximize accuracy for a given availability of computational resources and to give state-of-the-art tools to the modern nuclear engineer. This is achieved by implementing several different algorithms and meshing approaches among which the user can choose, in order to balance computational resources against accuracy needs. The software is completely modular in order to simplify the independent development of modules by different teams and future maintenance. The package is coupled with the thermal-hydraulic code RELAP5-3D [3]. In the following, the structure of the different PHISICS modules is briefly recalled, focusing on the new shuffling module (SHUFFLE), the subject of this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, C. V.; Mendez, A. J.
This was a collaborative effort between Lawrence Livermore National Security, LLC (formerly The Regents of the University of California)/Lawrence Livermore National Laboratory (LLNL) and Mendez R & D Associates (MRDA) to develop and demonstrate a reconfigurable and cost effective design for optical code division multiplexing (O-CDM) with high spectral efficiency and throughput, as applied to the field of distributed computing, including multiple accessing (sharing of communication resources) and bidirectional data distribution in fiber-to-the-premise (FTTx) networks.
R&D100: Lightweight Distributed Metric Service
Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike
2018-06-12
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
R&D100: Lightweight Distributed Metric Service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentile, Ann; Brandt, Jim; Tucker, Tom
2015-11-19
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
PandASoft: Open Source Instructional Laboratory Administration Software
NASA Astrophysics Data System (ADS)
Gay, P. L.; Braasch, P.; Synkova, Y. N.
2004-12-01
PandASoft (Physics and Astronomy Software) is software for organizing and archiving a department's teaching resources and materials. An easy-to-use, secure interface allows faculty and staff to explore equipment inventories, see what laboratory experiments are available, find handouts, and track what has been used in different classes in the past. Divided into five sections (classes, equipment, laboratories, links, and media), its database cross-links materials, allowing users to see which labs are used with which classes, which media and equipment are used with which labs, or simply what equipment is lurking in which room. Written in PHP and MySQL, this software can be installed on any UNIX/Linux platform, including Macintosh OS X. It is designed to allow users to easily customize the headers, footers, and colors to blend with existing sites - no programming experience required. While initial data input is labor intensive, the system saves time later by allowing users to quickly answer questions about what is in inventory, where it is located, how many are in stock, and where online they can learn more. It also provides a central location for storing PDFs of handouts and links to applets and useful sites at other universities. PandASoft comes with over 100 links to online resources pre-installed. We would like to thank Dr. Wolfgang Rueckner and the Harvard University Science Center for providing computers and resources for this project.
Computing Properties of Hadrons, Nuclei and Nuclear Matter from Quantum Chromodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, Martin J.
This project was part of a coordinated software development effort which the nuclear physics lattice QCD community pursues in order to ensure that lattice calculations can make optimal use of present and forthcoming leadership-class and dedicated hardware, including that of the national laboratories, and to prepare for the exploitation of future computational resources in the exascale era. The UW team improved and extended software libraries used in lattice QCD calculations related to multi-nucleon systems, enhanced production running codes related to load balancing multi-nucleon production on large-scale computing platforms, developed SQLite (addressable database) interfaces to efficiently archive and analyze multi-nucleon data, and developed a Mathematica interface for the SQLite databases.
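An SQLite archive of the kind described, in which per-configuration measurements are stored in an addressable database and aggregated for analysis, might be sketched like this (the table layout and observable names are hypothetical, not taken from the UW software):

```python
import sqlite3

# In-memory database standing in for an on-disk archive file.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE measurements (
                    ensemble   TEXT,     -- gauge-field ensemble label
                    config_id  INTEGER,  -- configuration number
                    observable TEXT,     -- e.g. a correlator at one timeslice
                    value      REAL)""")

rows = [("ens_a", 100, "NN_corr_t8", 1.23e-5),
        ("ens_a", 101, "NN_corr_t8", 1.31e-5),
        ("ens_a", 100, "NN_corr_t9", 9.70e-6)]
conn.executemany("INSERT INTO measurements VALUES (?, ?, ?, ?)", rows)

# Ensemble average of one observable, computed inside the database.
(avg,) = conn.execute(
    "SELECT AVG(value) FROM measurements WHERE observable = ?",
    ("NN_corr_t8",)).fetchone()
```

Because the archive is a single addressable file, analysis tools in other environments (such as the Mathematica interface mentioned above) can query the same data without any custom binary readers.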
Psychiatrists’ Comfort Using Computers and Other Electronic Devices in Clinical Practice
Fochtmann, Laura J.; Clarke, Diana E.; Barber, Keila; Hong, Seung-Hee; Yager, Joel; Mościcki, Eve K.; Plovnick, Robert M.
2015-01-01
This report highlights findings from the Study of Psychiatrists’ Use of Informational Resources in Clinical Practice, a cross-sectional Web- and paper-based survey that examined psychiatrists’ comfort using computers and other electronic devices in clinical practice. One-thousand psychiatrists were randomly selected from the American Medical Association Physician Masterfile and asked to complete the survey between May and August, 2012. A total of 152 eligible psychiatrists completed the questionnaire (response rate 22.2 %). The majority of psychiatrists reported comfort using computers for educational and personal purposes. However, 26 % of psychiatrists reported not using or not being comfortable using computers for clinical functions. Psychiatrists under age 50 were more likely to report comfort using computers for all purposes than their older counterparts. Clinical tasks for which computers were reportedly used comfortably, specifically by psychiatrists younger than 50, included documenting clinical encounters, prescribing, ordering laboratory tests, accessing read-only patient information (e.g., test results), conducting internet searches for general clinical information, accessing online patient educational materials, and communicating with patients or other clinicians. Psychiatrists generally reported comfort using computers for personal and educational purposes. However, use of computers in clinical care was less common, particularly among psychiatrists 50 and older. Information and educational resources need to be available in a variety of accessible, user-friendly, computer and non-computer-based formats, to support use across all ages. Moreover, ongoing training and technical assistance with use of electronic and mobile device technologies in clinical practice is needed. Research on barriers to clinical use of computers is warranted. PMID:26667248
Psychiatrists' Comfort Using Computers and Other Electronic Devices in Clinical Practice.
Duffy, Farifteh F; Fochtmann, Laura J; Clarke, Diana E; Barber, Keila; Hong, Seung-Hee; Yager, Joel; Mościcki, Eve K; Plovnick, Robert M
2016-09-01
This report highlights findings from the Study of Psychiatrists' Use of Informational Resources in Clinical Practice, a cross-sectional Web- and paper-based survey that examined psychiatrists' comfort using computers and other electronic devices in clinical practice. One-thousand psychiatrists were randomly selected from the American Medical Association Physician Masterfile and asked to complete the survey between May and August, 2012. A total of 152 eligible psychiatrists completed the questionnaire (response rate 22.2 %). The majority of psychiatrists reported comfort using computers for educational and personal purposes. However, 26 % of psychiatrists reported not using or not being comfortable using computers for clinical functions. Psychiatrists under age 50 were more likely to report comfort using computers for all purposes than their older counterparts. Clinical tasks for which computers were reportedly used comfortably, specifically by psychiatrists younger than 50, included documenting clinical encounters, prescribing, ordering laboratory tests, accessing read-only patient information (e.g., test results), conducting internet searches for general clinical information, accessing online patient educational materials, and communicating with patients or other clinicians. Psychiatrists generally reported comfort using computers for personal and educational purposes. However, use of computers in clinical care was less common, particularly among psychiatrists 50 and older. Information and educational resources need to be available in a variety of accessible, user-friendly, computer and non-computer-based formats, to support use across all ages. Moreover, ongoing training and technical assistance with use of electronic and mobile device technologies in clinical practice is needed. Research on barriers to clinical use of computers is warranted.
FY04 Engineering Technology Reports Laboratory Directed Research and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharpe, R M
2005-01-27
This report summarizes the science and technology research and development efforts in Lawrence Livermore National Laboratory's Engineering Directorate for FY2004, and exemplifies Engineering's more than 50-year history of developing the technologies needed to support the Laboratory's missions. Engineering has been a partner in every major program and project at the Laboratory throughout its existence and has prepared for this role with a skilled workforce and the technical resources developed through venues like the Laboratory Directed Research and Development Program (LDRD). This accomplishment is well summarized by Engineering's mission: "Enable program success today and ensure the Laboratory's vitality tomorrow." Engineering's investment in technologies is carried out through two programs, the "Tech Base" program and the LDRD program. LDRD is the vehicle for creating those technologies and competencies that are cutting edge. These require a significant level of research or contain some unknown that needs to be fully understood. Tech Base is used to apply technologies to a Laboratory need. The term commonly used for Tech Base projects is "reduction to practice." Therefore, the LDRD report covered here has a strong research emphasis. The areas presented all fall into those needed to accomplish our mission. For FY2004, Engineering's LDRD projects were focused on mesoscale target fabrication and characterization, development of engineering computational capability, material studies and modeling, remote sensing and communications, and microtechnology and nanotechnology for national security applications. Engineering's five Centers, in partnership with the Division Leaders and Department Heads, are responsible for guiding the long-term science and technology investments for the Directorate.
The Centers represent technologies that have been identified as critical for the present and future work of the Laboratory, and are chartered to develop their respective areas. Their LDRD projects are the key resources to attain this competency, and, as such, nearly all of Engineering's portfolio falls under one of the five Centers. The Centers and their Directors are: (1) Center for Computational Engineering: Robert M. Sharpe; (2) Center for Microtechnology and Nanotechnology: Raymond P. Mariella, Jr.; (3) Center for Nondestructive Characterization: Harry E. Martz, Jr.; (4) Center for Precision Engineering: Keith Carlisle; and (5) Center for Complex Distributed Systems: Gregory J. Suski, Acting Director.
NASA Technical Reports Server (NTRS)
Joyce, A. T.
1974-01-01
Significant progress has been made in the classification of surface conditions (land uses) with computer-implemented techniques based on the use of ERTS digital data and pattern recognition software. The supervised technique presently used at the NASA Earth Resources Laboratory is based on maximum likelihood ratioing with a digital table look-up approach to classification. After classification, colors are assigned to the various surface conditions (land uses) classified, and the color-coded classification is film recorded on either positive or negative 9 1/2 in. film at the scale desired. Prints of the film strips are then mosaicked and photographed to produce a land use map in the format desired. Computer extraction of statistical information is performed to show the extent of each surface condition (land use) within any given land unit that can be identified in the image. Evaluations of the product indicate that classification accuracy is well within the limits for use by land resource managers and administrators. Classifications performed with digital data acquired during different seasons indicate that the combination of two or more classifications offers even better accuracy.
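The supervised maximum likelihood approach described in this abstract can be illustrated with a short sketch. This is not the NASA Earth Resources Laboratory code; the band values, class names, and training statistics below are hypothetical, and each class is simply modeled as a multivariate Gaussian fit to training pixels:

```python
import numpy as np

def train(classes):
    """Fit per-class Gaussian statistics; classes maps name -> (n_pixels, n_bands) array."""
    stats = {}
    for name, X in classes.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        stats[name] = (mu, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
    return stats

def classify(pixels, stats):
    """Assign each pixel in an (n, n_bands) array to the class of highest log-likelihood."""
    names = list(stats)
    scores = []
    for name in names:
        mu, icov, logdet = stats[name]
        d = pixels - mu
        # Gaussian log-density up to a constant shared by all classes
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, icov, d) + logdet))
    return [names[i] for i in np.argmax(scores, axis=0)]

# hypothetical two-band training pixels for two surface classes
rng = np.random.default_rng(0)
training = {
    'water':  rng.normal([10.0, 40.0], 2.0, size=(100, 2)),
    'forest': rng.normal([30.0, 80.0], 2.0, size=(100, 2)),
}
stats = train(training)
print(classify(np.array([[11.0, 41.0], [29.0, 78.0]]), stats))  # ['water', 'forest']
```

A table look-up variant, as the abstract describes, would precompute the winning class for each quantized band combination rather than evaluating the densities per pixel.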
Low cost, high performance processing of single particle cryo-electron microscopy data in the cloud.
Cianfrocco, Michael A; Leschziner, Andres E
2015-05-08
The advent of a new generation of electron microscopes and direct electron detectors has realized the potential of single particle cryo-electron microscopy (cryo-EM) as a technique to generate high-resolution structures. Calculating these structures requires high performance computing clusters, a resource that may be limiting to many likely cryo-EM users. To address this limitation and facilitate the spread of cryo-EM, we developed a publicly available 'off-the-shelf' computing environment on Amazon's elastic cloud computing infrastructure. This environment provides users with single particle cryo-EM software packages and the ability to create computing clusters with 16-480+ CPUs. We tested our computing environment using a publicly available 80S yeast ribosome dataset and estimate that laboratories could determine high-resolution cryo-EM structures for $50 to $1500 per structure within a timeframe comparable to local clusters. Our analysis shows that Amazon's cloud computing environment may offer a viable computing environment for cryo-EM.
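The per-structure cost range quoted above reduces to simple arithmetic over instance count, hourly price, and run time. A back-of-envelope sketch (the rate and cluster size below are hypothetical, not Amazon's actual pricing):

```python
# Back-of-envelope cloud cost model; the instance count, hourly rate, and
# run time below are hypothetical, not Amazon's actual pricing.
def cluster_cost(n_instances, price_per_hour, hours):
    """Total cost of renting a homogeneous cluster for a fixed run."""
    return n_instances * price_per_hour * hours

# e.g. 16 instances at a hypothetical $0.50/hr for a 60-hour refinement
print(cluster_cost(16, 0.50, 60))  # 480.0
```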
Instructional computing in space physics moves ahead
NASA Astrophysics Data System (ADS)
Russell, C. T.; Omidi, N.
As the number of spacecraft stationed in the Earth's magnetosphere grows exponentially and society becomes more technologically sophisticated and dependent on these space-based resources, both the importance of space physics and the need to train people in this field will increase. Space physics is a very difficult subject for students to master. Both mechanical and electromagnetic forces are important. The treatment of problems can be very mathematical, and the scale sizes of phenomena are usually such that laboratory studies become impossible, and experimentation, when possible at all, must be carried out in deep space. Fortunately, computers have evolved to the point that they are able to greatly facilitate instruction in space physics.
Systems Engineering Building Advances Power Grid Research
Virden, Jud; Huang, Henry; Skare, Paul; Dagle, Jeff; Imhoff, Carl; Stoustrup, Jakob; Melton, Ron; Stiles, Dennis; Pratt, Rob
2018-01-16
Researchers and industry are now better equipped to tackle the nation's most pressing energy challenges through PNNL's new Systems Engineering Building, including challenges in grid modernization, buildings efficiency and renewable energy integration. This lab links real-time grid data, software platforms, specialized laboratories and advanced computing resources for the design and demonstration of new tools to modernize the grid and increase buildings energy efficiency.
Local Aqueous Solvation Structure Around Ca2+ During Ca2+-Cl- Pair Formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baer, Marcel D.; Mundy, Christopher J.
2016-03-03
The molecular details of single-ion solvation around Ca2+ and ion-pairing of Ca2+-Cl- are investigated using ab initio molecular dynamics. The use of empirical dispersion corrections to the BLYP functional is investigated by comparison to experimentally available extended X-ray absorption fine structure (EXAFS) measurements, which probe the first solvation shell in great detail. Besides finding differences in the free energy for both ion-pairing and the coordination number of ion solvation between the quantum and classical descriptions of interaction, important differences were found between dispersion-corrected and uncorrected density functional theory (DFT). Specifically, we show significantly different free-energy landscapes for both the coordination number of Ca2+ and its ion-pairing with Cl- depending on the DFT simulation protocol. Our findings produce a self-consistent treatment of the short-range solvent response to the ion and the intermediate- to long-range collective response of the electrostatics of the ion-ion interaction, yielding a detailed picture of ion-pairing that is consistent with experiment. MDB is supported by the MS3 (Materials Synthesis and Simulation Across Scales) Initiative at Pacific Northwest National Laboratory. This work was conducted under the Laboratory Directed Research and Development Program at PNNL, a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy. CJM acknowledges support from the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Additional computing resources were generously allocated by PNNL's Institutional Computing program. The authors thank Prof. Tom Beck for discussions regarding QCT, and Drs. Greg Schenter and Shawn Kathmann for insightful comments.
NASA Astrophysics Data System (ADS)
Bader, D. C.
2015-12-01
The Accelerated Climate Modeling for Energy (ACME) Project is concluding its first year. Supported by the Office of Science in the U.S. Department of Energy (DOE), its vision is to be "an ongoing, state-of-the-science Earth system modeling, simulation and prediction project that optimizes the use of DOE laboratory resources to meet the science needs of the nation and the mission needs of DOE." Included in the "laboratory resources" is a large investment in computational, network and information technologies that will be utilized both to build better and more accurate climate models and to broadly disseminate the data they generate. Current model diagnostic analysis and data dissemination technologies will not scale to the size of the simulations and the complexity of the models envisioned by ACME and other top-tier international modeling centers. In this talk, the ACME Workflow component's plans to meet these future needs will be described and early implementation examples will be highlighted.
Liu, Gang; Neelamegham, Sriram
2015-01-01
The glycome constitutes the entire complement of free carbohydrates and glycoconjugates expressed on whole cells or tissues. ‘Systems Glycobiology’ is an emerging discipline that aims to quantitatively describe and analyse the glycome. Here, instead of developing a detailed understanding of single biochemical processes, a combination of computational and experimental tools are used to seek an integrated or ‘systems-level’ view. This can explain how multiple biochemical reactions and transport processes interact with each other to control glycome biosynthesis and function. Computational methods in this field commonly build in silico reaction network models to describe experimental data derived from structural studies that measure cell-surface glycan distribution. While considerable progress has been made, several challenges remain due to the complex and heterogeneous nature of this post-translational modification. First, for the in silico models to be standardized and shared among laboratories, it is necessary to integrate glycan structure information and glycosylation-related enzyme definitions into the mathematical models. Second, as glycoinformatics resources grow, it would be attractive to utilize ‘Big Data’ stored in these repositories for model construction and validation. Third, while the technology for profiling the glycome at the whole-cell level has been standardized, there is a need to integrate mass spectrometry derived site-specific glycosylation data into the models. The current review discusses progress that is being made to resolve the above bottlenecks. The focus is on how computational models can bridge the gap between ‘data’ generated in wet-laboratory studies with ‘knowledge’ that can enhance our understanding of the glycome. PMID:25871730
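As a toy illustration of the in silico reaction network models discussed above (the network topology, species names, and rate constants are hypothetical, not drawn from any glycoinformatics resource), a two-step glycan extension pathway can be simulated with mass-action kinetics:

```python
import numpy as np

def simulate(k1=0.5, k2=0.2, dt=0.01, steps=2000):
    """Forward-Euler integration of G0 -(k1)-> G1 -(k2)-> G2 mass-action kinetics."""
    g = np.array([1.0, 0.0, 0.0])   # concentrations of glycoforms G0, G1, G2
    for _ in range(steps):
        r1 = k1 * g[0]              # enzyme 1 extends G0 to G1
        r2 = k2 * g[1]              # enzyme 2 extends G1 to G2
        g = g + dt * np.array([-r1, r1 - r2, r2])
    return g

g = simulate()
# total glycan mass is conserved; by t = 20 most material has reached G2
print(g.round(3))
```

Real systems-glycobiology models couple hundreds of such reactions with transport between Golgi compartments, but the same build-a-rate-vector, integrate, compare-to-measured-glycan-distributions loop underlies them.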
Computerized provider order entry in the clinical laboratory
Baron, Jason M.; Dighe, Anand S.
2011-01-01
Clinicians have traditionally ordered laboratory tests using paper-based orders and requisitions. However, paper orders are becoming increasingly incompatible with the complexities, challenges, and resource constraints of our modern healthcare systems and are being replaced by electronic order entry systems. Electronic systems that allow direct provider input of diagnostic testing or medication orders into a computer system are known as Computerized Provider Order Entry (CPOE) systems. Adoption of laboratory CPOE systems may offer institutions many benefits, including reduced test turnaround time, improved test utilization, and better adherence to practice guidelines. In this review, we outline the functionality of various CPOE implementations, review the reported benefits, and discuss strategies for using CPOE to improve the test ordering process. Further, we discuss barriers to the implementation of CPOE systems that have prevented their more widespread adoption. PMID:21886891
NASA Astrophysics Data System (ADS)
Cox, S. J.; Wyborn, L. A.; Fraser, R.; Rankine, T.; Woodcock, R.; Vote, J.; Evans, B.
2012-12-01
The Virtual Geophysics Laboratory (VGL) is a web portal that provides geoscientists with an integrated online environment that: seamlessly accesses geophysical and geoscience data services from the AuScope national geoscience information infrastructure; loosely couples these data to a variety of geoscience software tools; and provides large-scale processing facilities via cloud computing. VGL is a collaboration between CSIRO, Geoscience Australia, National Computational Infrastructure, Monash University, Australian National University and the University of Queensland. VGL provides a distributed system whereby a user can enter an online virtual laboratory to seamlessly connect to OGC web services for geoscience data. The data are supplied in open formats using international standards like GeoSciML. A VGL user works through a web mapping interface to discover the data sources and define a subset with spatial and attribute filters. Once the data are selected, the user is not required to download them. VGL collates the service query information for later in the processing workflow, where it is staged directly to the computing facilities. The combination of deferred data download and access to cloud computing enables VGL users to access their data at higher resolutions and to undertake larger-scale inversions and more complex models and simulations than their own local computing facilities might allow. Inside the Virtual Geophysics Laboratory, the user has access to a library of existing models, complete with exemplar workflows for specific scientific problems based on those models. For example, the user can load a geological model published by Geoscience Australia, apply a basic deformation workflow provided by a CSIRO scientist, and have it run in a scientific code from Monash. Finally, the user can publish these results to share with a colleague or cite in a paper.
This opens new opportunities for access and collaboration, as all the resources (models, code, data, processing) are shared in the one virtual laboratory. VGL provides end users with an intuitive, user-centered interface that leverages cloud storage and cloud and cluster processing from both the research communities and commercial suppliers (e.g. Amazon). As the underlying data and information services are agnostic of the scientific domain, they can support many other data types. This fundamental characteristic results in a highly reusable virtual laboratory infrastructure that could also be used for domains such as natural hazards, satellite processing, soil geochemistry, climate modeling, and agricultural crop modeling.
Hunt, Sevgin; Cimino, James J.; Koziol, Deloris E.
2013-01-01
Objective: The research studied whether a clinician's preference for online health knowledge resources varied with the use of two applications that were designed for information retrieval in an academic hospital setting. Methods: The researchers analyzed a year's worth of computer log files to study differences in the ways that four clinician groups (attending physicians, housestaff physicians, nurse practitioners, and nurses) sought information using two types of information retrieval applications (health resource links or Infobutton icons) across nine resources while they reviewed patients' laboratory results. Results: From a set of 14,979 observations, the authors found statistically significant differences among the 4 clinician groups for accessing resources using the health resources application (P<0.001) but not for the Infobuttons application (P = 0.31). For the health resources application, the preferences of the 4 clinical groups varied according to the specific resources examined (all P≤0.02). Conclusion: The information-seeking behavior of clinicians may vary in relation to their role and the way in which the information is presented. Studying these behaviors can provide valuable insights to those tasked with maintaining information retrieval systems' links to appropriate online knowledge resources. PMID:23405044
Public census data on CD-ROM at Lawrence Berkeley Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merrill, D.W.
The Comprehensive Epidemiologic Data Resource (CEDR) and Populations at Risk to Environmental Pollution (PAREP) projects, of the Information and Computing Sciences Division (ICSD) at Lawrence Berkeley Laboratory (LBL), are using public socio-economic and geographic data files which are available to CEDR and PAREP collaborators via LBL's computing network. At this time 70 CD-ROM diskettes (approximately 36 gigabytes) are on line via the Unix file server cedrcd.lbl.gov. Most of the files are from the US Bureau of the Census, and most pertain to the 1990 Census of Population and Housing. All the CD-ROM diskettes contain documentation in the form of ASCII text files. Printed documentation for most files is available for inspection at University of California Data and Technical Assistance (UC DATA), or the UC Documents Library. Many of the CD-ROM diskettes distributed by the Census Bureau contain software for PC-compatible computers, for easily accessing the data. Shared access to the data is maintained through a collaboration among the CEDR and PAREP projects at LBL, UC DATA, and the UC Documents Library. Via the Sun Network File System (NFS), these data can be exported to Internet computers for direct access by the user's application program(s).
Measurement-based quantum teleportation on finite AKLT chains
NASA Astrophysics Data System (ADS)
Fujii, Akihiko; Feder, David
In the measurement-based model of quantum computation, universal quantum operations are effected by making repeated local measurements on resource states which contain suitable entanglement. Resource states include two-dimensional cluster states and the ground state of the Affleck-Kennedy-Lieb-Tasaki (AKLT) model on the honeycomb lattice. Recent studies suggest that measurements on one-dimensional systems in the Haldane phase teleport perfect single-qubit gates in the correlation space, protected by the underlying symmetry. As laboratory realizations of symmetry-protected states will necessarily be finite, we investigate the potential for quantum gate teleportation in finite chains of a bilinear-biquadratic Hamiltonian, which is a generalization of the AKLT model representing the full Haldane phase.
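The gate-teleportation idea referenced above can be made concrete in the simplest setting: one-bit teleportation through a two-qubit cluster state. This is a generic textbook construction, not the paper's AKLT calculation; measuring qubit 1 in the X basis transfers the input to qubit 2 rotated by a Hadamard (with a known Pauli byproduct for the other outcome):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CZ = np.diag([1.0, 1, 1, -1])                  # controlled-Z gate
plus = np.array([1.0, 1]) / np.sqrt(2)         # |+> state

def teleport_plus_outcome(psi):
    """Entangle |psi>|+> with CZ, project qubit 1 onto |+> (X-basis
    outcome +1), and return the normalized state left on qubit 2."""
    state = (CZ @ np.kron(psi, plus)).reshape(2, 2)  # rows index qubit 1
    phi = plus @ state                               # apply <+| to qubit 1
    return phi / np.linalg.norm(phi)

psi = np.array([0.6, 0.8])                      # arbitrary input state
phi = teleport_plus_outcome(psi)
# the input reappears on qubit 2, rotated by a Hadamard byproduct
print(np.allclose(phi, H @ psi))  # True
```

Chaining such steps, with measurement angles chosen adaptively, is what builds up arbitrary single-qubit gates in the correlation space of a resource chain.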
Strengthening laboratory systems in resource-limited settings.
Olmsted, Stuart S; Moore, Melinda; Meili, Robin C; Duber, Herbert C; Wasserman, Jeffrey; Sama, Preethi; Mundell, Ben; Hilborne, Lee H
2010-09-01
Considerable resources have been invested in recent years to improve laboratory systems in resource-limited settings. We reviewed published reports, interviewed major donor organizations, and conducted case studies of laboratory systems in 3 countries to assess how countries and donors have worked together to improve laboratory services. While infrastructure and the provision of services have seen improvement, important opportunities remain for further advancement. Implementation of national laboratory plans is inconsistent, human resources are limited, and quality laboratory services rarely extend to lower tier laboratories (eg, health clinics, district hospitals). Coordination within, between, and among governments and donor organizations is also frequently problematic. Laboratory standardization and quality control are improving but remain challenging, making accreditation a difficult goal. Host country governments and their external funding partners should coordinate their efforts effectively around a host country's own national laboratory plan to advance sustainable capacity development throughout a country's laboratory system.
Roles of laboratories and laboratory systems in effective tuberculosis programmes.
Ridderhof, John C; van Deun, Armand; Kam, Kai Man; Narayanan, P R; Aziz, Mohamed Abdul
2007-05-01
Laboratories and laboratory networks are a fundamental component of tuberculosis (TB) control, providing testing for diagnosis, surveillance and treatment monitoring at every level of the health-care system. New initiatives and resources to strengthen laboratory capacity and implement rapid and new diagnostic tests for TB will require recognition that laboratories are systems that require quality standards, appropriate human resources, and attention to safety in addition to supplies and equipment. To prepare the laboratory networks for new diagnostics and expanded capacity, we need to focus efforts on strengthening quality management systems (QMS) through additional resources for external quality assessment programmes for microscopy, culture, drug susceptibility testing (DST) and molecular diagnostics. QMS should also promote development of accreditation programmes to ensure adherence to standards to improve both the quality and credibility of the laboratory system within TB programmes. Corresponding attention must be given to addressing human resources at every level of the laboratory, with special consideration being given to new programmes for laboratory management and leadership skills. Strengthening laboratory networks will also involve setting up partnerships between TB programmes and those seeking to control other diseases in order to pool resources and to promote advocacy for quality standards, to develop strategies to integrate laboratories functions and to extend control programme activities to the private sector. Improving the laboratory system will assure that increased resources, in the form of supplies, equipment and facilities, will be invested in networks that are capable of providing effective testing to meet the goals of the Global Plan to Stop TB.
Laboratory directed research and development program FY 1999
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Todd; Levy, Karin
2000-03-08
The Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab or LBNL) is a multi-program national research facility operated by the University of California for the Department of Energy (DOE). As an integral element of DOE's National Laboratory System, Berkeley Lab supports DOE's missions in fundamental science, energy resources, and environmental quality. Berkeley Lab programs advance four distinct goals for DOE and the nation: (1) To perform leading multidisciplinary research in the computing sciences, physical sciences, energy sciences, biosciences, and general sciences in a manner that ensures employee and public safety and protection of the environment. (2) To develop and operate unique national experimental facilities for qualified investigators. (3) To educate and train future generations of scientists and engineers to promote national science and education goals. (4) To transfer knowledge and technological innovations and to foster productive relationships among Berkeley Lab's research programs, universities, and industry in order to promote national economic competitiveness. This is the annual report on the Laboratory Directed Research and Development (LDRD) program for FY99.
Birx, Deborah; de Souza, Mark; Nkengasong, John N
2009-06-01
Strengthening national health laboratory systems in resource-poor countries is critical to meeting the United Nations Millennium Development Goals. Despite strong commitment from the international community to fight major infectious diseases, weak laboratory infrastructure remains a huge rate-limiting step. Some major challenges facing laboratory systems in resource-poor settings include dilapidated infrastructure; lack of human capacity, laboratory policies, and strategic plans; and limited synergies between clinical and research laboratories. Together, these factors compromise the quality of test results and impact patient management. With increased funding, the target of laboratory strengthening efforts in resource-poor countries should be the integrating of laboratory services across major diseases to leverage resources with respect to physical infrastructure; types of assays; supply chain management of reagents and equipment; and maintenance of equipment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bland, Arthur S Buddy; Hack, James J; Baker, Ann E
Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science.
This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.
JINR cloud infrastructure evolution
NASA Astrophysics Data System (ADS)
Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.
2016-09-01
To fulfil JINR commitments in different national and international projects related to the use of modern information technologies, such as cloud and grid computing, and to provide a modern tool for JINR users in their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula software was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to fit local needs: a web form in the cloud web interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was redesigned to cover increasing users' needs in capacity, availability and reliability. Recently a new cloud instance has been deployed in a high-availability configuration with a distributed network file system and additional computing power.
National Laboratory for Advanced Scientific Visualization at UNAM - Mexico
NASA Astrophysics Data System (ADS)
Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo
2016-04-01
In 2015, the National Autonomous University of Mexico (UNAM) joined the family of universities and research centers where advanced visualization and computing play a key role in promoting and advancing missions in research, education, community outreach, and business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services spanning areas related to scientific visualization, among which are: neuroanatomy, embryonic development, genome-related studies, geosciences, geography, physics and mathematics-related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the fully immersive 3D display system (Cave), the high-resolution parallel visualization system (Powerwall), and the high-resolution spherical display (Earth Simulator). The entire visualization infrastructure is interconnected to a high-performance computing cluster (HPCC) called ADA, in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra-large room, 3.6 m wide, with images projected on the front, left and right walls, as well as the floor. Specialized crystal-eyes LCD-shutter glasses provide strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6x4) high-resolution ultra-thin (2 mm) bezel monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for global-scale data visualization, such as geophysical, meteorological, climate and ecology data.
The HPCC-ADA is a system of more than 1000 computing cores, offering parallel computing resources to applications that require large amounts of memory as well as large and fast parallel storage. The system's temperature is controlled by an energy- and space-efficient cooling solution based on large rear-door liquid-cooled heat exchangers. This state-of-the-art infrastructure will boost research activities in the region, offer a powerful scientific tool for teaching at the undergraduate and graduate levels, and enhance association and cooperation with business-oriented organizations.
ELSI Bibliography: Ethical legal and social implications of the Human Genome Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yesley, M.S.
This second edition of the ELSI Bibliography provides a current and comprehensive resource for identifying publications on the major topics related to the ethical, legal and social issues (ELSI) of the Human Genome Project. Since the first edition of the ELSI Bibliography was printed last year, new publications and earlier ones identified by additional searching have doubled our computer database of ELSI publications to over 5600 entries. The second edition of the ELSI Bibliography reflects this growth of the underlying computer database. Researchers should note that an extensive collection of publications in the database is available for public use at the General Law Library of Los Alamos National Laboratory (LANL).
Industry and Academic Consortium for Computer Based Subsurface Geology Laboratory
NASA Astrophysics Data System (ADS)
Brown, A. L.; Nunn, J. A.; Sears, S. O.
2008-12-01
Twenty-two licenses for Petrel software, acquired through a grant from Schlumberger, are being used to redesign the laboratory portion of Subsurface Geology at Louisiana State University. The course redesign is a cooperative effort between LSU's Geology and Geophysics and Petroleum Engineering Departments and Schlumberger's Technical Training Division. In spring 2008, two laboratory sections were taught with 22 students in each section. The class contained geology majors, petroleum engineering majors, and geology graduate students. Limited enrollments and 3-hour labs make it possible to incorporate hands-on visualization, animation, manipulation of data and images, and access to geological data available online. 24/7 access to the laboratory and step-by-step instructions for Petrel exercises strongly promoted peer instruction and individual learning. Goals of the course redesign include: enhancing visualization of earth materials; strengthening students' ability to acquire, manage, and interpret multifaceted geological information; fostering critical thinking and the scientific method; improving student communication skills; providing cross-training between geologists and engineers; and increasing the quantity, quality, and diversity of students pursuing Earth Science and Petroleum Engineering careers. IT resources available in the laboratory provide students with sophisticated visualization tools, allowing them to switch between 2-D and 3-D reconstructions more seamlessly and enabling them to manipulate larger integrated data sets, thus permitting more time for critical thinking and hypothesis testing. IT resources also enable faculty and students to work with the software simultaneously, visually interrogating a 3-D data set and immediately testing hypotheses formulated in class. Preliminary evaluation of class results indicates that students found the MS-Windows-based Petrel easy to learn.
By the end of the semester, students were able not only to map horizons and faults using seismic and well data but also to compute volumetrics. Exam results indicated that while students could complete sophisticated exercises using the software, their understanding of key concepts, such as conservation of volume in a palinspastic reconstruction or the association of structures with a particular stress regime, was limited. Future classes will incorporate more paper-and-pencil exercises to illustrate basic concepts. The equipment, software, and exercises developed will be used in additional upper-level undergraduate and graduate classes.
Staes, Catherine J; Altamore, Rita; Han, EunGyoung; Mottice, Susan; Rajeev, Deepthi; Bradshaw, Richard
2011-01-01
To control disease, laboratories and providers are required to report conditions to public health authorities. Reporting logic is defined in a variety of resources, but there is no single resource available for reporters to access the list of reportable events and computable reporting logic for any jurisdiction. In order to develop evidence-based requirements for authoring such knowledge, we evaluated reporting logic in the Council of State and Territorial Epidemiologists (CSTE) position statements to assess its readiness for automated systems and identify features that should be considered when designing an authoring interface; we evaluated codes in the Reportable Condition Mapping Tables (RCMT) relative to the nationally defined reporting logic; and we described the high-level business processes and knowledge required to support laboratory-based public health reporting. We focused on logic for viral hepatitis. We found that the CSTE tabular logic was unnecessarily complex (sufficient conditions superseded necessary and optional conditions) and was sometimes true for more than one reportable event: we uncovered major overlap in the logic between acute and chronic hepatitis B (52%) and between acute and past-and-present hepatitis C (90%). We found that the RCMT includes codes for all hepatitis criteria, but also includes additional codes for tests not included in the criteria. The proportion of hepatitis-variant-related codes included in RCMT that correspond to a criterion in the hepatitis-related position statements varied between hepatitis A (36%), acute hepatitis B (16%), chronic hepatitis B (64%), acute hepatitis C (96%), and past and present hepatitis C (96%). Public health epidemiologists need to communicate parameters other than just the name of a disease or organism that should be reported, such as the status and specimen sources. Existing knowledge resources should be integrated, harmonized, and made computable.
Our findings identified functionality that should be provided by future knowledge management systems to support epidemiologists as they communicate reporting rules for their jurisdiction. PMID:23569619
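The gap this record describes, between narrative reporting criteria and automated systems, is essentially one of making logic computable. A minimal sketch of what computable reporting rules could look like; the test names, result values, and rule contents below are illustrative assumptions, not taken from the CSTE position statements:

```python
# Hypothetical sketch of computable reportable-condition logic.
# Test names (HBsAg, IgM_anti_HBc) and rule contents are illustrative
# assumptions, not the actual CSTE criteria.

def acute_hep_b(results):
    """Fires when the (assumed) necessary criteria are all met."""
    return (results.get("HBsAg") == "positive"
            and results.get("IgM_anti_HBc") == "positive")

def chronic_hep_b(results):
    """A broader rule that can overlap with the acute rule,
    mirroring the overlap problem the study uncovered."""
    return results.get("HBsAg") == "positive"

rules = [("acute HBV", acute_hep_b), ("chronic HBV", chronic_hep_b)]
results = {"HBsAg": "positive", "IgM_anti_HBc": "positive"}

# Both rules fire for one result set: the ambiguity the abstract reports.
fired = [name for name, rule in rules if rule(results)]
print(fired)
```

Encoding the criteria as functions over coded results is what makes overlap measurable: one can enumerate result sets and count how often two rules fire together.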
Laboratory Directed Research and Development Program FY 2006
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen
2007-03-08
The Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab or LBNL) is a multi-program national research facility operated by the University of California for the Department of Energy (DOE). As an integral element of DOE's National Laboratory System, Berkeley Lab supports DOE's missions in fundamental science, energy resources, and environmental quality. Berkeley Lab programs advance four distinct goals for DOE and the nation: (1) To perform leading multidisciplinary research in the computing sciences, physical sciences, energy sciences, biosciences, and general sciences in a manner that ensures employee and public safety and protection of the environment. (2) To develop and operate unique national experimental facilities for qualified investigators. (3) To educate and train future generations of scientists and engineers to promote national science and education goals. (4) To transfer knowledge and technological innovations and to foster productive relationships among Berkeley Lab's research programs, universities, and industry in order to promote national economic competitiveness.
Applications of computational modeling in ballistics
NASA Technical Reports Server (NTRS)
Sturek, Walter B.
1987-01-01
The development of the technology of ballistics as applied to gun-launched Army weapon systems is the main objective of research at the U.S. Army Ballistic Research Laboratory (BRL). The primary research programs at the BRL consist of three major ballistic disciplines: exterior, interior, and terminal. The work done at the BRL in these areas was traditionally highly dependent on experimental testing. Considerable emphasis was placed on developing computational modeling to augment experimental testing in the development cycle; however, the impact of computational modeling to date has been modest. With the supercomputer resources recently installed at the BRL, a new emphasis on the application of computational modeling to ballistics technology is taking place. The major application areas currently receiving considerable attention at the BRL are outlined, along with the modeling approaches involved. Some indication is given of the degree of success achieved and of the areas of greatest need.
Low cost, high performance processing of single particle cryo-electron microscopy data in the cloud
Cianfrocco, Michael A; Leschziner, Andres E
2015-01-01
The advent of a new generation of electron microscopes and direct electron detectors has realized the potential of single particle cryo-electron microscopy (cryo-EM) as a technique to generate high-resolution structures. Calculating these structures requires high performance computing clusters, a resource that may be limiting to many likely cryo-EM users. To address this limitation and facilitate the spread of cryo-EM, we developed a publicly available ‘off-the-shelf’ computing environment on Amazon's elastic cloud computing infrastructure. This environment provides users with single particle cryo-EM software packages and the ability to create computing clusters with 16–480+ CPUs. We tested our computing environment using a publicly available 80S yeast ribosome dataset and estimate that laboratories could determine high-resolution cryo-EM structures for $50 to $1500 per structure within a timeframe comparable to local clusters. Our analysis shows that Amazon's cloud computing environment may offer a viable computing environment for cryo-EM. DOI: http://dx.doi.org/10.7554/eLife.06664.001 PMID:25955969
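The cost range reported here ($50 to $1500 per structure) reduces to a product of cluster size, per-CPU hourly rate, and wall time. A back-of-the-envelope sketch of that tradeoff; the instance count, rate, and runtime below are made-up illustrative numbers, not the paper's measured values:

```python
# Hedged sketch of the cloud-cost arithmetic behind per-structure pricing.
# All figures are illustrative assumptions, not measured benchmarks.

def structure_cost(n_cpus, hourly_rate_per_cpu, wall_hours):
    """Total cost of one reconstruction run on a rented cluster."""
    return n_cpus * hourly_rate_per_cpu * wall_hours

# e.g. a 128-CPU cluster at an assumed $0.01 per CPU-hour for 48 hours
cost = structure_cost(128, 0.01, 48)
print(f"${cost:.2f}")
```

The same function makes the design tradeoff explicit: doubling the cluster roughly doubles the hourly spend, so total cost stays flat only if wall time halves (i.e., if the reconstruction scales well).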
Spira, Thomas; Lindegren, Mary Lou; Ferris, Robert; Habiyambere, Vincent; Ellerbrock, Tedd
2009-06-01
The expansion of HIV/AIDS care and treatment in resource-constrained countries, especially in sub-Saharan Africa, has generally developed in a top-down manner. Further expansion will involve primary health centers where human and other resources are limited. This article describes the World Health Organization/President's Emergency Plan for AIDS Relief collaboration formed to help scale up HIV services in primary health centers in high-prevalence, resource-constrained settings. It reviews the contents of the Operations Manual developed, with emphasis on the Laboratory Services chapter, which discusses essential laboratory services, both at the center and the district hospital level, laboratory safety, laboratory testing, specimen transport, how to set up a laboratory, human resources, equipment maintenance, training materials, and references. The chapter provides specific information on essential tests and generic job aids for them. It also includes annexes containing a list of laboratory supplies for the health center and sample forms.
NASA Astrophysics Data System (ADS)
O'Malley, D.; Vesselinov, V. V.
2017-12-01
Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrate cutting-edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are utilized to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small-scale compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well-suited to quantum annealing but challenging for classical computers.
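Quantum annealers natively minimize quadratic objectives over binary variables (QUBOs), which is why the binary high/low permeability problem maps onto them naturally. A toy sketch of that mapping, solved here by exhaustive classical search because the problem is tiny; the QUBO coefficients are made up for illustration and this is not the authors' actual formulation:

```python
# Illustrative QUBO sketch: x_i in {0, 1} encodes low/high permeability
# in zone i; the coefficients (made-up numbers) would in practice encode
# misfit between simulated and observed heads.
import itertools

Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0, (2, 2): -0.5}

def qubo_energy(x, Q):
    """Energy of binary assignment x under QUBO coefficient dict Q."""
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

n = 3  # number of permeability zones
# A quantum annealer samples low-energy states; for n=3 we can just
# enumerate all 2^n assignments classically.
best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: qubo_energy(x, Q))
print(best)
```

The pairwise term (0, 1) penalizes zones 0 and 1 both being "high", so the minimizer picks one of them plus zone 2, illustrating how couplings steer the annealer toward consistent permeability patterns.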
Costa - Introduction to 2015 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, James E.
In parallel with Sandia National Laboratories having two major locations (NM and CA), along with a number of smaller facilities across the nation, its scientific, engineering, and computing resources are similarly distributed. As part of Sandia's Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA-resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure as machines grow larger, more complex, and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data-analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating the development and integration of high performance computing into national security missions. Sandia continues to promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative, and to ensure that the full range of computing services and capabilities is available for all mission responsibilities, from national security to energy to homeland defense.
DB4US: A Decision Support System for Laboratory Information Management.
Carmona-Cejudo, José M; Hortas, Maria Luisa; Baena-García, Manuel; Lana-Linati, Jorge; González, Carlos; Redondo, Maximino; Morales-Bueno, Rafael
2012-11-14
Until recently, laboratory automation has focused primarily on improving hardware. Future advances will concentrate on intelligent software, since laboratories performing clinical diagnostic testing require improved information systems to address their data processing needs. In this paper, we propose DB4US, an application that automates the management of laboratory quality indicator information. Currently, there is a lack of ready-to-use management quality measures. This application addresses that deficiency through the extraction, consolidation, statistical analysis, and visualization of data related to demographics, reagents, and turn-around times. The design and implementation issues, as well as the technologies used for the implementation of this system, are discussed in this paper. The goal is to develop a general methodology that integrates the computation of ready-to-use management quality measures and a dashboard to easily analyze the overall performance of a laboratory, as well as automatically detect anomalies or errors. The novelty of our approach lies in the application of integrated web-based dashboards as an information management system in hospital laboratories. We propose a new methodology for laboratory information management based on the extraction, consolidation, statistical analysis, and visualization of data related to demographics, reagents, and turn-around times, offering a dashboard-like web interface to the laboratory manager. The methodology comprises a unified data warehouse that stores and consolidates multidimensional data from different data sources. The methodology is illustrated through the implementation and validation of DB4US, a novel web application based on this methodology that constructs an interface to obtain ready-to-use indicators and offers the possibility to drill down from high-level metrics to more detailed summaries.
The offered indicators are calculated beforehand so that they are ready to use when the user needs them. The design is based on a set of different parallel processes to precalculate indicators. The application displays information related to tests, requests, samples, and turn-around times. The dashboard is designed to show the set of indicators on a single screen. DB4US was deployed for the first time in the Hospital Costa del Sol in 2008. In our evaluation we show the positive impact of this methodology for laboratory professionals, since the use of our application has reduced the time needed for the elaboration of the different statistical indicators and has also provided information that has been used to optimize the usage of laboratory resources by the discovery of anomalies in the indicators. DB4US users benefit from Internet-based communication of results, since this information is available from any computer without having to install any additional software. The proposed methodology and the accompanying web application, DB4US, automates the processing of information related to laboratory quality indicators and offers a novel approach for managing laboratory-related information, benefiting from an Internet-based communication mechanism. The application of this methodology has been shown to improve the usage of time, as well as other laboratory resources.
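One of the precalculated indicators described here, turn-around time (TAT), can be derived directly from request and report timestamps. A minimal sketch under assumed data formats; the timestamp layout, sample values, and median summary are illustrative, not DB4US's actual schema:

```python
# Hedged sketch of a turn-around-time (TAT) indicator of the kind a
# laboratory dashboard could precalculate. Data and field format are
# illustrative assumptions.
from datetime import datetime
from statistics import median

# (received, reported) timestamp pairs for three hypothetical requests
requests = [
    ("2008-03-01 08:00", "2008-03-01 10:30"),
    ("2008-03-01 09:15", "2008-03-01 13:15"),
    ("2008-03-01 11:00", "2008-03-01 12:00"),
]

def tat_minutes(received, reported, fmt="%Y-%m-%d %H:%M"):
    """Elapsed minutes between sample receipt and result report."""
    delta = datetime.strptime(reported, fmt) - datetime.strptime(received, fmt)
    return delta.total_seconds() / 60

tats = [tat_minutes(a, b) for a, b in requests]
print(median(tats))  # median TAT in minutes
```

Precalculating such summaries in a background process, as the paper describes, is what makes the indicators "ready to use" when the dashboard is opened.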
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitek, M. A.; Lottes, S. A.; Bojanowski, C.
Computational fluid dynamics (CFD) modeling is widely used in industry for design and in the research community to support, complement, and extend the scope of experimental studies. Analysis of transportation infrastructure using high performance cluster computing with CFD and structural mechanics software is done at the Transportation Research and Analysis Computing Center (TRACC) at Argonne National Laboratory. These resources, available at TRACC, were used to perform advanced three-dimensional computational simulations of the wind tunnel laboratory at the Turner-Fairbank Highway Research Center (TFHRC). The goals were to verify the CFD model of the laboratory wind tunnel and then to use versions of the model to provide the capability to (1) perform larger parametric series of tests than can be easily done in the laboratory with available budget and time, (2) extend testing to wind speeds that cannot be achieved in the laboratory, and (3) run types of tests that are very difficult or impossible to run in the laboratory. Modern CFD software has many physics models and domain meshing options. Models, including the choice of turbulence and other physics models and settings, the computational mesh, and the solver settings, need to be validated against measurements to verify that the results are sufficiently accurate for use in engineering applications. The wind tunnel model was built and tested, by comparing to experimental measurements, to provide a valuable tool to perform these types of studies in the future as a complement and extension to TFHRC's experimental capabilities. Wind tunnel testing at TFHRC is conducted in a subsonic open-jet wind tunnel with a 1.83 m (6 foot) by 1.83 m (6 foot) cross section. A three-component dual force-balance system is used to measure forces acting on tested models, and a three-degree-of-freedom suspension system is used for dynamic response tests. Pictures of the room are shown in Figure 1-1 to Figure 1-4.
A detailed CAD geometry and CFD model of the wind tunnel laboratory at TFHRC was built and tested. Results were compared against experimental wind velocity measurements at a large number of locations around the room. This testing included an assessment of the air flow uniformity provided by the tunnel to the test zone and an assessment of room geometry effects, such as the influence of the proximity of the room walls, the non-symmetrical position of the tunnel in the room, and the influence of the room setup on the air flow in the room. This information is useful both for simplifying the computational model and for deciding whether or not moving, or removing, some of the furniture or other movable objects in the room will change the flow in the test zone.
2001-09-01
The high-tech art of digital signal processing (DSP) was pioneered at NASA's Jet Propulsion Laboratory (JPL) in the mid-1960s for use in the Apollo Lunar Landing Program. Designed to computer-enhance pictures of the Moon, this technology became the basis for the Landsat Earth resources satellites and subsequently has been incorporated into a broad range of Earthbound medical and diagnostic tools. DSP is employed in advanced body imaging techniques including Computer-Aided Tomography, also known as CT or CAT scan, and Magnetic Resonance Imaging (MRI). CT images are collected by irradiating a thin slice of the body with a fan-shaped x-ray beam from a number of directions around the body's perimeter. A tomographic (slice-like) picture is reconstructed from these multiple views by a computer. MRI employs a magnetic field and radio waves, rather than x-rays, to create images.
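The reconstruction step mentioned above can be illustrated in miniature: each view is a set of ray sums through the slice, and an (unfiltered) backprojection smears those sums back across the image so intensity accumulates where the rays intersect the object. A toy sketch with a 4x4 slice and only two viewing directions; real CT uses many angles plus a filtering step, and the data here are made up:

```python
# Toy sketch of tomographic backprojection. The 4x4 "slice" and the
# two-view setup are illustrative assumptions, far simpler than real CT.
slice_ = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 1, 1, 0],
          [0, 0, 0, 0]]          # a square "organ" in an empty slice

proj_rows = [sum(row) for row in slice_]        # rays from one side
proj_cols = [sum(col) for col in zip(*slice_)]  # rays from above

# Backproject: each pixel averages the two ray sums passing through it.
recon = [[(proj_rows[i] + proj_cols[j]) / 2 for j in range(4)]
         for i in range(4)]
print(recon[1][1], recon[0][0])  # bright inside the object, dark outside
```

With only two views the reconstruction is blurry (rows and columns through the object also brighten); adding more angles and a ramp filter is what sharpens real CT images.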
Networking Biology: The Origins of Sequence-Sharing Practices in Genomics.
Stevens, Hallam
2015-10-01
The wide sharing of biological data, especially nucleotide sequences, is now considered to be a key feature of genomics. Historians and sociologists have attempted to account for the rise of this sharing by pointing to precedents in model organism communities and in natural history. This article supplements these approaches by examining the role that electronic networking technologies played in generating the specific forms of sharing that emerged in genomics. The links between early computer users at the Stanford Artificial Intelligence Laboratory in the 1960s, biologists using local computer networks in the 1970s, and GenBank in the 1980s, show how networking technologies carried particular practices of communication, circulation, and data distribution from computing into biology. In particular, networking practices helped to transform sequences themselves into objects that had value as a community resource.
Roles of laboratories and laboratory systems in effective tuberculosis programmes
van Deun, Armand; Kam, Kai Man; Narayanan, PR; Aziz, Mohamed Abdul
2007-01-01
Abstract Laboratories and laboratory networks are a fundamental component of tuberculosis (TB) control, providing testing for diagnosis, surveillance and treatment monitoring at every level of the health-care system. New initiatives and resources to strengthen laboratory capacity and implement rapid and new diagnostic tests for TB will require recognition that laboratories are systems that require quality standards, appropriate human resources, and attention to safety in addition to supplies and equipment. To prepare the laboratory networks for new diagnostics and expanded capacity, we need to focus efforts on strengthening quality management systems (QMS) through additional resources for external quality assessment programmes for microscopy, culture, drug susceptibility testing (DST) and molecular diagnostics. QMS should also promote development of accreditation programmes to ensure adherence to standards to improve both the quality and credibility of the laboratory system within TB programmes. Corresponding attention must be given to addressing human resources at every level of the laboratory, with special consideration being given to new programmes for laboratory management and leadership skills. Strengthening laboratory networks will also involve setting up partnerships between TB programmes and those seeking to control other diseases in order to pool resources and to promote advocacy for quality standards, to develop strategies to integrate laboratories’ functions and to extend control programme activities to the private sector. Improving the laboratory system will assure that increased resources, in the form of supplies, equipment and facilities, will be invested in networks that are capable of providing effective testing to meet the goals of the Global Plan to Stop TB. PMID:17639219
Summary of 1971 pattern recognition program development
NASA Technical Reports Server (NTRS)
Whitley, S. L.
1972-01-01
Eight areas related to pattern recognition analysis at the Earth Resources Laboratory are discussed: (1) background; (2) Earth Resources Laboratory goals; (3) software problems/limitations; (4) operational problems/limitations; (5) immediate future capabilities; (6) Earth Resources Laboratory data analysis system; (7) general program needs and recommendations; and (8) schedule and milestones.
NASA Astrophysics Data System (ADS)
Lescinsky, D. T.; Wyborn, L. A.; Evans, B. J. K.; Allen, C.; Fraser, R.; Rankine, T.
2014-12-01
We present collaborative work on a generic, modular infrastructure for virtual laboratories (VLs, similar to science gateways) that combine online access to data, scientific code, and computing resources as services supporting multiple data-intensive scientific computing needs across a wide range of science disciplines. We are leveraging access to 10+ PB of earth science data on Lustre filesystems at Australia's National Computational Infrastructure (NCI) Research Data Storage Infrastructure (RDSI) node, co-located with NCI's 1.2 PFlop Raijin supercomputer and a 3000-CPU-core research cloud. The development, maintenance, and sustainability of VLs are best accomplished through modularisation and standardisation of interfaces between components. Our approach has been to break up tightly-coupled, specialised application packages into modules, with identified best techniques and algorithms repackaged either as data services or as scientific tools that are accessible across domains. The data services can be used to manipulate, visualise, and transform multiple data types, whilst the scientific tools can be used in concert with multiple scientific codes. We are currently designing a scalable generic infrastructure that will handle scientific code as modularised services and thereby enable the rapid and easy deployment of new codes or versions of codes. The goal is to build open source libraries/collections of scientific tools, scripts, and modelling codes that can be combined in specially designed deployments. Additional services in development include provenance, publication of results, monitoring, workflow tools, etc. The generic VL infrastructure will be hosted at NCI, but can access alternative computing infrastructures (i.e., public/private cloud, HPC). The Virtual Geophysics Laboratory (VGL) was developed as a pilot project to demonstrate the underlying technology.
This base is now being redesigned and generalised to develop a Virtual Hazards Impact and Risk Laboratory (VHIRL); any enhancements and new capabilities will be incorporated into the generic VL infrastructure. At the same time, we are scoping seven new VLs and, in the process, identifying other common components to prioritise and focus development.
Annotated Bibliography of the Air Force Human Resources Laboratory Technical Reports - 1979.
1981-05-01
Barlow, Esther M. Annotated Bibliography of the Air Force Human Resources Laboratory Technical Reports - 1979. Technical Services Division, Brooks Air Force Base, TX: Air Force Human Resources Laboratory, March 1980. (Covers all AFHRL projects; available from NTIS.) This document provides the academic and industrial R&D community with...
Redirecting Under-Utilised Computer Laboratories into Cluster Computing Facilities
ERIC Educational Resources Information Center
Atkinson, John S.; Spenneman, Dirk H. R.; Cornforth, David
2005-01-01
Purpose: To provide administrators at an Australian university with data on the feasibility of redirecting under-utilised computer laboratories facilities into a distributed high performance computing facility. Design/methodology/approach: The individual log-in records for each computer located in the computer laboratories at the university were…
Kuiper, RuthAnne
2010-01-01
The utility of personal digital assistants (PDAs) as a point-of-care resource in health care practice and education presents new challenges for nursing faculty. While there is a plethora of PDA resources available, little is known about the variables that affect student learning and technology adoption. In this study, nursing students used PDA software programs, which included a drug guide, a medical dictionary, a laboratory manual, and a nursing diagnosis manual, during acute care clinical experiences. An analysis of students' comparative reflective journal statements about the PDA as an adjunct to other available resources in clinical practice is presented. The benefits of having a PDA included readily available data, validation of thinking processes, and facilitation of care plan re-evaluation. Students reported increased frequency of use and independence. Significant correlations between user perceptions and computer self-efficacy suggested that greater confidence in abilities with technology resulted in increased self-awareness and achievement of learning outcomes.
Multi-year Content Analysis of User Facility Related Publications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, Robert M; Stahl, Christopher G; Hines, Jayson
2013-01-01
Scientific user facilities provide resources and support that enable scientists to conduct experiments or simulations pertinent to their respective research. Consequently, it is critical to have an informed understanding of the impact and contributions that these facilities have on scientific discoveries. Leveraging insight into scientific publications that acknowledge the use of these facilities enables more informed decisions by facility management and sponsors in regard to policy, resource allocation, and influencing the direction of science, as well as a more effective understanding of the impact of a scientific user facility. This work discusses preliminary results of mining scientific publications that utilized resources at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL). These results show promise in identifying and leveraging multi-year trends and providing a higher resolution view of the impact that a scientific user facility may have on scientific discoveries.
Performance of VPIC on Trinity
NASA Astrophysics Data System (ADS)
Nystrom, W. D.; Bergen, B.; Bird, R. F.; Bowers, K. J.; Daughton, W. S.; Guo, F.; Li, H.; Nam, H. A.; Pang, X.; Rust, W. N., III; Wohlbier, J.; Yin, L.; Albright, B. J.
2016-10-01
Trinity is a new major DOE computing resource that is going through final acceptance testing at Los Alamos National Laboratory. Trinity has several new and unique architectural features, including two compute partitions, one with dual-socket Intel Haswell Xeon compute nodes and one with Intel Knights Landing (KNL) Xeon Phi compute nodes. Additional unique features include the use of on-package high-bandwidth memory (HBM) for the KNL nodes, the ability to configure the KNL nodes with respect to HBM mode and on-die network topology in a variety of operational modes at run time, and the use of solid-state storage via burst buffer technology to reduce the time required to perform I/O. An effort is in progress to port and optimize VPIC for Trinity and evaluate its performance. Because VPIC was recently released as open source, it is being used as part of acceptance testing for Trinity and is participating in the Trinity Open Science Program, which has resulted in excellent collaboration activities with both Cray and Intel. Results will be presented on the performance of VPIC on both the Haswell and KNL partitions, for both single-node runs and runs at scale. Work performed under the auspices of the U.S. Dept. of Energy by Los Alamos National Security, LLC, Los Alamos National Laboratory, under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baer, Marcel D.; Kuo, I-F W.; Tobias, Douglas J.
2014-07-17
The propensities of the water self-ions, H3O+ and OH-, for the air-water interface have implications for interfacial acid-base chemistry. Despite numerous experimental and computational studies, no consensus has been reached on the question of whether or not H3O+ and/or OH- prefer to be at the water surface or in the bulk. Here we report a molecular dynamics simulation study of the bulk vs. interfacial behavior of H3O+ and OH- that employs forces derived from density functional theory with a generalized gradient approximation exchange-correlation functional (specifically, BLYP) and empirical dispersion corrections. We computed the potential of mean force (PMF) for H3O+ as a function of the position of the ion in a 215-molecule water slab. The PMF is flat, suggesting that H3O+ has equal propensity for the air-water interface and the bulk. We compare the PMF for H3O+ to our previously computed PMF for OH- adsorption, which contains a shallow minimum at the interface, and we explore how differences in solvation of each ion at the interface vs. the bulk are connected with interfacial propensity. We find that the solvation shell of H3O+ is only slightly dependent on its position in the water slab, while OH- partially desolvates as it approaches the interface, and we examine how this difference in solvation behavior is manifested in the electronic structure and chemistry of the two ions. DJT was supported by National Science Foundation grant CHE-0909227. CJM was supported by the U.S. Department of Energy's (DOE) Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences. Pacific Northwest National Laboratory (PNNL) is operated for the Department of Energy by Battelle. The potential of mean force calculations required resources of the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
The remaining simulations and analysis used resources of the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. MDB is grateful for the support of the Linus Pauling Distinguished Postdoctoral Fellowship Program at PNNL.
Strategies for teaching pathology to graduate students and allied health professionals.
Fenderson, Bruce A
2005-02-01
Pathology is an essential course for many students in the biomedical sciences and allied health professions. These students learn the language of pathology and medicine, develop an appreciation for mechanisms of disease, and understand the close relationship between basic research and clinical medicine. We have developed 3 pathology courses to meet the needs of our undergraduates, graduate students, and allied health professionals. Through experience, we have settled on an approach to teaching pathology that takes into account the diverse educational backgrounds of these students. Educational resources such as assigned reading, online homework, lectures, and review sessions are carefully balanced to adjust course difficulty. Common features of our pathology curricula include a web-based computer laboratory and review sessions based on selected pathology images and open-ended study questions. Lectures, computer-guided homework, and review sessions provide the core educational content for undergraduates. Graduate students, using the same computer program and review material, rely more heavily on assigned reading for core educational content. Our experience adapting a pathology curriculum to the needs of divergent groups of students suggests a general strategy for monitoring course difficulty. We hypothesize that course difficulty is proportional to the information density of specific learning resources (e.g., lecture or textbook) multiplied by the weight of those learning resources placed on examinations. This formula allows educators to match the difficulty of a course with the educational needs of students, and provides a useful tool for longitudinal studies of curriculum reform.
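The difficulty hypothesis at the end of this abstract can be expressed as a simple weighted sum. A minimal sketch in Python; the density and weight values below are invented for illustration and are not taken from the study:

```python
def course_difficulty(resources):
    """Difficulty index per the hypothesis: sum over learning resources of
    (information density of the resource) x (weight on examinations).
    resources: list of (information_density, exam_weight) pairs."""
    return sum(density * weight for density, weight in resources)

# Hypothetical profiles: a lecture-centered undergraduate course vs. a
# reading-centered graduate course (all numbers are made up).
undergrad = [(0.4, 0.6), (0.3, 0.3), (0.2, 0.1)]   # lecture, homework, reading
graduate  = [(0.4, 0.2), (0.3, 0.1), (0.9, 0.7)]   # lecture, homework, textbook

print(course_difficulty(undergrad))
print(course_difficulty(graduate))   # denser resources weighted more heavily
```

Shifting exam weight toward low-density resources (as described for the undergraduate course) lowers the index; weighting a dense textbook heavily (as for graduate students) raises it.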
A Framework for Understanding Physics Students' Computational Modeling Practices
NASA Astrophysics Data System (ADS)
Lunk, Brandon Robert
With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content knowledge, and physics knowledge in particular, can influence students' programming practices. In an effort to better understand this issue, I have developed a framework for modeling these practices based on a resource stance towards student knowledge. A resource framework models knowledge as the activation of vast networks of elements called "resources." Much like neurons in the brain, resources that become active can trigger cascading events of activation throughout the broader network. This model emphasizes the connectivity between knowledge elements and provides a description of students' knowledge base. Together with resources, the concepts of "epistemic games" and "frames" provide a means for addressing the interaction between content knowledge and practices. Although this framework has generally been limited to describing conceptual and mathematical understanding, it also provides a means for addressing students' programming practices. In this dissertation, I will demonstrate this facet of a resource framework as well as fill in an important missing piece: a set of epistemic games that can describe students' computational modeling strategies. The development of this theoretical framework emerged from the analysis of video data of students generating computational models during the laboratory component of a Matter & Interactions: Modern Mechanics course. Student participants across two semesters were recorded as they worked in groups to fix pre-written computational models that were initially missing key lines of code.
Analysis of this video data showed that the students' programming practices were highly influenced by their existing physics content knowledge, particularly their knowledge of analytic procedures. While this existing knowledge was often applied in inappropriate circumstances, the students were still able to display a considerable amount of understanding of the physics content and of analytic solution procedures. These observations could not be adequately accommodated by the existing literature of programming comprehension. In extending the resource framework to the task of computational modeling, I model students' practices in terms of three important elements. First, a knowledge base includes resources for understanding physics, math, and programming structures. Second, a mechanism for monitoring and control describes students' expectations as being directed towards numerical, analytic, qualitative, or rote solution approaches, and can be influenced by the problem representation. Third, a set of solution approaches---many of which were identified in this study---describes what aspects of the knowledge base students use and how they use that knowledge to enact their expectations. This framework allows us as researchers to track student discussions and pinpoint the source of difficulties. This work opens up many avenues of potential research. First, this framework gives researchers a vocabulary for extending Resource Theory to other domains of instruction, such as modeling how physics students use graphs. Second, this framework can be used as the basis for modeling expert physicists' programming practices. Important instructional implications also follow from this research. Namely, as we broaden the use of computational modeling in the physics classroom, our instructional practices should focus on helping students understand the step-by-step nature of programming in contrast to the already salient analytic procedures.
Computer laboratory in medical education for medical students.
Hercigonja-Szekeres, Mira; Marinović, Darko; Kern, Josipa
2009-01-01
Five generations of second year students at the Zagreb University School of Medicine were interviewed through an anonymous questionnaire on their use of personal computers, Internet, computer laboratories and computer-assisted education in general. Results show an advance in students' usage of information and communication technology during the period from 1998/99 to 2002/03. However, their positive opinion about computer laboratory depends on installed capacities: the better the computer laboratory technology, the better the students' acceptance and use of it.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duignan, Timothy T.; Baer, Marcel D.; Schenter, Gregory K.
Determining the solvation free energies of single ions in water is one of the most fundamental problems in physical chemistry, and yet many unresolved questions remain. In particular, the ability to decompose the solvation free energy into simple and intuitive contributions will have important implications for coarse-grained models of electrolyte solutions. Here, we provide rigorous definitions of the various types of single-ion solvation free energies based on different simulation protocols. We calculate solvation free energies of charged hard spheres using density functional theory interaction potentials with molecular dynamics simulation (DFT-MD) and isolate the effects of charge and cavitation, comparing to the Born (linear response) model. We show that using uncorrected Ewald summation leads to highly unphysical values for the solvation free energy and that charging free energies for cations are approximately linear as a function of charge, but that there is a small non-linearity for small anions. The charge hydration asymmetry (CHA) for hard spheres, determined with quantum mechanics, is much larger than for the analogous real ions. This suggests that real ions, particularly anions, are significantly more complex than simple charged hard spheres, a commonly employed representation. We would like to thank Thomas Beck, Shawn Kathmann, Richard Remsing and John Weeks for helpful discussions. Computing resources were generously allocated by PNNL's Institutional Computing program. This research also used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. TTD, GKS, and CJM were supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences.
MDB was supported by the MS3 (Materials Synthesis and Simulation Across Scales) Initiative, a Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory (PNNL). PNNL is a multi-program national laboratory operated by Battelle for the U.S. Department of Energy.
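The Born (linear response) model used as a baseline in this abstract has a simple closed form, ΔG = -(z²e²N_A / 8πε₀a)(1 - 1/ε_r). A minimal Python sketch; the cavity radius and permittivity below are illustrative assumptions, not values from the study:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
N_A = 6.02214076e23          # Avogadro constant, 1/mol

def born_energy_kj_per_mol(z, radius_nm, eps_r=78.4):
    """Born solvation free energy (kJ/mol) of a charge z*e in a spherical
    cavity of the given radius, in a dielectric continuum eps_r (water)."""
    a = radius_nm * 1e-9
    dg = -(z**2 * E_CHARGE**2 * N_A) / (8 * math.pi * EPS0 * a) * (1 - 1/eps_r)
    return dg / 1000.0

# Illustrative monovalent ion with an assumed 0.14 nm cavity radius.
print(born_energy_kj_per_mol(1, 0.14))
```

The quadratic dependence on z is the "approximately linear charging free energy" referenced above, since the charging free energy derivative is linear in charge under this linear-response picture.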
The Penn State ORSER system for processing and analyzing ERTS and other MSS data
NASA Technical Reports Server (NTRS)
Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator); Borden, F. Y.; Weeden, H. A.
1974-01-01
The author has identified the following significant results. The Office for Remote Sensing of Earth Resources (ORSER) of the Space Science and Engineering Laboratory at the Pennsylvania State University has developed an extensive operational system for processing and analyzing ERTS-1 and similar multispectral data. The ORSER system was developed for use by a wide variety of researchers working in remote sensing. Both photointerpretive techniques and automatic computer processing methods have been developed and used, separately and in a combined approach. A Remote Job Entry system permits use of an IBM 370/168 computer from any compatible remote terminal, including equipment tied in by long-distance telephone connections. An elementary cost analysis has been prepared for the processing of ERTS data.
Controlling user access to electronic resources without password
Smith, Fred Hewitt
2015-06-16
Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes pre-determining an association of the restricted computer resource and computer-resource-proximal environmental information. Indicia of user-proximal environmental information are received from a user requesting access to the restricted computer resource. Received indicia of user-proximal environmental information are compared to the associated computer-resource-proximal environmental information. User access to the restricted computer resource is selectively granted responsive to a favorable comparison, in which the user-proximal environmental information is sufficiently similar to the computer-resource-proximal environmental information. In at least some embodiments, the process further includes receiving a user-supplied biometric measure and comparing it with a predetermined association of at least one biometric measure of an authorized user. Access to the restricted computer resource is granted in response to a favorable comparison.
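The granting flow in this abstract can be sketched in a few lines. The environmental fields, the similarity measure, and the threshold below are invented for illustration; the patent does not specify them:

```python
def similarity(a, b):
    """Fraction of environmental fields on which two readings agree
    (a stand-in for the patent's 'sufficiently similar' comparison)."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def grant_access(resource_env, user_env, biometric_ok, threshold=0.8):
    """Grant access only when the user-proximal environment matches the
    resource-proximal environment closely enough, and the optional
    biometric comparison (per some embodiments) also succeeds."""
    return biometric_ok and similarity(resource_env, user_env) >= threshold

resource_env = {"wifi_ssid": "lab-net", "building": "B12", "noise": "low"}
user_env     = {"wifi_ssid": "lab-net", "building": "B12", "noise": "high"}

# Only 2 of 3 fields match (0.67 < 0.8), so access is denied.
print(grant_access(resource_env, user_env, biometric_ok=True))  # prints False
```

Lowering the hypothetical threshold to 0.6 would flip this decision, which is the tunable "sufficiently similar" knob in the claimed process.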
Templet Web: the use of volunteer computing approach in PaaS-style cloud
NASA Astrophysics Data System (ADS)
Vostokin, Sergei; Artamonov, Yuriy; Tsarev, Daniil
2018-03-01
This article presents the Templet Web cloud service. The service is designed for the automation of high-performance scientific computing. The use of high-performance technology is specifically required by new fields of computational science such as data mining, artificial intelligence, machine learning, and others. Cloud technologies provide a significant cost reduction for high-performance scientific applications. The main objectives to achieve this cost reduction in the Templet Web service design are: (a) the implementation of "on-demand" access; (b) source code deployment management; (c) automation of the development of high-performance computing programs. The distinctive feature of the service is an approach mainly used in the field of volunteer computing, in which a person who has access to a computer system delegates his or her access rights to the requesting user. We developed an access procedure, algorithms, and software for the utilization of free computational resources of the academic cluster system in line with the methods of volunteer computing. The Templet Web service has been in operation for five years. It has been successfully used for conducting laboratory workshops and solving research problems, some of which are considered in this article. The article also provides an overview of research directions related to service development.
Llewellyn Hilleth Thomas: An appraisal of an under-appreciated polymath
NASA Astrophysics Data System (ADS)
Jackson, John David
2010-02-01
Llewellyn Hilleth Thomas was born in 1903 and died in 1992 at the age of 88. His name is known by most for only two things, Thomas precession and the Thomas-Fermi atom. The many other facets of his career - astrophysics, atomic and molecular physics, nonlinear problems, accelerator physics, magnetohydrodynamics, computer design principles and software and hardware - are largely unknown or forgotten. I review his whole career - his early schooling, his time at Cambridge, then Copenhagen in 1925-26, and back to Cambridge, his move to the US as an assistant professor at Ohio State University in 1929, his wartime years at the Ballistic Research Laboratory, Aberdeen Proving Ground, then in 1946 his new career as a unique resource at IBM's Watson Scientific Computing Laboratory and Columbia University until his first retirement in 1968, and his twilight years at North Carolina State University. Although the Thomas precession and the Thomas-Fermi atom may be the jewels in his crown, his many other accomplishments add to our appreciation of this consummate applied mathematician and physicist.
NASA Technical Reports Server (NTRS)
1998-01-01
This video is a collection of computer animations and live footage showing the construction and assembly of the International Space Station (ISS). Computer animations show the following: (1) ISS fly around; (2) ISS over a sunrise seen from space; (3) the launch of the Zarya Control Module; (4) a Proton rocket launch; (5) the Space Shuttle docking with Zarya and attaching Zarya to the Unity Node; (6) the docking of the Service Module, Zarya, and Unity to Soyuz; (7) the Space Shuttle docking to ISS and installing the Z1 Truss segment and the Pressurized Mating Adapter (PMA); (8) Soyuz docking to the ISS; (9) the Transhab components; and (10) a complete ISS assembly. Live footage shows the construction of Zarya, the Proton rocket, Unity Node, PMA, Service Module, US Laboratory, Italian Multipurpose Logistics Module, US Airlock, and the US Habitation Module. STS-88 Mission Specialists Jerry Ross and James Newman are seen training in the Neutral Buoyancy Laboratory (NBL). The Expedition 1 crewmembers, William Shepherd, Yuri Gidzenko, and Sergei Krikalev, are shown training in the Black Sea and at Johnson Space Flight Center for water survival.
NASA Technical Reports Server (NTRS)
1990-01-01
NASA formally launched Project LASER (Learning About Science, Engineering and Research) in March 1990, a program designed to help teachers improve science and mathematics education and to provide 'hands on' experiences. It featured the first LASER Mobile Teacher Resource Center (MTRC), designed to reach educators all over the nation. NASA hopes to operate several MTRCs with funds provided by private industry. The mobile unit is a 22-ton tractor-trailer stocked with NASA educational publications and outfitted with six work stations. Each work station, which can accommodate two teachers at a time, has a computer providing access to NASA Spacelink. Each also has video recorders and photocopy/photographic equipment for the teacher's use. MTRC is only one of the five major elements within LASER. The others are: a Space Technology Course, to promote integration of space science studies with traditional courses; the Volunteer Databank, in which NASA employees are encouraged to volunteer as tutors, instructors, etc.; Mobile Discovery Laboratories that will carry simple laboratory equipment and computers to provide hands-on activities for students and demonstrations of classroom activities for teachers; and the Public Library Science Program, which will present library-based science and math programs.
Effect of nacelle on wake meandering in a laboratory scale wind turbine using LES
NASA Astrophysics Data System (ADS)
Foti, Daniel; Yang, Xiaolei; Guala, Michele; Sotiropoulos, Fotis
2015-11-01
Wake meandering, a large-scale motion in wind turbine wakes, has considerable effects on the velocity deficit and turbulence intensity in the turbine wake, from laboratory-scale to utility-scale wind turbines. In the dynamic wake meandering model, the wake meandering is assumed to be caused by large-scale atmospheric turbulence. On the other hand, Kang et al. (J. Fluid Mech., 2014) demonstrated that the nacelle geometry has a significant effect on the wake meandering of a hydrokinetic turbine, through the interaction of the inner wake of the nacelle vortex with the outer wake of the tip vortices. In this work, the significance of the nacelle on the wake meandering of a miniature wind turbine previously used in experiments (Howard et al., Phys. Fluids, 2015) is demonstrated with large eddy simulations (LES) using an immersed boundary method with grids fine enough to resolve the turbine geometric characteristics. The three-dimensionality of the wake meandering is analyzed in detail through turbulent spectra and meander reconstruction. The computed flow fields exhibit wake dynamics similar to those observed in the wind tunnel experiments and are analyzed to shed new light into the role of the energetic nacelle vortex on wake meandering. This work was supported by the Department of Energy (DE-EE0002980, DE-EE0005482 and DE-AC04-94AL85000) and Sandia National Laboratories. Computational resources were provided by Sandia National Laboratories and the University of Minnesota Supercomputing.
NASA Astrophysics Data System (ADS)
Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.
2011-12-01
With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware, including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system.
Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
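NRM itself is built on JPPF, a Java framework, but the task-farming pattern it describes (subdividing a parallelizable job into independent processing tasks and fanning them out across workers) can be sketched with Python's standard library; here threads on one machine stand in for the cluster's networked nodes, and `correlate` is a made-up stand-in for one work unit such as correlating a waveform segment against archival data:

```python
from concurrent.futures import ThreadPoolExecutor

def correlate(segment):
    """Stand-in for one independent work unit (hypothetical example:
    correlating one waveform segment against archival data)."""
    return sum(x * x for x in segment)

# Subdivide the job into independent tasks, then fan them out to workers;
# pool.map preserves task order in the gathered results.
segments = [list(range(i, i + 1000)) for i in range(0, 8000, 1000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(correlate, segments))

print(len(results))  # one result per submitted task
```

The speedup NRM reports comes from exactly this property: because the tasks share no state, they can run on any mix of cores or hosts and be gathered in order afterward.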
Importance/performance analysis: a tool for service quality control by clinical laboratories.
Scammon, D L; Weiss, R
1991-01-01
A study of customer satisfaction with clinical laboratory service is used as the basis for identifying potential improvements in service and more effectively targeting marketing activities to enhance customer satisfaction. Data on customer satisfaction are used to determine the aspects of service most critical to customers, how well the organization is doing in delivery of service, and how consistent service delivery is. Importance-performance analysis is used to highlight areas for future resource reallocation and strategic emphasis. Suggestions include the establishment of performance guidelines for customer contact personnel, the enhancement of timely delivery of reports via electronic transmission (computer and fax), and the development of standardized graphics for request and report forms to facilitate identification of appropriate request forms and guide clients to key items of information on reports.
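Importance-performance analysis places each service attribute on a two-axis grid and reads off a strategic action per quadrant. A minimal sketch using the standard IPA quadrant labels; the rating scale, cutoffs, and laboratory attributes below are invented for illustration:

```python
def ipa_quadrant(importance, performance, imp_cut=3.0, perf_cut=3.0):
    """Classify one service attribute on a 1-5 rating scale into the
    four standard importance-performance quadrants."""
    if importance >= imp_cut and performance < perf_cut:
        return "concentrate here"        # critical to customers, under-delivered
    if importance >= imp_cut:
        return "keep up the good work"   # critical and well-delivered
    if performance >= perf_cut:
        return "possible overkill"       # resources could be reallocated
    return "low priority"

# Hypothetical clinical-laboratory attributes: (importance, performance).
attributes = {
    "report turnaround": (4.6, 2.8),
    "courier pickup":    (4.2, 4.1),
    "fax delivery":      (2.1, 4.5),
}
for name, (imp, perf) in attributes.items():
    print(f"{name}: {ipa_quadrant(imp, perf)}")
```

The "possible overkill" quadrant is what drives the resource-reallocation suggestions in the abstract: effort spent exceeding expectations on unimportant attributes can be redirected to the "concentrate here" cell.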
NASA Astrophysics Data System (ADS)
Kun, Luis G.
1994-12-01
On October 18, 1991, the IEEE-USA produced an entity statement which endorsed the vital importance of the High Performance Computer and Communications Act of 1991 (HPCC) and called for the rapid implementation of all its elements. Efforts are now underway to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the HPCC, and the so-called 'Patient Card'. Multiple legislative initiatives which address these and related information technology issues are pending in Congress. Clearly, a national information system will greatly affect the way health care delivery is provided to the United States public. Timely and reliable information represents a critical element in any initiative to reform the health care system as well as to protect and improve the health of every person. Appropriately used, information technologies offer a vital means of improving the quality of patient care, increasing access to universal care and lowering overall costs within a national health care program. Health care reform legislation should reflect increased budgetary support and a legal mandate for the creation of a national health care information system by: (1) constructing a National Information Infrastructure; (2) building a Computer Based Patient Record System; (3) bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; and (4) utilizing Government (e.g. DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs and accelerate technology transfer to address health care issues.
U.S. Army Research Laboratory (ARL) multimodal signatures database
NASA Astrophysics Data System (ADS)
Bennett, Kelly
2008-04-01
The U.S. Army Research Laboratory (ARL) Multimodal Signatures Database (MMSDB) is a centralized collection of sensor data of various modalities that are co-located and co-registered. The signatures include ground and air vehicles, personnel, mortar, artillery, small arms gunfire from potential sniper weapons, explosives, and many other high value targets. This data is made available to Department of Defense (DoD) and DoD contractors, intelligence agencies, other government agencies (OGA), and academia for use in developing target detection, tracking, and classification algorithms and systems to protect our Soldiers. A platform independent Web interface disseminates the signatures to researchers and engineers within the scientific community. Hierarchical Data Format 5 (HDF5) signature models provide an excellent solution for the sharing of complex multimodal signature data for algorithmic development and database requirements. Many open source tools for viewing and plotting HDF5 signatures are available over the Web. Seamless integration of HDF5 signatures is possible in both proprietary computational environments, such as MATLAB, and Free and Open Source Software (FOSS) computational environments, such as Octave and Python, for performing signal processing, analysis, and algorithm development. Future developments include extending the Web interface into a portal system for accessing ARL algorithms and signatures, High Performance Computing (HPC) resources, and integrating existing database and signature architectures into sensor networking environments.
2001-01-01
The high-tech art of digital signal processing (DSP) was pioneered at NASA's Jet Propulsion Laboratory (JPL) in the mid-1960s for use in the Apollo Lunar Landing Program. Designed to computer-enhance pictures of the Moon, this technology became the basis for the Landsat Earth resources satellites and subsequently has been incorporated into a broad range of Earthbound medical and diagnostic tools. DSP is employed in advanced body imaging techniques including Computer-Aided Tomography, also known as CT and CATScan, and Magnetic Resonance Imaging (MRI). CT images are collected by irradiating a thin slice of the body with a fan-shaped x-ray beam from a number of directions around the body's perimeter. A tomographic (slice-like) picture is reconstructed from these multiple views by a computer. MRI employs a magnetic field and radio waves, rather than x-rays, to create images. In this photograph, a patient undergoes an open MRI.
Using the Computer as a Laboratory Instrument.
ERIC Educational Resources Information Center
Collings, Peter J.; Greenslade, Thomas B., Jr.
1989-01-01
Reports experiences during a two-year period in introducing the computer to the laboratory and students to the computer as a laboratory instrument. Describes a working philosophy, data acquisition system, and experiments. Summarizes the laboratory procedures of nine experiments, covering mechanics, heat, electromagnetism, and optics. (YP)
Research on elastic resource management for multi-queue under cloud computing environment
NASA Astrophysics Data System (ADS)
CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang
2017-10-01
As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for the cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the job queues in HTCondor, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. In practice, virtual computing resources dynamically expand or shrink as computing requirements change. Additionally, the CPU utilization ratio of computing resources increased significantly compared with traditional resource management. The system also performs well when there are multiple HTCondor schedulers and multiple job queues.
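A minimal sketch of the dual-threshold idea described above, with illustrative threshold values and a per-queue quota (the names and numbers are assumptions, not taken from the IHEP implementation):

```python
# Hedged sketch of a dual-threshold elastic scaling decision for one
# HTCondor job queue. Thresholds and the quota model are illustrative.

def scaling_decision(idle_jobs, idle_nodes, quota, active_nodes,
                     expand_threshold=10, shrink_threshold=2):
    """Return how many virtual nodes to add (positive) or remove (negative).

    Expansion triggers when too many jobs wait in the queue; shrinkage
    triggers when too many virtual nodes sit idle. The per-queue quota
    caps the total number of nodes the queue may hold.
    """
    if idle_jobs >= expand_threshold:
        # Expand toward the demand, but never beyond the queue's quota.
        wanted = min(idle_jobs, quota - active_nodes)
        return max(wanted, 0)
    if idle_nodes >= shrink_threshold:
        # Shrink by releasing the idle virtual machines.
        return -idle_nodes
    return 0
```

A controller loop would call this per queue and translate the result into OpenStack boot or delete requests.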
NASA Technical Reports Server (NTRS)
Hoffer, R. M.
1975-01-01
Skylab data were obtained over a mountainous test site containing a complex association of cover types and rugged topography. The application of computer-aided analysis techniques to the multispectral scanner data produced a number of significant results. Techniques were developed to digitally overlay topographic data (elevation, slope, and aspect) onto the S-192 MSS data to provide a method for increasing the effectiveness and accuracy of computer-aided analysis techniques for cover type mapping. The S-192 MSS data were analyzed using computer techniques developed at Laboratory for Applications of Remote Sensing (LARS), Purdue University. Land use maps, forest cover type maps, snow cover maps, and area tabulations were obtained and evaluated. These results compared very well with information obtained by conventional techniques. Analysis of the spectral characteristics of Skylab data has conclusively proven the value of the middle infrared portion of the spectrum (about 1.3-3.0 micrometers), a wavelength region not previously available in multispectral satellite data.
Laboratory Directed Research and Development Annual Report for 2009
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Pamela J.
This report documents progress made on all LDRD-funded projects during fiscal year 2009. As a US Department of Energy (DOE) Office of Science (SC) national laboratory, Pacific Northwest National Laboratory (PNNL) has an enduring mission to bring molecular and environmental sciences and engineering strengths to bear on DOE missions and national needs. Their vision is to be recognized worldwide and valued nationally for leadership in accelerating the discovery and deployment of solutions to challenges in energy, national security, and the environment. To achieve this mission and vision, they provide distinctive, world-leading science and technology in: (1) the design and scalable synthesis of materials and chemicals; (2) climate change science and emissions management; (3) efficient and secure electricity management from generation to end use; and (4) signature discovery and exploitation for threat detection and reduction. PNNL leadership also extends to operating EMSL: the Environmental Molecular Sciences Laboratory, a national scientific user facility dedicated to providing integrated experimental and computational resources for discovery and technological innovation in the environmental molecular sciences.
An Overview of the Computational Physics and Methods Group at Los Alamos National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Randal Scott
CCS Division was formed to strengthen the visibility and impact of computer science and computational physics research on strategic directions for the Laboratory. Both computer science and computational science are now central to scientific discovery and innovation. They have become indispensable tools for all other scientific missions at the Laboratory. CCS Division forms a bridge between external partners and Laboratory programs, bringing new ideas and technologies to bear on today's important problems and attracting high-quality technical staff members to the Laboratory. The Computational Physics and Methods Group CCS-2 conducts methods research and develops scientific software aimed at the latest and emerging HPC systems.
Implementing a resource management program for accreditation process at the medical laboratory.
Yenice, Sedef
2009-03-01
To plan for and provide adequate resources to meet the mission and goals of a medical laboratory in compliance with the requirements for laboratory accreditation by Joint Commission International. The related policies and procedures were developed based on standard requirements for resource management. Competency assessment provided continuing education and performance feedback to laboratory employees. Laboratory areas were designed for the efficient and safe performance of laboratory work. A physical environment was built up where hazards were controlled and personnel activities were managed to reduce the risk of injuries. An Employees Occupational Safety and Health Program (EOSHP) was developed to address all types of hazardous materials and wastes. Guidelines were defined to verify that the methods would produce accurate and reliable results. An active resource management program will be an effective way of assuring that systems are in control and continuous improvement is in progress.
Utility functions and resource management in an oversubscribed heterogeneous computing environment
Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; ...
2014-09-26
We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions designed from specifications by the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise earns based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
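The drop-and-select step can be sketched in a few lines. The exponential decay model and the drop threshold below are illustrative assumptions; the paper's actual utility functions and heuristics are richer:

```python
# Hedged sketch of one scheduling step with dropping of low-utility
# tasks. Utility decays with task age; the decay model is assumed.
import math

def utility(task, now):
    """Time-varying utility: importance decays exponentially after arrival."""
    age = now - task["arrival"]
    return task["importance"] * math.exp(-task["decay"] * age)

def schedule_step(queue, now, drop_below=0.1):
    """Drop tasks whose current utility fell below the threshold, then
    return the highest-utility task to execute (None if queue empties)."""
    kept = [t for t in queue if utility(t, now) >= drop_below]
    queue[:] = kept  # dropped tasks never earn utility, but free capacity
    if not kept:
        return None
    return max(kept, key=lambda t: utility(t, now))
```

Dropping is what lets the scheduler survive oversubscription: capacity is spent only on tasks that can still earn meaningful utility.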
Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds
NASA Astrophysics Data System (ADS)
Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni
2012-09-01
Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. In particular, each new user session request requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
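The hierarchical idea can be sketched as a data center split into independently managed clusters, with a session's processing chain placed by a simple policy. The first-fit policy and the capacity model below are illustrative assumptions, not the paper's algorithms:

```python
# Hedged sketch: allocate an SDR session's processing chain to the first
# cluster with enough spare computing capacity (capacities are abstract
# units, e.g. operations per second).

def allocate_session(clusters, demand):
    """clusters: list of dicts with 'capacity' and 'used'.
    Returns the index of the accepting cluster, or None if rejected."""
    for i, c in enumerate(clusters):
        if c["capacity"] - c["used"] >= demand:
            c["used"] += demand  # reserve resources for the transceiver chain
            return i
    return None  # all clusters saturated: session request rejected
```

Smarter placement (e.g. best-fit, or load balancing across clusters) raises resource occupation at the cost of per-request work, which is the tradeoff the abstract reports.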
Procedure for extraction of disparate data from maps into computerized data bases
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1979-01-01
A procedure is presented for extracting disparate sources of data from geographic maps and for the conversion of these data into a suitable format for processing on a computer-oriented information system. Several graphic digitizing considerations are included and related to the NASA Earth Resources Laboratory's Digitizer System. Current operating procedures for the Digitizer System are given in a simplified and logical manner. The report serves as a guide to those organizations interested in converting map-based data by using a comparable map digitizing system.
DB4US: A Decision Support System for Laboratory Information Management
Hortas, Maria Luisa; Baena-García, Manuel; Lana-Linati, Jorge; González, Carlos; Redondo, Maximino; Morales-Bueno, Rafael
2012-01-01
Background Until recently, laboratory automation has focused primarily on improving hardware. Future advances are concentrated on intelligent software since laboratories performing clinical diagnostic testing require improved information systems to address their data processing needs. In this paper, we propose DB4US, an application that automates the management of laboratory quality indicator information. Currently, there is a lack of ready-to-use management quality measures. This application addresses this deficiency through the extraction, consolidation, statistical analysis, and visualization of data related to the use of demographics, reagents, and turn-around times. The design and implementation issues, as well as the technologies used for the implementation of this system, are discussed in this paper. Objective To develop a general methodology that integrates the computation of ready-to-use management quality measures and a dashboard to easily analyze the overall performance of a laboratory, as well as automatically detect anomalies or errors. The novelty of our approach lies in the application of integrated web-based dashboards as an information management system in hospital laboratories. Methods We propose a new methodology for laboratory information management based on the extraction, consolidation, statistical analysis, and visualization of data related to demographics, reagents, and turn-around times, offering a dashboard-like user web interface to the laboratory manager. The methodology comprises a unified data warehouse that stores and consolidates multidimensional data from different data sources. The methodology is illustrated through the implementation and validation of DB4US, a novel web application based on this methodology that constructs an interface to obtain ready-to-use indicators, and offers the possibility to drill down from high-level metrics to more detailed summaries.
The offered indicators are calculated beforehand so that they are ready to use when the user needs them. The design is based on a set of different parallel processes to precalculate indicators. The application displays information related to tests, requests, samples, and turn-around times. The dashboard is designed to show the set of indicators on a single screen. Results DB4US was deployed for the first time in the Hospital Costa del Sol in 2008. In our evaluation we show the positive impact of this methodology for laboratory professionals, since the use of our application has reduced the time needed for the elaboration of the different statistical indicators and has also provided information that has been used to optimize the usage of laboratory resources by the discovery of anomalies in the indicators. DB4US users benefit from Internet-based communication of results, since this information is available from any computer without having to install any additional software. Conclusions The proposed methodology and the accompanying web application, DB4US, automates the processing of information related to laboratory quality indicators and offers a novel approach for managing laboratory-related information, benefiting from an Internet-based communication mechanism. The application of this methodology has been shown to improve the usage of time, as well as other laboratory resources. PMID:23608745
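Precalculating an indicator of the kind described above can be sketched in a few lines. The field names and the choice of the median as the summary statistic are illustrative assumptions, not DB4US's actual schema:

```python
# Hedged sketch of one precomputed quality indicator: turn-around time
# from sample received to result validated, summarized per batch.
from datetime import datetime, timedelta
from statistics import median

def turnaround_minutes(requests):
    """Median turn-around time in minutes for a batch of lab requests.
    Each request carries 'received' and 'validated' datetimes."""
    deltas = [
        (r["validated"] - r["received"]).total_seconds() / 60.0
        for r in requests
    ]
    return median(deltas)
```

A background process would run such computations on the data warehouse on a schedule, so the dashboard only reads stored values.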
NASA Astrophysics Data System (ADS)
Schulthess, Thomas C.
2013-03-01
The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.
Signal and image processing algorithm performance in a virtual and elastic computing environment
NASA Astrophysics Data System (ADS)
Bennett, Kelly W.; Robertson, James
2013-05-01
The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large amount of data, and its associated high-performance computing needs, strains and challenges existing computing infrastructures. Purchasing computer power as a commodity using a Cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions to develop and optimize algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results provide performance comparisons with existing infrastructure. A discussion of using cloud computing with government data covers the best security practices that exist within cloud services such as AWS.
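A "simple signal processing algorithm" of the kind deployed to the cloud can be as small as a running-mean smoother applied to an acoustic trace. This stdlib-only sketch is illustrative, not ARL's code:

```python
# Hedged sketch: O(n) moving-average smoother for a 1-D sensor trace,
# the sort of per-file kernel one would fan out across cloud workers.

def moving_average(signal, window=5):
    """Smooth a 1-D signal with a running mean of the given window."""
    if window < 1 or window > len(signal):
        raise ValueError("window must be in [1, len(signal)]")
    out = []
    acc = sum(signal[:window])
    out.append(acc / window)
    for i in range(window, len(signal)):
        acc += signal[i] - signal[i - window]  # slide the window in O(1)
        out.append(acc / window)
    return out
```

The cloud advantage comes not from the kernel itself but from running thousands of such independent jobs elastically, one per archived signature file.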
A study of computer graphics technology in application of communication resource management
NASA Astrophysics Data System (ADS)
Li, Jing; Zhou, Liang; Yang, Fei
2017-08-01
With the development of computer technology, computer graphics technology has been widely used. In particular, the success of object-oriented technology and multimedia technology has promoted the development of graphics technology in computer software systems. Computer graphics theory and application technology have therefore become an important topic in the computer field, while applications of computer graphics technology grow ever more extensive. In recent years, with the development of the social economy, and especially the rapid development of information technology, traditional approaches to communication resource management cannot effectively meet the needs of resource management. Communication resource management still relies on the original tools and methods for equipment management and maintenance, which has caused many problems: it is very difficult for non-professionals to understand the equipment and its status, resource utilization is relatively low, and managers cannot quickly and accurately assess resource conditions. Aimed at the above problems, this paper proposes to introduce computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more vivid, but also reduces the cost of resource management and improves work efficiency.
Remote photonic metrology in the conservation of cultural heritage
NASA Astrophysics Data System (ADS)
Tornari, Vivi; Pedrini, G.; Osten, W.
2013-05-01
Photonic technologies play a leading innovative role in research in the fields of Cultural Heritage (CH) conservation, preservation and digitisation. In particular, photonic technologies have introduced a new and indispensable era of research in the conservation of cultural artefacts, spanning decorative objects, paintings, sculptures, monuments and archaeological sites, and including fields of application as diverse as materials characterisation, restoration practices, defect topography and 3D artwork reconstruction. Thus, over the last two decades photonic technologies have emerged as the unique answer or most competitive alternative in many long-standing disputes in the conservation and restoration of Cultural Heritage. Despite the impressive advances on the state of the art, ranging from custom-made system development to new methods and practices, photonic research and technological developments remain incoherently scattered and fragmented, with a significant amount of duplicated work and misused resources. In this context, further progress should aim to capitalise on the milestones achieved so far across the diverse applications that have flourished in the field of CH. Embedding of experimental facilities and conclusions seems the only way to secure progress beyond the existing state of the art and prevent its misuse. This embedment seems possible through new computing environments. A cloud computing environment with remote laboratory access holds the missing research objective to bring the leading research together and integrate the achievements. The cloud environment would allow experts from museums, galleries and historical sites, art historians, conservators, scientists and technologists, conservation and technical laboratories, and SMEs to interlink their research, communicate their achievements and share data and resources.
The main instrument of this integration is the creation of a common research platform, termed here the Virtual Laboratory, allowing not only remote research, inspection and evaluation, but also providing the results to members and the public with instant and simultaneous access to the necessary information, knowledge and technologies. This paper presents the concept and first results confirming the potential of implementing metrology techniques as remote digital laboratory facilities in artwork structural assessment. The method paves the way for the general objective of introducing remote photonic technologies into the sensitive field of Cultural Heritage.
Idaho National Laboratory Cultural Resource Management Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julie Braun Williams
As a federal agency, the U.S. Department of Energy has been directed by Congress, the U.S. president, and the American public to provide leadership in the preservation of prehistoric, historic, and other cultural resources on the lands it administers. This mandate to preserve cultural resources in a spirit of stewardship for the future is outlined in various federal preservation laws, regulations, and guidelines such as the National Historic Preservation Act, the Archaeological Resources Protection Act, and the National Environmental Policy Act. The purpose of this Cultural Resource Management Plan is to describe how the Department of Energy, Idaho Operations Office will meet these responsibilities at Idaho National Laboratory in southeastern Idaho. The Idaho National Laboratory is home to a wide variety of important cultural resources representing at least 13,500 years of human occupation in the southeastern Idaho area. These resources are nonrenewable, bear valuable physical and intangible legacies, and yield important information about the past, present, and perhaps the future. There are special challenges associated with balancing the preservation of these sites with the management and ongoing operation of an active scientific laboratory. The Department of Energy, Idaho Operations Office is committed to a cultural resource management program that accepts these challenges in a manner reflecting both the spirit and intent of the legislative mandates. This document is designed for multiple uses and is intended to be flexible and responsive to future changes in law or mission. Document flexibility and responsiveness will be assured through regular reviews and as-needed updates. Document content includes summaries of Laboratory cultural resource philosophy and overall Department of Energy policy; brief contextual overviews of Laboratory missions, environment, and cultural history; and an overview of cultural resource management practices.
A series of appendices provides important details that support the main text.
Visualizing Coolant Flow in Sodium Reactor Subassemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-01-01
Uniformity of temperature controls peak power output. Interchannel cross-flow is the principal cross-assembly energy transport mechanism. The areas of fastest flow all occur at the exterior of the assembly. Further, the fast-moving region winds around the assembly in a continuous swath. This Nek5000 simulation uses an unstructured mesh with over one billion grid points, resulting in five billion degrees of freedom per time slice. High-speed patches of turbulence due to vortex shedding downstream of the wires persist for about a quarter of the wire-wrap periodic length. Credits: Science: Paul Fischer and Aleks Obabko, Argonne National Laboratory. Visualization: Hank Childs and Janet Jacobsen, Lawrence Berkeley National Laboratory. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Dept. of Energy under contract DE-AC02-06CH11357. This research was sponsored by the Department of Energy's Office of Nuclear Energy's NEAMS program.
Laboratory Directed Research and Development FY2011 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig, W; Sketchley, J; Kotta, P
2012-03-22
A premier applied-science laboratory, Lawrence Livermore National Laboratory (LLNL) has earned the reputation as a leader in providing science and technology solutions to the most pressing national and global security problems. The LDRD Program, established by Congress at all DOE national laboratories in 1991, is LLNL's most important single resource for fostering excellent science and technology for today's needs and tomorrow's challenges. The LDRD internally directed research and development funding at LLNL enables high-risk, potentially high-payoff projects at the forefront of science and technology. The LDRD Program at Livermore serves to: (1) Support the Laboratory's missions, strategic plan, and foundational science; (2) Maintain the Laboratory's science and technology vitality; (3) Promote recruiting and retention; (4) Pursue collaborations; (5) Generate intellectual property; and (6) Strengthen the U.S. economy. Myriad LDRD projects over the years have made important contributions to every facet of the Laboratory's mission and strategic plan, including its commitment to nuclear, global, and energy and environmental security, as well as cutting-edge science and technology and engineering in high-energy-density matter, high-performance computing and simulation, materials and chemistry at the extremes, information systems, measurements and experimental science, and energy manipulation. A summary of each project was submitted by the principal investigator. Project summaries include the scope, motivation, goals, relevance to DOE/NNSA and LLNL mission areas, the technical progress achieved in FY11, and a list of publications that resulted from the research.
The projects are: (1) Nuclear Threat Reduction; (2) Biosecurity; (3) High-Performance Computing and Simulation; (4) Intelligence; (5) Cybersecurity; (6) Energy Security; (7) Carbon Capture; (8) Material Properties, Theory, and Design; (9) Radiochemistry; (10) High-Energy-Density Science; (11) Laser Inertial-Fusion Energy; (12) Advanced Laser Optical Systems and Applications; (13) Space Security; (14) Stockpile Stewardship Science; (15) National Security; (16) Alternative Energy; and (17) Climatic Change.
Rosenegger, David G; Tran, Cam Ha T; LeDue, Jeffery; Zhou, Ning; Gordon, Grant R
2014-01-01
Two-photon laser scanning microscopy has revolutionized the ability to delineate cellular and physiological function in acutely isolated tissue and in vivo. However, there exist barriers for many laboratories to acquire two-photon microscopes. Additionally, if owned, typical systems are difficult to modify to rapidly evolving methodologies. A potential solution to these problems is to enable scientists to build their own high-performance and adaptable system by overcoming a resource insufficiency. Here we present a detailed hardware resource and protocol for building an upright, highly modular and adaptable two-photon laser scanning fluorescence microscope that can be used for in vitro or in vivo applications. The microscope is comprised of high-end componentry on a skeleton of off-the-shelf compatible opto-mechanical parts. The dedicated design enabled imaging depths close to 1 mm into mouse brain tissue and a signal-to-noise ratio that exceeded all commercial two-photon systems tested. In addition to a detailed parts list, instructions for assembly, testing and troubleshooting, our plan includes complete three dimensional computer models that greatly reduce the knowledge base required for the non-expert user. This open-source resource lowers barriers in order to equip more laboratories with high-performance two-photon imaging and to help progress our understanding of the cellular and physiological function of living systems. PMID:25333934
NASA Astrophysics Data System (ADS)
Morikawa, Y.; Murata, K. T.; Watari, S.; Kato, H.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Shimojo, S.
2010-12-01
The main methodologies of Solar-Terrestrial Physics (STP) so far have been theoretical, experimental and observational approaches, and computer simulation. Recently, "informatics" is expected to become a new (fourth) approach to STP studies. Informatics is a methodology to analyze large-scale data (observation data and computer simulation data) to obtain new findings using a variety of data processing techniques. At NICT (National Institute of Information and Communications Technology, Japan) we are now developing a new research environment named "OneSpaceNet". The OneSpaceNet is a cloud-computing environment specialized for scientific work, which connects many researchers through a high-speed network (JGN: Japan Gigabit Network). The JGN is a wide-area backbone network operated by NICT; it provides a 10G network and many access points (AP) over Japan. The OneSpaceNet also provides rich computer resources for research, such as supercomputers, large-scale data storage, licensed applications, visualization devices (like a tiled display wall: TDW), database/DBMS, cluster computers (4-8 nodes) for data processing, and communication devices. What is remarkable about the science cloud is that a user simply prepares a terminal (a low-cost PC). Once the PC is connected to JGN2plus, the user can make full use of the rich resources of the science cloud. Using communication devices such as a video-conference system, streaming and reflector servers, and media players, users on the OneSpaceNet can carry out research communications as if they belonged to the same (one) laboratory: they are members of a virtual laboratory. The specification of the computer resources on the OneSpaceNet is as follows: the data storage we have developed so far is almost 1 PB in size. The number of data files managed on the cloud storage keeps growing and is now more than 40,000,000.
What is notable is that the disks forming the large-scale storage are distributed across 5 data centers over Japan (but the storage system performs as one disk). There are three supercomputers allocated on the cloud: one in Tokyo, one in Osaka and the other in Nagoya. Simulation job data from any of the supercomputers are saved on the cloud data storage (in the same directory); it is a kind of virtual computing environment. The tiled display wall has 36 panels acting as one display; its pixel (resolution) size is as large as 18000x4300. This size is enough to preview or analyze large-scale computer simulation data. It also allows many researchers together to view multiple images (e.g., 100 pictures) on one screen. In our talk we also present a brief report of initial results using the OneSpaceNet for global MHD simulations as an example of successful use of our science cloud: (i) ultra-high time resolution visualization of global MHD simulations on the large-scale storage and parallel processing system on the cloud, (ii) a database of real-time global MHD simulations and statistical analyses of the data, and (iii) a 3D Web service of global MHD simulations.
A resource management architecture based on complex network theory in cloud computing federation
NASA Astrophysics Data System (ADS)
Zhang, Zehua; Zhang, Xuejie
2011-10-01
Cloud Computing Federation is a major trend in Cloud Computing, and resource management has a significant effect on the design, realization, and efficiency of a Cloud Computing Federation. Because a Cloud Computing Federation has the typical characteristics of a complex system, we propose a resource management architecture based on complex network theory for Cloud Computing Federation (abbreviated RMABC) in this paper, with detailed designs of the resource discovery and resource announcement mechanisms. Compared with existing resource management mechanisms in distributed computing systems, a Task Manager in RMABC can use historical information and current state data obtained from other Task Managers to evolve the complex network composed of Task Managers, and thus has advantages in resource discovery speed, fault tolerance, and adaptive ability. The results of the model experiment confirmed the advantage of RMABC in resource discovery performance.
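The paper's full RMABC design is not reproduced in the abstract, but the idea of Task Managers reusing historical answers during discovery can be illustrated with a toy model. All names and the breadth-first strategy below are invented for illustration:

```python
class TaskManager:
    """Toy node in an overlay network of Task Managers (all names invented)."""

    def __init__(self, name, resources):
        self.name = name
        self.resources = set(resources)  # resource types this node offers
        self.neighbors = []              # current edges in the overlay network
        self.history = {}                # resource type -> manager that answered before

    def discover(self, resource, ttl=3):
        """Consult history first, then breadth-first-search the neighbors."""
        if resource in self.history:
            return self.history[resource]
        frontier, seen = list(self.neighbors), {self}
        while frontier and ttl > 0:
            nxt = []
            for tm in frontier:
                if tm in seen:
                    continue
                seen.add(tm)
                if resource in tm.resources:
                    self.history[resource] = tm  # remember the answer for next time
                    return tm
                nxt.extend(tm.neighbors)
            frontier, ttl = nxt, ttl - 1
        return None
```

A second `discover()` call for the same resource type returns immediately from history; caching past answers is one simple way historical information can speed up later lookups, in the spirit of (but not identical to) the mechanism the abstract describes.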
Effectiveness of a computer-based tutorial for teaching how to make a blood smear.
Preast, Vanessa; Danielson, Jared; Bender, Holly; Bousson, Maury
2007-09-01
Computer-aided instruction (CAI) was developed to teach veterinary students how to make blood smears. This instruction was intended to replace the traditional instructional method in order to promote efficient use of faculty resources while maintaining learning outcomes and student satisfaction. The purpose of this study was to evaluate the effect of a computer-aided blood smear tutorial on 1) instructor's teaching time, 2) students' ability to make blood smears, and 3) students' ability to recognize smear quality. Three laboratory sessions for senior veterinary students were taught using traditional methods (control group) and four sessions were taught using the CAI tutorial (experimental group). Students in the control group received a short demonstration and lecture by the instructor at the beginning of the laboratory and then practiced making blood smears. Students in the experimental group received their instruction through the self-paced, multimedia tutorial on a laptop computer and then practiced making blood smears. Data were collected through observation, interviews, survey questionnaires, and smear evaluation by students and experts using a scoring rubric. Students using the CAI made better smears and were better able to recognize smear quality. The average time the instructor spent in the room was not significantly different between groups, but the quality of the instructor time was improved with the experimental instruction. The tutorial implementation effectively provided students and instructors with a teaching and learning experience superior to the traditional method of instruction. Using CAI is a viable method of teaching students to make blood smears.
Technology Systems. Laboratory Activities.
ERIC Educational Resources Information Center
Brame, Ray; And Others
This guide contains 43 modules of laboratory activities for technology education courses. Each module includes an instructor's resource sheet and the student laboratory activity. Instructor's resource sheets include some or all of the following elements: module number, course title, activity topic, estimated time, essential elements, objectives,…
Low-Cost Virtual Laboratory Workbench for Electronic Engineering
ERIC Educational Resources Information Center
Achumba, Ifeyinwa E.; Azzi, Djamel; Stocker, James
2010-01-01
The laboratory component of undergraduate engineering education poses challenges in resource constrained engineering faculties. The cost, time, space and physical presence requirements of the traditional (real) laboratory approach are the contributory factors. These resource constraints may mitigate the acquisition of meaningful laboratory…
Computer-Design Drawing for NASA 2020 Mars Rover
2016-07-15
NASA's 2020 Mars rover mission will go to a region of Mars thought to have offered favorable conditions long ago for microbial life, and the rover will search for signs of past life there. It will also collect and cache samples for potential return to Earth, for many types of laboratory analysis. As a pioneering step toward how humans on Mars will use the Red Planet's natural resources, the rover will extract oxygen from the Martian atmosphere. This 2016 image comes from computer-assisted-design work on the 2020 rover. The design leverages many successful features of NASA's Curiosity rover, which landed on Mars in 2012, but it adds new science instruments and a sampling system to carry out the new goals for the mission. http://photojournal.jpl.nasa.gov/catalog/PIA20759
A resource-sharing model based on a repeated game in fog computing.
Sun, Yan; Zhang, Nan
2017-03-01
With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadically distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise resource supporters so that they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA (service-level agreement) violation rate and accelerate the completion of tasks.
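The abstract does not give the incentive mechanism's details, but the basic logic of a repeated game that rewards sharing can be sketched with invented payoff parameters. This is an illustration of the game-theoretic idea, not the paper's crowd-funding algorithm:

```python
def simulate(rounds, strategies, reward=3.0, cost=1.0, penalty_rounds=2):
    """Toy repeated resource-sharing game with invented payoff parameters.

    Each round, an owner either shares (pays `cost`, earns `reward` from the
    pool) or free-rides (earns `reward` without paying). Detected free-riders
    are excluded from the pool for `penalty_rounds` rounds."""
    payoffs = {name: 0.0 for name in strategies}
    banned = {name: 0 for name in strategies}
    for _ in range(rounds):
        for name, shares in strategies.items():
            if banned[name] > 0:
                banned[name] -= 1                # excluded: no payoff this round
            elif shares:
                payoffs[name] += reward - cost   # sharing has a net positive payoff
            else:
                payoffs[name] += reward          # free-ride once...
                banned[name] = penalty_rounds    # ...then sit out the next rounds

    return payoffs

payoffs = simulate(10, {"sharer": True, "free_rider": False})
```

Over repeated rounds the consistent sharer accumulates more than the free-rider, which is the property an incentive mechanism of this kind is designed to produce.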
Enhanced delegated computing using coherence
NASA Astrophysics Data System (ADS)
Barz, Stefanie; Dunjko, Vedran; Schlederer, Florian; Moore, Merritt; Kashefi, Elham; Walmsley, Ian A.
2016-03-01
A longstanding question is whether it is possible to delegate computational tasks securely—such that neither the computation nor the data is revealed to the server. Recently, both a classical and a quantum solution to this problem were found [C. Gentry, in Proceedings of the 41st Annual ACM Symposium on the Theory of Computing (Association for Computing Machinery, New York, 2009), pp. 167-178; A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science (IEEE Computer Society, Los Alamitos, CA, 2009), pp. 517-526]. Here, we study the first step towards the interplay between classical and quantum approaches and show how coherence can be used as a tool for secure delegated classical computation. We show that a client with limited computational capacity—restricted to an XOR gate—can perform universal classical computation by manipulating information carriers that may occupy superpositions of two states. Using single photonic qubits or coherent light, we experimentally implement secure delegated classical computations between an independent client and a server, which are installed in two different laboratories and separated by 50 m. The server has access to the light sources and measurement devices, whereas the client may use only a restricted set of passive optical devices to manipulate the information-carrying light beams. Thus, our work highlights how minimal quantum and classical resources can be combined and exploited for classical computing.
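A purely classical intuition for why an XOR-limited client can hide data from a server is one-time-pad masking. The sketch below only covers XOR-linear functions (here, parity); it does not capture the paper's quantum protocol, in which coherence lifts such a client to universal computation:

```python
import secrets

def client_mask(bits):
    """Client hides its input bits with a one-time pad; XOR is its only gate."""
    pad = [secrets.randbelow(2) for _ in bits]
    return [b ^ p for b, p in zip(bits, pad)], pad

def server_parity(bits):
    """Server computes the parity (an XOR-linear function) of whatever it receives."""
    acc = 0
    for b in bits:
        acc ^= b
    return acc

def client_unmask(result, pad):
    """Because parity is XOR-linear, the pad's contribution strips off afterwards."""
    for p in pad:
        result ^= p
    return result

secret_input = [1, 0, 1, 1]
masked, pad = client_mask(secret_input)
# The server sees only uniformly random bits, yet the client recovers the answer:
assert client_unmask(server_parity(masked), pad) == server_parity(secret_input)
```

The limitation of this classical trick is exactly the point of the paper: XOR masking alone handles only linear functions, whereas adding quantum superposition extends the delegation to universal classical computation.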
ERIC Educational Resources Information Center
Liu, Xiufeng
2006-01-01
Based on current theories of chemistry learning, this study intends to test a hypothesis that computer modeling enhanced hands-on chemistry laboratories are more effective than hands-on laboratories or computer modeling laboratories alone in facilitating high school students' understanding of chemistry concepts. Thirty-three high school chemistry…
Ernest Orlando Lawrence Berkeley National Laboratory institutional plan, FY 1996--2001
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-11-01
The FY 1996--2001 Institutional Plan provides an overview of the Ernest Orlando Lawrence Berkeley National Laboratory mission, strategic plan, core business areas, critical success factors, and the resource requirements to fulfill its mission in support of national needs in fundamental science and technology, energy resources, and environmental quality. The Laboratory Strategic Plan section identifies long-range conditions that will influence the Laboratory, as well as potential research trends and management implications. The Core Business Areas section identifies those initiatives that are potential new research programs representing major long-term opportunities for the Laboratory, and the resources required for their implementation. It also summarizes current programs and potential changes in research program activity, science and technology partnerships, and university and science education. The Critical Success Factors section reviews human resources; work force diversity; environment, safety, and health programs; management practices; site and facility needs; and communications and trust. The Resource Projections are estimates of required budgetary authority for the Laboratory's ongoing research programs. The Institutional Plan is a management report for integration with the Department of Energy's strategic planning activities, developed through an annual planning process. The plan identifies technical and administrative directions in the context of the national energy policy and research needs and the Department of Energy's program planning initiatives. Preparation of the plan is coordinated by the Office of Planning and Communications from information contributed by the Laboratory's scientific and support divisions.
NASA Astrophysics Data System (ADS)
Falkner, Katrina; Vivian, Rebecca
2015-10-01
To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age children, with the intention to engage children and increase interest, rather than to formally teach concepts and skills. What is the educational quality of existing Computer Science resources, and to what extent are they suitable for classroom learning and teaching? In this paper, an assessment framework is presented to evaluate the quality of online Computer Science resources. Further, a semi-systematic review of available online Computer Science resources was conducted to evaluate resources available for classroom learning and teaching and to identify gaps in resource availability, using the Australian curriculum as a case study analysis. The findings reveal a predominance of quality resources; however, a number of critical gaps were identified. This paper provides recommendations and guidance for the development of new and supplementary resources and future research.
Enabling a Scientific Cloud Marketplace: VGL (Invited)
NASA Astrophysics Data System (ADS)
Fraser, R.; Woodcock, R.; Wyborn, L. A.; Vote, J.; Rankine, T.; Cox, S. J.
2013-12-01
The Virtual Geophysics Laboratory (VGL) provides a flexible, web-based environment where researchers can browse data and use a variety of scientific software packaged into tool kits that run in the Cloud. Both data and tool kits are published by multiple researchers and registered with the VGL infrastructure, forming a data and application marketplace. The VGL provides the basic workflow of Discovery and Access to the disparate data sources and a Library for tool kits and scripting to drive the scientific codes. Computation is then performed on the Research or Commercial Clouds. Provenance information is collected throughout the workflow and can be published alongside the results, allowing for experiment comparison and sharing with other researchers. VGL's "mix and match" approach to data, computational resources and scientific codes enables a dynamic approach to scientific collaboration. VGL allows scientists to publish their specific contribution, be it data, code, compute or workflow, knowing the VGL framework will provide other components needed for a complete application. Other scientists can choose the pieces that suit them best to assemble an experiment. The coarse-grained workflow of the VGL framework combined with the flexibility of the scripting library and computational toolkits allows for significant customisation and sharing amongst the community. The VGL utilises the cloud computational and storage resources from the Australian academic research cloud provided by the NeCTAR initiative and a large variety of data accessible from national and state agencies via the Spatial Information Services Stack (SISS - http://siss.auscope.org). VGL v1.2 screenshot - http://vgl.auscope.org
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drugan, C.
2009-12-07
The word 'breakthrough' aptly describes the transformational science and milestones achieved at the Argonne Leadership Computing Facility (ALCF) throughout 2008. The number of research endeavors undertaken at the ALCF through the U.S. Department of Energy's (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program grew from 9 in 2007 to 20 in 2008. The allocation of computer time awarded to researchers on the Blue Gene/P also spiked significantly, from nearly 10 million processor hours in 2007 to 111 million in 2008. To support this research, we expanded the capabilities of Intrepid, an IBM Blue Gene/P system at the ALCF, to 557 teraflops (TF) for production use. Furthermore, we enabled breakthrough levels of productivity and capability in visualization and data analysis with Eureka, a powerful installation of NVIDIA Quadro Plex S4 external graphics processing units. Eureka delivered a quantum leap in visual compute density, providing more than 111 TF and more than 3.2 terabytes of RAM. On April 21, 2008, the dedication of the ALCF realized DOE's vision to bring the power of the Department's high-performance computing to open scientific research. In June, the IBM Blue Gene/P supercomputer at the ALCF debuted as the world's fastest for open science and third fastest overall. There is no question that the science benefited from this growth and system improvement. Four research projects spearheaded by Argonne National Laboratory computer scientists and ALCF users were named to the list of top ten scientific accomplishments supported by DOE's Advanced Scientific Computing Research (ASCR) program. Three of the top ten projects used extensive grants of computing time on the ALCF's Blue Gene/P to model the molecular basis of Parkinson's disease, design proteins at atomic scale, and create enzymes. As the year came to a close, the ALCF was recognized with several prestigious awards at SC08 in November.
We provided resources for Linear Scaling Divide-and-Conquer Electronic Structure Calculations for Thousand Atom Nanostructures, a collaborative effort between Argonne, Lawrence Berkeley National Laboratory, and Oak Ridge National Laboratory that received the ACM Gordon Bell Prize Special Award for Algorithmic Innovation. The ALCF also was named a winner in two of the four categories in the HPC Challenge best performance benchmark competition.
Dinov, Ivo D; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H V; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D Stott; Toga, Arthur W
2008-05-28
The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. 
We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu.
Toker, Lilah; Rocco, Brad; Sibille, Etienne
2017-01-01
Establishing the molecular diversity of cell types is crucial for the study of the nervous system. We compiled a cross-laboratory database of mouse brain cell type-specific transcriptomes from 36 major cell types from across the mammalian brain using rigorously curated published data from pooled cell type microarray and single-cell RNA-sequencing (RNA-seq) studies. We used these data to identify cell type-specific marker genes, discovering a substantial number of novel markers, many of which we validated using computational and experimental approaches. We further demonstrate that summarized expression of marker gene sets (MGSs) in bulk tissue data can be used to estimate the relative cell type abundance across samples. To facilitate use of this expanding resource, we provide a user-friendly web interface at www.neuroexpresso.org. PMID:29204516
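The abstract's claim that summarized marker gene set (MGS) expression can estimate relative cell type abundance can be sketched numerically. The matrix, gene names, and cell type below are invented; the z-score-then-average summarization is one common way to do this and is not necessarily the authors' exact formula:

```python
import numpy as np

# Toy bulk-tissue expression matrix: rows = genes, columns = samples (values invented).
genes = ["GeneA", "GeneB", "GeneC", "GeneD"]
expr = np.array([
    [5.0, 9.0, 2.0],   # GeneA: a hypothetical marker
    [4.0, 8.0, 1.0],   # GeneB: a hypothetical marker
    [7.0, 7.0, 7.0],   # GeneC: unrelated, flat across samples
    [1.0, 2.0, 9.0],   # GeneD: unrelated gene
])
marker_sets = {"NeuronX": ["GeneA", "GeneB"]}  # invented cell type and markers

def mgs_score(expr, genes, marker_genes):
    """Summarize a marker gene set: z-score each marker across samples, then average.

    A higher score suggests higher relative abundance of the cell type in a sample."""
    idx = [genes.index(g) for g in marker_genes]
    sub = expr[idx]
    z = (sub - sub.mean(axis=1, keepdims=True)) / sub.std(axis=1, keepdims=True)
    return z.mean(axis=0)

scores = mgs_score(expr, genes, marker_sets["NeuronX"])  # one score per sample
```

In this toy data both markers peak in the second sample, so that sample gets the highest MGS score: relative, not absolute, abundance across samples is what the summary captures.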
The AMTEX Partnership™. Fourth quarter FY95 report
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-09-01
The AMTEX Partnership™ is a collaborative research and development program among the US Integrated Textile Industry, the Department of Energy (DOE), the national laboratories, other federal agencies and laboratories, and universities. The goal of AMTEX is to strengthen the competitiveness of this vital industry, thereby preserving and creating US jobs. The operations and program management of the AMTEX Partnership™ is provided by the Program Office. This report is produced by the Program Office on a quarterly basis and provides information on the progress, operations, and project management of the partnership. Progress is reported on the following projects: computer-aided fabric evaluation; cotton biotechnology; demand activated manufacturing architecture; electronic embedded fingerprints; on-line process control for flexible fiber manufacturing; rapid cutting; sensors for agile manufacturing; and textile resource conservation.
Chemical decontamination technical resources at Los Alamos National Laboratory (2008)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, Murray E
This document supplies information resources for a person seeking to create planning or pre-planning documents for chemical decontamination operations. A building decontamination plan can be separated into four different sections: Pre-planning, Characterization, Decontamination (initial response and also complete cleanup), and Clearance. The identified Los Alamos resources can be matched with these four sections: Pre-planning -- Dave Seidel, EO-EPP, Emergency Planning and Preparedness; David DeCroix and Bruce Letellier, D-3, Computational fluids modeling of structures; Murray E. Moore, RP-2, Aerosol sampling and ventilation engineering. Characterization (this can include development projects) -- Beth Perry, IAT-3, Nuclear Counterterrorism Response (SNIPER database); Fernando Garzon, MPA-11, Sensors and Electrochemical Devices (development); George Havrilla, C-CDE, Chemical Diagnostics and Engineering; Kristen McCabe, B-7, Biosecurity and Public Health. Decontamination -- Adam Stively, EO-ER, Emergency Response; Dina Matz, IHS-IP, Industrial hygiene; Don Hickmott, EES-6, Chemical cleanup. Clearance (validation) -- Larry Ticknor, CCS-6, Statistical Sciences.
Building local human resources to implement SLMTA with limited donor funding: The Ghana experience
van der Puije, Beatrice; Bekoe, Veronica; Adukpo, Rowland; Kotey, Nii A.; Yao, Katy; Fonjungo, Peter N.; Luman, Elizabeth T.; Duh, Samuel; Njukeng, Patrick A.; Addo, Nii A.; Khan, Fazle N.; Woodfill, Celia J.I.
2014-01-01
Background In 2009, Ghana adopted the Strengthening Laboratory Management Toward Accreditation (SLMTA) programme in order to improve laboratory quality. The programme was implemented successfully with limited donor funding and local human resources. Objectives To demonstrate how Ghana, which received very limited PEPFAR funding, was able to achieve marked quality improvement using local human resources. Method Local partners led the SLMTA implementation and local mentors were embedded in each laboratory. An in-country training-of-trainers workshop was conducted in order to increase the pool of local SLMTA implementers. Three laboratory cohorts were enrolled in SLMTA in 2011, 2012 and 2013. Participants from each cohort attended a series of three workshops interspersed with improvement projects and mentorship. Supplemental training on internal audit was provided. Baseline, exit and follow-up audits were conducted using the Stepwise Laboratory Quality Improvement Process Towards Accreditation (SLIPTA) checklist. In November 2013, four laboratories underwent official SLIPTA audits by the African Society for Laboratory Medicine (ASLM). Results The local SLMTA team successfully implemented three cohorts of SLMTA in 15 laboratories. Seven out of the nine laboratories that underwent follow-up audits reached at least one star. Three out of the four laboratories that underwent official ASLM audits were awarded four stars. Patient satisfaction increased from 25% to 70% and sample rejection rates decreased from 32% to 10%. On average, $40 000 was spent per laboratory to cover mentors’ salaries, SLMTA training and improvement project support. Conclusion Building in-country capacity through local partners is a sustainable model for improving service quality in resource-constrained countries such as Ghana. Such models promote country ownership, capacity building and the use of local human resources for the expansion of SLMTA. PMID:26937417
Bailey, Sarah F; Scheible, Melissa K; Williams, Christopher; Silva, Deborah S B S; Hoggan, Marina; Eichman, Christopher; Faith, Seth A
2017-11-01
Next-generation Sequencing (NGS) is a rapidly evolving technology with demonstrated benefits for forensic genetic applications, and the strategies to analyze and manage the massive NGS datasets are currently in development. Here, the computing, data storage, connectivity, and security resources of the Cloud were evaluated as a model for forensic laboratory systems that produce NGS data. A complete front-to-end Cloud system was developed to upload, process, and interpret raw NGS data using a web browser dashboard. The system was extensible, demonstrating analysis capabilities of autosomal and Y-STRs from a variety of NGS instrumentation (Illumina MiniSeq and MiSeq, and Oxford Nanopore MinION). NGS data for STRs were concordant with standard reference materials previously characterized with capillary electrophoresis and Sanger sequencing. The computing power of the Cloud was implemented with on-demand auto-scaling to allow multiple file analysis in tandem. The system was designed to store resulting data in a relational database, amenable to downstream sample interpretations and databasing applications following the most recent guidelines in nomenclature for sequenced alleles. Lastly, a multi-layered Cloud security architecture was tested and showed that industry standards for securing data and computing resources were readily applied to the NGS system without disadvantageous effects for bioinformatic analysis, connectivity or data storage/retrieval. The results of this study demonstrate the feasibility of using Cloud-based systems for secured NGS data analysis, storage, databasing, and multi-user distributed connectivity. Copyright © 2017 Elsevier B.V. All rights reserved.
Conventional Microscopy vs. Computer Imagery in Chiropractic Education.
Cunningham, Christine M; Larzelere, Elizabeth D; Arar, Ilija
2008-01-01
As human tissue pathology slides become increasingly difficult to obtain, other methods of teaching microscopy in educational laboratories must be considered. The purpose of this study was to evaluate our students' satisfaction with newly implemented computer imagery based laboratory instruction and to obtain input from their perspective on the advantages and disadvantages of computerized vs. traditional microscope laboratories. This undertaking involved the creation of a new computer laboratory. Robbins and Cotran Pathologic Basis of Disease, 7th ed., was chosen as the required text which gave students access to the Robbins Pathology website, including complete content of text, Interactive Case Study Companion, and Virtual Microscope. Students had experience with traditional microscopes in their histology and microbiology laboratory courses. Student satisfaction with computer based learning was assessed using a 28 question survey which was administered to three successive trimesters of pathology students (n=193) using the computer survey website Zoomerang. Answers were given on a scale of 1-5 and statistically analyzed using weighted averages. The survey data indicated that students were satisfied with computer based learning activities during pathology laboratory instruction. The most favorable aspect to computer imagery was 24-7 availability (weighted avg. 4.16), followed by clarification offered by accompanying text and captions (weighted avg. 4.08). Although advantages and disadvantages exist in using conventional microscopy and computer imagery, current pathology teaching environments warrant investigation of replacing traditional microscope exercises with computer applications. Chiropractic students supported the adoption of computer-assisted instruction in pathology laboratories.
Provider-Independent Use of the Cloud
NASA Astrophysics Data System (ADS)
Harmer, Terence; Wright, Peter; Cunningham, Christina; Perrott, Ron
Utility computing offers researchers and businesses the potential of significant cost-savings, making it possible for them to match the cost of their computing and storage to their demand for such resources. A utility compute provider enables the purchase of compute infrastructures on-demand; when a user requires computing resources a provider will provision a resource for them and charge them only for their period of use of that resource. There has been significant growth in the number of cloud computing resource providers, and each has a different resource usage model, application process and application programming interface (API); developing generic multi-provider applications is thus difficult and time-consuming. We have developed an abstraction layer that provides a single resource usage model, user authentication model and API for compute providers that enables cloud-provider-neutral applications to be developed. In this paper we outline the issues in using external resource providers, give examples of using a number of the most popular cloud providers and provide examples of developing provider-neutral applications. In addition, we discuss the development of the API to create a generic provisioning model based on a common architecture for cloud computing providers.
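The shape of such an abstraction layer can be sketched as a common interface with provider-specific implementations behind it. This is a minimal illustration of the design idea; the class names, methods, and prices are invented and are not the authors' actual API:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """One usage model over providers whose native APIs differ (illustrative sketch)."""

    @abstractmethod
    def provision(self, cpus):
        """Acquire a resource; return an opaque handle."""

    @abstractmethod
    def release(self, handle):
        """Give the resource back."""

    @abstractmethod
    def cost(self, handle, hours):
        """Charge only for the period of use."""

class MockProviderA(CloudProvider):
    RATE = 0.10  # invented per-CPU-hour price
    def provision(self, cpus):
        return {"provider": "A", "cpus": cpus}
    def release(self, handle):
        return True
    def cost(self, handle, hours):
        return handle["cpus"] * hours * self.RATE

class MockProviderB(CloudProvider):
    RATE = 0.08  # invented per-CPU-hour price
    def provision(self, cpus):
        return {"provider": "B", "cpus": cpus}
    def release(self, handle):
        return True
    def cost(self, handle, hours):
        return handle["cpus"] * hours * self.RATE

def cheapest(providers, cpus, hours):
    """Provider-neutral application code: quote every provider, keep the cheapest."""
    quotes = [(p.cost(p.provision(cpus), hours), p) for p in providers]
    return min(quotes, key=lambda q: q[0])
```

Because `cheapest()` is written only against the abstract interface, adding a new provider means implementing one adapter class, which is exactly the kind of provider neutrality the abstract describes.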
dV/dt - Accelerating the Rate of Progress towards Extreme Scale Collaborative Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron
This report introduces publications reporting the results of a project that aimed to design a computational framework enabling computational experimentation at scale while supporting the model of “submit locally, compute globally”. The project focused on estimating application resource needs, finding appropriate computing resources, acquiring those resources, deploying applications and data on those resources, and managing applications and resources during execution.
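The "finding the appropriate computing resources" step can be illustrated with a toy matchmaking function, loosely in the spirit of ClassAd-style systems such as HTCondor (with which this project's investigator is associated). The machine "ads", attribute names, and job below are all invented:

```python
# Invented machine advertisements: each describes what a resource offers.
machines = [
    {"name": "local-desktop",  "cpus": 4,  "memory_gb": 16,  "gpu": False},
    {"name": "campus-cluster", "cpus": 64, "memory_gb": 256, "gpu": False},
    {"name": "cloud-gpu",      "cpus": 16, "memory_gb": 64,  "gpu": True},
]

def match(job, machines):
    """Return the machines whose advertised attributes satisfy the job's requirements."""
    return [m for m in machines
            if m["cpus"] >= job["cpus"]
            and m["memory_gb"] >= job["memory_gb"]
            and (m["gpu"] or not job.get("gpu", False))]

job = {"cpus": 8, "memory_gb": 32, "gpu": True}  # submitted "locally"
eligible = match(job, machines)                  # candidates to run "globally"
```

Real matchmaking languages express requirements as evaluable expressions on both sides rather than fixed attribute checks, but the core idea of filtering advertised resources against job requirements is the same.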
Dinov, Ivo D.; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H. V.; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D. Stott; Toga, Arthur W.
2008-01-01
The advancement of the computational biology field hinges on progress in three fundamental directions: the development of new computational algorithms, the availability of informatics resource management infrastructures, and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare, and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal, and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources: data, software tools, and web services. The iTools design, implementation, and resource meta-data content reflect the broad research, computational, applied, and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization, and integration of different computational biology resources across space and time scales, biomedical problems, computational infrastructures, and mathematical foundations. A large number of resources are already iTools-accessible to the community, and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may use these interfaces to search, compare, expand, revise, and mine meta-data descriptions of existing computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources: the first is based on an ontology of computational biology resources, and the second is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project in terms of both its source code development and its meta-data content. iTools employs a decentralized, portable, scalable, and lightweight framework for long-term resource management.
We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu. PMID:18509477
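A toy version of such a resource meta-data repository, limited to the three resource types the abstract names, can show the search-and-compare idea. The schema and functions here are assumptions for illustration, not iTools' real interface:

```python
# Three resource types named in the abstract: data, software tools, web services.
resources = [
    {"name": "genome-db", "type": "data", "keywords": {"genomics", "database"}},
    {"name": "align-tool", "type": "software tool", "keywords": {"alignment", "genomics"}},
    {"name": "blast-ws", "type": "web service", "keywords": {"alignment", "search"}},
]

def search(repo, keyword=None, rtype=None):
    """Query the meta-data repository by keyword and/or resource type."""
    hits = repo
    if rtype is not None:
        hits = [r for r in hits if r["type"] == rtype]
    if keyword is not None:
        hits = [r for r in hits if keyword in r["keywords"]]
    return [r["name"] for r in hits]

def compare(repo, a, b):
    """Compare two resources by the keywords they share."""
    ka = next(r["keywords"] for r in repo if r["name"] == a)
    kb = next(r["keywords"] for r in repo if r["name"] == b)
    return ka & kb

print(search(resources, keyword="genomics"))          # both genomics resources
print(compare(resources, "align-tool", "blast-ws"))   # shared keyword set
```

An ontology-backed browser, as the abstract describes, would replace the flat keyword sets with terms from a controlled vocabulary, but the query pattern is the same.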
TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling
NASA Astrophysics Data System (ADS)
Nelson, J.; Jones, N.; Ames, D. P.
2015-12-01
Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive, they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet a significant technical barrier to leveraging these resources remains. This barrier inhibits many decision makers, and even trained engineers, from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, which have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage HTCondor, the open source computing-resource and job management software, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
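The queue/dispatch/monitor cycle enumerated above can be illustrated with a toy scheduler. This sketch only mimics the pattern; it is not CondorPy's or HTCondor's actual API, and all names are hypothetical:

```python
from collections import deque

class ToyScheduler:
    """Toy batch scheduler: queue jobs, dispatch them to free slots,
    and track job states -- the cycle tools like CondorPy automate."""
    def __init__(self, slots):
        self.free_slots = slots
        self.queue = deque()
        self.state = {}

    def submit(self, job_id, inputs):
        self.queue.append((job_id, inputs))
        self.state[job_id] = "idle"        # queued, waiting for a slot

    def dispatch(self):
        while self.free_slots > 0 and self.queue:
            job_id, inputs = self.queue.popleft()
            self.free_slots -= 1
            self.state[job_id] = "running"

    def complete(self, job_id, outputs):
        self.state[job_id] = "done"
        self.free_slots += 1
        return outputs                      # stage outputs back to the user

sched = ToyScheduler(slots=1)
sched.submit("model-run-1", inputs=["terrain.tif"])
sched.submit("model-run-2", inputs=["rain.nc"])
sched.dispatch()
print(sched.state)  # model-run-1 running, model-run-2 still idle
```

In the real system the slots are dynamically provisioned cloud instances, so the scheduler can grow the pool instead of leaving jobs idle.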
Real Time Data in Synoptic Meteorology and Weather Forecasting Education
NASA Astrophysics Data System (ADS)
Campetella, C. M.; Gassmann, M. I.
2006-05-01
The Department of Atmospheric and Oceanographic Sciences (DAOS) of the University of Buenos Aires is the university component of the World Meteorological Organization (WMO) Regional Meteorological Training Center (RMTC) in Region III. In January, 2002 our RMTC was invited to take part in the MeteoForum pilot project that was developed jointly by the COMET and Unidata programs of the University Corporation for Atmospheric Research (UCAR). MeteoForum comprises an international network of WMO Region III and IV RMTCs working collaboratively with universities to enhance their roles of training and education through information technologies and multilingual collections of resources. The DAOS undertook to improve its infrastructure to be able to access hydro-meteorological information in real-time as part of the Unidata community. In 2003, the DAOS received some Unidata equipment grant funds to update its computer infrastructure, improving communications with an operationally quicker system. Departmental networking was upgraded to 100 Mb/s capability while, at the same time, new computation resources were purchased that increased the number of computers available for student use from 5 to 8. This upgrade has also resulted in more and better computers being available for student and faculty research. A video projection system, purchased with funds provided by the COMET program as part of Meteoforum, is used in classrooms with Internet connections for a variety of educational activities. The upgraded computing and networking facilities have contributed to the development of educational modules using real-time hydro-meteorological and other digital data for the classroom. 
With the aid of Unidata personnel, the Unidata Local Data Manager (LDM) software was installed and configured to request and process real-time feeds of global observational data; global numerical model output from the US National Centers for Environmental Prediction (NCEP) models; and all imager channels from GOES-12 via the Unidata Internet Data Distribution (IDD) system. The data now being routinely received have impacted not only the meteorological education in the DAOS, but have also been instructive in techniques for Internet-based data sharing for our students. The DAOS has made a substantial effort to provide undergraduate students with experience in manipulating, displaying, and analyzing weather data in real time through interactive displays using visualization tools provided by Unidata. Two of the specific courses whose curricula have been improved are synoptic meteorology and a laboratory on weather prediction. Some laboratory materials have been developed to reflect current data as applied to the lecture material. This talk will briefly describe the data compiled and the fields used to analyze an intense cyclogenesis event that occurred over the La Plata River in August 2005. This event was used as a case study for discussions in the Synoptic Weather Laboratory course of the Atmospheric Sciences Licentiate degree.
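Real-time feed requests of this kind are expressed as REQUEST entries in the LDM's `ldmd.conf`. The fragment below is a hedged illustration only: the feed types and upstream host must match a site's actual IDD arrangement, which the abstract does not specify.

```text
# Global observational data (text bulletins and observations)
REQUEST IDS|DDPLUS  ".*"  idd.unidata.ucar.edu
# NCEP numerical model output
REQUEST CONDUIT     ".*"  idd.unidata.ucar.edu
# Satellite imagery (feed name varies by era and site arrangement)
REQUEST NIMAGE      ".*"  idd.unidata.ucar.edu
```

Each entry names a feed type, a regular-expression filter over product identifiers, and the upstream host from which the feed is requested.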
Implementation of a World Wide Web server for the oil and gas industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaylock, R.E.; Martin, F.D.; Emery, R.
1995-12-31
The Gas and Oil Technology Exchange and Communication Highway (GO-TECH) provides an electronic information system for the petroleum community for the purpose of exchanging ideas, data, and technology. The personal-computer-based system fosters communication and discussion by linking oil and gas producers with resource centers, government agencies, consulting firms, service companies, national laboratories, academic research groups, and universities throughout the world. The oil and gas producers are provided access to the GO-TECH World Wide Web home page via modem links as well as the Internet. Future GO-TECH applications will include the establishment of "virtual corporations" consisting of consortiums of small companies, consultants, and service companies linked by electronic information systems. These virtual corporations will have the resources and expertise previously found only in major corporations.
A Responsive Client for Distributed Visualization
NASA Astrophysics Data System (ADS)
Bollig, E. F.; Jensen, P. A.; Erlebacher, G.; Yuen, D. A.; Momsen, A. R.
2006-12-01
As grids, web services and distributed computing continue to gain popularity in the scientific community, demand for virtual laboratories likewise increases. Today organizations such as the Virtual Laboratory for Earth and Planetary Sciences (VLab) are dedicated to developing web-based portals that perform various simulations remotely while abstracting away details of the underlying computation. Two of the biggest challenges in portal-based computing are fast visualization and smooth interrogation without overtaxing client resources. In response to this challenge, we have expanded on our previous data storage strategy and thick-client visualization scheme [1] to develop a client-centric distributed application that utilizes remote visualization of large datasets and makes use of the local graphics processor for improved interactivity. Rather than waste precious client resources on visualization, a combination of 3D graphics and 2D server bitmaps is used to simulate the look and feel of local rendering. Java Web Start and Java Bindings for OpenGL enable install-on-demand functionality as well as low-level access to client graphics on all platforms. Powerful visualization services based on VTK and auto-generated by the WATT compiler [2] are accessible through a standard web API. Data is permanently stored on compute nodes while separate visualization nodes fetch data requested by clients, caching it locally to prevent unnecessary transfers. We will demonstrate application capabilities in the context of simulated charge density visualization within the VLab portal. In addition, we will address generalizations of our application to interact with a wider range of WATT services, and performance bottlenecks.
[1] Ananthuni, R., Karki, B.B., Bollig, E.F., da Silva, C.R.S., Erlebacher, G., "A Web-Based Visualization and Reposition Scheme for Scientific Data," In Press, Proceedings of the 2006 International Conference on Modeling Simulation and Visualization Methods (MSV'06) (2006). [2] Jensen, P.A., Yuen, D.A., Erlebacher, G., Bollig, E.F., Kigelman, D.G., Shukh, E.A., Automated Generation of Web Services for Visualization Toolkits, Eos Trans. AGU, 86(52), Fall Meet. Suppl., Abstract IN42A-06, 2005.
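The caching strategy described for visualization nodes (fetch on demand from the compute node's permanent store, keep a local copy so repeated client requests avoid unnecessary transfers) can be sketched as follows; the class and dataset names are illustrative, not the system's real components:

```python
class VisualizationNode:
    """Visualization node that fetches datasets from a compute node on
    demand and caches them locally; only cache misses cause a transfer."""
    def __init__(self, fetch_from_compute_node):
        self._fetch = fetch_from_compute_node
        self._cache = {}
        self.transfers = 0

    def get(self, dataset_id):
        if dataset_id not in self._cache:
            self._cache[dataset_id] = self._fetch(dataset_id)
            self.transfers += 1  # count trips to the compute node
        return self._cache[dataset_id]

def compute_node_store(dataset_id):
    # Stand-in for the permanent store on the compute node.
    return f"<bytes of {dataset_id}>"

node = VisualizationNode(compute_node_store)
node.get("charge-density-0")
node.get("charge-density-0")  # second request served from the local cache
print(node.transfers)  # 1
```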
Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility
NASA Astrophysics Data System (ADS)
Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.
2014-12-01
The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world, and the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation toward earth sciences. The INCITE competition is also open to research scientists based outside the USA. In fact, international research projects account for 12% of the INCITE awards in 2014. The INCITE scientific review panel also includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of earth's climate history (2009) and an unprecedented simulation of a magnitude-8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU-funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current-generation Petascale-capable simulation codes towards the performance levels required for running on future Exascale systems.
One of the techniques pursued by ECMWF is to use Fortran 2008 coarrays to overlap computations and communications and to reduce the total volume of data communicated. Use of Titan has enabled ECMWF to plan future scalability developments and resource requirements. We will also discuss the best practices developed over the years in navigating the logistical, legal and regulatory hurdles involved in supporting the facility's diverse user community.
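The overlap technique can be sketched with a thread standing in for the coarray transfer. The real IFS work uses Fortran 2008 coarrays; this Python analogy only shows the pattern of starting communication, computing on local data meanwhile, and synchronizing just before the remote data is needed:

```python
import threading
import time

def halo_exchange(buf, result):
    # Stand-in for communication: in IFS this is a coarray transfer.
    time.sleep(0.2)
    result["halo"] = list(buf)

def overlapped_step(interior, halo_buf):
    """Start the halo exchange, compute on interior points while it is in
    flight, then finish with the freshly arrived halo -- the overlap pattern."""
    received = {}
    comm = threading.Thread(target=halo_exchange, args=(halo_buf, received))
    comm.start()                               # communication in flight...
    partial = sum(x * x for x in interior)     # ...while we compute locally
    comm.join()                                # block only when halo is needed
    return partial + sum(received["halo"])

print(overlapped_step([1, 2, 3], [10, 20]))  # 14 + 30 = 44
```

The saving comes from the communication time being hidden behind the interior computation rather than added to it.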
30 CFR 6.10 - Use of independent laboratories.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PRODUCT SAFETY STANDARDS § 6.10 Use of independent laboratories. (a) MSHA will accept testing and... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of independent laboratories. 6.10 Section 6.10 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING, EVALUATION...
30 CFR 14.21 - Laboratory-scale flame test apparatus.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Laboratory-scale flame test apparatus. 14.21 Section 14.21 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING... Technical Requirements § 14.21 Laboratory-scale flame test apparatus. The principal parts of the apparatus...
30 CFR 14.21 - Laboratory-scale flame test apparatus.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Laboratory-scale flame test apparatus. 14.21 Section 14.21 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING... Technical Requirements § 14.21 Laboratory-scale flame test apparatus. The principal parts of the apparatus...
Ammonia Oxidation by Abstraction of Three Hydrogen Atoms from a Mo–NH3 Complex
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharya, Papri; Heiden, Zachariah M.; Wiedner, Eric S.
We report ammonia oxidation by homolytic cleavage of all three H atoms from a Mo-15NH3 complex using the 2,4,6-tri-tert-butylphenoxyl radical to afford a Mo-alkylimido (Mo=15NR) complex (R = 2,4,6-tri-t-butylcyclohexa-2,5-dien-1-one). Reductive cleavage of Mo=15NR generates a terminal Mo≡N nitride, and a [Mo-15NH]+ complex is formed by protonation. Computational analysis describes the energetic profile for the stepwise removal of three H atoms from the Mo-15NH3 complex and the formation of Mo=15NR. Acknowledgment. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences. EPR and mass spectrometry experiments were performed using EMSL, a national scientific user facility sponsored by the DOE's Office of Biological and Environmental Research and located at PNNL. The authors thank Dr. Eric D. Walter and Dr. Rosalie Chu for assistance in performing the EPR and mass spectrometry analyses, respectively. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. DOE.
Requirements for a network storage service
NASA Technical Reports Server (NTRS)
Kelly, Suzanne M.; Haynes, Rena A.
1992-01-01
Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), was designed in 1989 and comprises multiple distributed local area networks (LANs) residing in Albuquerque, New Mexico and Livermore, California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LANs. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS) developed by Los Alamos National Laboratory. Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS), and its requirements are described in this paper. The next section gives an application or functional description of the NSS. The final section adds performance, capacity, and access constraints to the requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sobolik, Steven R.; Hadgu, Teklu; Rechard, Robert P.
The Bureau of Land Management (BLM), US Department of the Interior, has asked Sandia National Laboratories (SNL) to perform scientific studies relevant to technical issues that arise in the development of co-located resources of potash and petroleum in southeastern New Mexico in the Secretary's Potash Area. The BLM manages resource development, issues permits, and interacts with the State of New Mexico in the process of developing regulations, in an environment where many issues are disputed by industry stakeholders. The present report is a deliverable of the study of the potential for gas migration from a wellbore to a mine opening in the event of wellbore leakage, a risk scenario about which there is disagreement among stakeholders and little previous site-specific analysis. One goal of this study was to develop a framework that required collaboratively developed inputs and analytical approaches in order to encourage stakeholder participation and to employ ranges of data values and scenarios. SNL presents here a description of a basic risk assessment (RA) framework that will fulfill the initial steps of meeting that goal. SNL used the gas migration problem to set up example conceptual models, parameter sets and computer models, and as a foundation for future development of RA to support BLM resource development.
Yeh, Kenneth B; Adams, Martin; Stamper, Paul D; Dasgupta, Debanjana; Hewson, Roger; Buck, Charles D; Richards, Allen L; Hay, John
2016-01-01
Strategic laboratory planning in limited-resource areas is essential for addressing global health security issues. Establishing a national reference laboratory, especially one with BSL-3 or -4 biocontainment facilities, requires a heavy investment of resources, a multisectoral approach, and commitments from multiple stakeholders. We make the case for donor organizations and recipient partners to develop a comprehensive laboratory operations roadmap that addresses factors such as mission and roles, engaging national and political support, securing financial support, defining stakeholder involvement, fostering partnerships, and building trust. Successful development occurred with projects in African countries and in Azerbaijan, where strong leadership and a clear management framework have been key to success. A clearly identified and agreed management framework facilitates identifying the responsibility for developing laboratory capabilities and support services, including biosafety and biosecurity, quality assurance, equipment maintenance, supply chain establishment, staff certification and training, retention of human resources, and sustainable operating revenue. These capabilities and support services pose rate-limiting yet necessary challenges. Laboratory capabilities depend on mission and role, as determined by all stakeholders, and demonstrate the need for relevant metrics to monitor the success of the laboratory, including support for internal and external audits. Our analysis concludes that alternative frameworks for success exist for developing and implementing capabilities at regional and national levels in limited-resource areas. Thus, achieving a balance in standardizing practices between local procedures and accepted international standards is a prerequisite for integrating new facilities into a country's existing public health infrastructure and into the overall international scientific community.
Idaho National Laboratory Cultural Resource Management Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowrey, Diana Lee
2009-02-01
As a federal agency, the U.S. Department of Energy has been directed by Congress, the U.S. president, and the American public to provide leadership in the preservation of prehistoric, historic, and other cultural resources on the lands it administers. This mandate to preserve cultural resources in a spirit of stewardship for the future is outlined in various federal preservation laws, regulations, and guidelines such as the National Historic Preservation Act, the Archaeological Resources Protection Act, and the National Environmental Policy Act. The purpose of this Cultural Resource Management Plan is to describe how the Department of Energy, Idaho Operations Office will meet these responsibilities at the Idaho National Laboratory. This Laboratory, which is located in southeastern Idaho, is home to a wide variety of important cultural resources representing at least 13,500 years of human occupation in the southeastern Idaho area. These resources are nonrenewable; bear valuable physical and intangible legacies; and yield important information about the past, present, and perhaps the future. There are special challenges associated with balancing the preservation of these sites with the management and ongoing operation of an active scientific laboratory. The Department of Energy, Idaho Operations Office is committed to a cultural resource management program that accepts these challenges in a manner reflecting both the spirit and intent of the legislative mandates. This document is designed for multiple uses and is intended to be flexible and responsive to future changes in law or mission. Document flexibility and responsiveness will be assured through annual reviews and as-needed updates. Document content includes summaries of Laboratory cultural resource philosophy and overall Department of Energy policy; brief contextual overviews of Laboratory missions, environment, and cultural history; and an overview of cultural resource management practices. A series of appendices provides important details that support the main text.
BioVeL: a virtual laboratory for data analysis and modelling in biodiversity science and ecology.
Hardisty, Alex R; Bacall, Finn; Beard, Niall; Balcázar-Vargas, Maria-Paula; Balech, Bachir; Barcza, Zoltán; Bourlat, Sarah J; De Giovanni, Renato; de Jong, Yde; De Leo, Francesca; Dobor, Laura; Donvito, Giacinto; Fellows, Donal; Guerra, Antonio Fernandez; Ferreira, Nuno; Fetyukova, Yuliya; Fosso, Bruno; Giddy, Jonathan; Goble, Carole; Güntsch, Anton; Haines, Robert; Ernst, Vera Hernández; Hettling, Hannes; Hidy, Dóra; Horváth, Ferenc; Ittzés, Dóra; Ittzés, Péter; Jones, Andrew; Kottmann, Renzo; Kulawik, Robert; Leidenberger, Sonja; Lyytikäinen-Saarenmaa, Päivi; Mathew, Cherian; Morrison, Norman; Nenadic, Aleksandra; de la Hidalga, Abraham Nieva; Obst, Matthias; Oostermeijer, Gerard; Paymal, Elisabeth; Pesole, Graziano; Pinto, Salvatore; Poigné, Axel; Fernandez, Francisco Quevedo; Santamaria, Monica; Saarenmaa, Hannu; Sipos, Gergely; Sylla, Karl-Heinz; Tähtinen, Marko; Vicario, Saverio; Vos, Rutger Aldo; Williams, Alan R; Yilmaz, Pelin
2016-10-20
Making forecasts about biodiversity and giving support to policy relies increasingly on large collections of data held electronically, and on substantial computational capability and capacity to analyse, model, simulate and predict using such data. However, the physically distributed nature of data resources and of expertise in advanced analytical tools creates many challenges for the modern scientist. Across the wider biological sciences, presenting such capabilities on the Internet (as "Web services") and using scientific workflow systems to compose them for particular tasks is a practical way to carry out robust "in silico" science. However, use of this approach in biodiversity science and ecology has thus far been quite limited. BioVeL is a virtual laboratory for data analysis and modelling in biodiversity science and ecology, freely accessible via the Internet. BioVeL includes functions for accessing and analysing data through curated Web services; for performing complex in silico analysis through exposure of R programs, workflows, and batch processing functions; for on-line collaboration through sharing of workflows and workflow runs; for experiment documentation through reproducibility and repeatability; and for computational support via seamless connections to supporting computing infrastructures. We developed and improved more than 60 Web services with significant potential in many different kinds of data analysis and modelling tasks. We composed reusable workflows using these Web services, also incorporating R programs. Deploying these tools into an easy-to-use and accessible 'virtual laboratory', free via the Internet, we applied the workflows in several diverse case studies. We opened the virtual laboratory for public use and through a programme of external engagement we actively encouraged scientists and third party application and tool developers to try out the services and contribute to the activity. 
Our work shows we can deliver an operational, scalable and flexible Internet-based virtual laboratory to meet new demands for data processing and analysis in biodiversity science and ecology. In particular, we have successfully integrated existing and popular tools and practices from different scientific disciplines to be used in biodiversity and ecological research.
A Software Laboratory Environment for Computer-Based Problem Solving.
ERIC Educational Resources Information Center
Kurtz, Barry L.; O'Neal, Micheal B.
This paper describes a National Science Foundation-sponsored project at Louisiana Technological University to develop computer-based laboratories for "hands-on" introductions to major topics of computer science. The underlying strategy is to develop structured laboratory environments that present abstract concepts through the use of…
Laboratory Directed Research and Development FY2010 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, K J
2011-03-22
A premier applied-science laboratory, Lawrence Livermore National Laboratory (LLNL) has at its core a primary national security mission - to ensure the safety, security, and reliability of the nation's nuclear weapons stockpile without nuclear testing, and to prevent and counter the spread and use of weapons of mass destruction: nuclear, chemical, and biological. The Laboratory uses the scientific and engineering expertise and facilities developed for its primary mission to pursue advanced technologies to meet other important national security needs - homeland defense, military operations, and missile defense, for example - that evolve in response to emerging threats. For broader national needs, LLNL executes programs in energy security, climate change and long-term energy needs, environmental assessment and management, bioscience and technology to improve human health, and breakthroughs in fundamental science and technology. With this multidisciplinary expertise, the Laboratory serves as a science and technology resource to the U.S. government and as a partner with industry and academia. This annual report discusses the following topics: (1) Advanced Sensors and Instrumentation; (2) Biological Sciences; (3) Chemistry; (4) Earth and Space Sciences; (5) Energy Supply and Use; (6) Engineering and Manufacturing Processes; (7) Materials Science and Technology; (8) Mathematics and Computing Science; (9) Nuclear Science and Engineering; and (10) Physics.
NASA Astrophysics Data System (ADS)
Onuoha, Cajetan O.
The purpose of this research study was to determine the overall effectiveness of computer-based laboratories compared with traditional hands-on laboratories for improving students' science academic achievement and attitudes towards science subjects at the college and pre-college levels of education in the United States. Meta-analysis was used to synthesize the findings from 38 primary research studies conducted and/or reported in the United States between 1996 and 2006 that compared the effectiveness of computer-based laboratories with traditional hands-on laboratories on measures related to science academic achievement and attitudes towards science subjects. The 38 primary research studies, with a total of 3,824 subjects, generated 67 weighted individual effect sizes that were used in this meta-analysis. The study found that computer-based laboratories had a small positive effect over traditional hands-on laboratories on measures of science academic achievement (ES = +0.26) and attitudes towards science subjects (ES = +0.22). It was also found that computer-based laboratories produced larger effects in physical science subjects than in biological sciences (ES = +0.34 vs. +0.17).
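The pooling behind reported overall values like these is typically an inverse-variance weighted mean of the individual effect sizes. A minimal sketch, using made-up study values rather than the study's actual 67 effect sizes:

```python
def pooled_effect(effects, variances):
    """Fixed-effect pooled effect size: inverse-variance weighted mean.
    Studies with smaller variance (more precision) get more weight."""
    weights = [1.0 / v for v in variances]
    return sum(w * es for w, es in zip(weights, effects)) / sum(weights)

# Illustrative values only -- three hypothetical studies.
effects = [0.40, 0.10, 0.30]
variances = [0.04, 0.02, 0.08]
print(round(pooled_effect(effects, variances), 3))
```

Note how the pooled value is pulled toward the most precise study (ES = +0.10, variance 0.02) rather than the simple average of the three effects.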
NASA Technical Reports Server (NTRS)
Thomas, V. C.
1986-01-01
A Vibroacoustic Data Base Management Center has been established at the Jet Propulsion Laboratory (JPL). The center utilizes the Vibroacoustic Payload Environment Prediction System (VAPEPS) software package to manage a data base of shuttle and expendable launch vehicle flight and ground test data. Remote terminal access over telephone lines to a dedicated VAPEPS computer system has been established to provide the payload community a convenient means of querying the global VAPEPS data base. This guide describes the functions of the JPL Data Base Management Center and contains instructions for utilizing the resources of the center.
Investigations in Computer-Aided Instruction and Computer-Aided Controls. Final Report.
ERIC Educational Resources Information Center
Rosenberg, R.C.; And Others
These research projects, designed to delve into certain relationships between humans and computers, are focused on computer-assisted instruction and on man-computer interaction. One study demonstrates that within the limits of formal engineering theory, a computer simulated laboratory (Dynamic Systems Laboratory) can be built in which freshmen…
Implementation of quality management for clinical bacteriology in low-resource settings.
Barbé, B; Yansouni, C P; Affolabi, D; Jacobs, J
2017-07-01
The declining trend of malaria and the recent prioritization of containment of antimicrobial resistance have created momentum to implement clinical bacteriology in low-resource settings. Successful implementation relies on guidance by a quality management system (QMS). Over the past decade, international initiatives were launched towards implementation of QMS in HIV/AIDS, tuberculosis and malaria. Here we describe the progress towards accreditation of medical laboratories and identify the challenges and best practices for implementation of QMS in clinical bacteriology in low-resource settings, drawing on published literature, online reports and websites related to the implementation of laboratory QMS, accreditation of medical laboratories and initiatives for containment of antimicrobial resistance. Apart from the limitations of infrastructure, equipment, consumables and staff, QMS are challenged by the complexity of clinical bacteriology and the healthcare context in low-resource settings (small-scale laboratories, attitudes and perceptions of staff, absence of laboratory information systems). Likewise, most international initiatives addressing laboratory health strengthening have focused on public health and outbreak management rather than on hospital-based patient care. Best practices to implement quality-assured clinical bacteriology in low-resource settings include alignment with national regulations and public health reference laboratories, participation in external quality assurance programmes, support from the hospital's management, starting with attainable projects, conducting error review and daily bench-side supervision, looking for locally adapted solutions, stimulating ownership, and extending existing training programmes to clinical bacteriology. The implementation of QMS in clinical bacteriology in hospital settings will ultimately extend a culture of quality to all sectors of healthcare in low-resource settings.
Determination of Absolute Zero Using a Computer-Based Laboratory
ERIC Educational Resources Information Center
Amrani, D.
2007-01-01
We present a simple computer-based laboratory experiment for evaluating absolute zero in degrees Celsius, which can be performed in college and undergraduate physical sciences laboratory courses. With a computer, absolute zero apparatus can help demonstrators or students to observe the relationship between temperature and pressure and use…
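The extrapolation at the heart of such an experiment can be sketched in a few lines: fit pressure against temperature in degrees Celsius and find where the line crosses zero pressure. The readings below are synthetic ideal-gas values, not data from the apparatus described.

```python
# Sketch of the constant-volume gas-thermometer extrapolation: fit P vs. T
# (degrees C) with least squares and extrapolate to P = 0. The pressure
# readings are synthetic, generated from the ideal-gas relation.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

temps_c = [0, 20, 40, 60, 80, 100]                        # thermometer readings
press = [101.3 * (t + 273.15) / 273.15 for t in temps_c]  # kPa, ideal gas
slope, intercept = linear_fit(temps_c, press)
absolute_zero = -intercept / slope   # temperature where P extrapolates to 0
print(round(absolute_zero, 2))       # -273.15
```

With real apparatus data the points scatter about the line, and the quality of the fit gives students a direct feel for measurement uncertainty in the extrapolated intercept.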
Institute for scientific computing research;fiscal year 1999 annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keyes, D
2000-03-28
Large-scale scientific computation, and all of the disciplines that support it and help to validate it, have been placed at the focus of Lawrence Livermore National Laboratory by the Accelerated Strategic Computing Initiative (ASCI). The Laboratory operates the computer with the highest peak performance in the world and has undertaken some of the largest and most compute-intensive simulations ever performed. Computers at the architectural extremes, however, are notoriously difficult to use efficiently. Even such successes as the Laboratory's two Bell Prizes awarded in November 1999 only emphasize the need for much better ways of interacting with the results of large-scale simulations. Advances in scientific computing research have, therefore, never been more vital to the core missions of the Laboratory than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, the Laboratory must engage researchers at many academic centers of excellence. In FY 1999, the Institute for Scientific Computing Research (ISCR) has expanded the Laboratory's bridge to the academic community in the form of collaborative subcontracts, visiting faculty, student internships, a workshop, and a very active seminar series. ISCR research participants are integrated almost seamlessly with the Laboratory's Center for Applied Scientific Computing (CASC), which, in turn, addresses computational challenges arising throughout the Laboratory. Administratively, the ISCR flourishes under the Laboratory's University Relations Program (URP). Together with the other four Institutes of the URP, it must navigate a course that allows the Laboratory to benefit from academic exchanges while preserving national security.
Although FY 1999 brought more than its share of challenges to the operation of an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and well worth the continued effort. A change of administration for the ISCR occurred during FY 1999. Acting Director John Fitzgerald retired from LLNL in August after 35 years of service, including the last two at the helm of the ISCR. David Keyes, who has been a regular visitor in conjunction with ASCI scalable algorithms research since October 1997, overlapped with John for three months and serves half-time as the new Acting Director.
Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources
NASA Astrophysics Data System (ADS)
Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.
2011-12-01
Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand", as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and conclude that it is most cost-effective to purchase dedicated resources for the "base-line" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage occur.
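The conclusion about dedicated "base-line" capacity plus cloud bursting can be illustrated with a toy break-even calculation. Every price and lifetime below is a hypothetical placeholder, not a figure from the study.

```python
# Toy break-even comparison between on-demand cloud nodes and a purchased
# dedicated node, in the spirit of the paper's cost analysis. All numbers
# are hypothetical placeholders.

def breakeven_utilization(hourly_cloud, node_capex, lifetime_hours, hourly_opex):
    """Utilization fraction above which owning a dedicated node is cheaper
    than renting an equivalent on-demand cloud node."""
    dedicated_per_hour = node_capex / lifetime_hours + hourly_opex
    return dedicated_per_hour / hourly_cloud

u = breakeven_utilization(hourly_cloud=0.40,       # $/hour on demand
                          node_capex=6000.0,       # purchase price, $
                          lifetime_hours=3 * 8760, # 3-year service life
                          hourly_opex=0.05)        # power/admin, $/hour
```

Under these assumed numbers the crossover sits around 70% utilization: steady base-line load favors owned hardware, while bursty load below the crossover favors renting, which is exactly the hybrid strategy the abstract recommends.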
Computer-Aided Experiment Planning toward Causal Discovery in Neuroscience.
Matiasz, Nicholas J; Wood, Justin; Wang, Wei; Silva, Alcino J; Hsu, William
2017-01-01
Computers help neuroscientists to analyze experimental results by automating the application of statistics; however, computer-aided experiment planning is far less common, due to a lack of similar quantitative formalisms for systematically assessing evidence and uncertainty. While ontologies and other Semantic Web resources help neuroscientists to assimilate required domain knowledge, experiment planning requires not only ontological but also epistemological (e.g., methodological) information regarding how knowledge was obtained. Here, we outline how epistemological principles and graphical representations of causality can be used to formalize experiment planning toward causal discovery. We outline two complementary approaches to experiment planning: one that quantifies evidence per the principles of convergence and consistency, and another that quantifies uncertainty using logical representations of constraints on causal structure. These approaches operationalize experiment planning as the search for an experiment that either maximizes evidence or minimizes uncertainty. Despite work in laboratory automation, humans must still plan experiments and will likely continue to do so for some time. There is thus a great need for experiment-planning frameworks that are not only amenable to machine computation but also useful as aids in human reasoning.
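The "minimize uncertainty" criterion described above can be sketched as a search over candidate causal structures: pick the intervention whose worst-case outcome leaves the fewest structures consistent with the evidence. The three-variable setup and the outcome model (direct effects only) are invented simplifications for illustration, not the authors' formalism.

```python
# Hedged sketch of experiment planning as uncertainty minimization:
# enumerate causal structures still consistent with prior constraints, then
# choose the intervention that minimizes the worst-case number of
# structures remaining after observing its outcome.

# Candidate structures: possible direct-cause edge sets over nodes A, B, C.
structures = [frozenset(),
              frozenset({("A", "B")}),
              frozenset({("A", "B"), ("B", "C")}),
              frozenset({("A", "C")})]

def outcome(structure, intervention):
    """Predicted result of intervening on a node: the set of nodes that
    respond directly (a deliberate simplification)."""
    return frozenset(dst for src, dst in structure if src == intervention)

def best_intervention(structures, nodes=("A", "B", "C")):
    def worst_case(node):
        # Group structures by predicted outcome; the worst case after the
        # experiment is the largest group left standing.
        groups = {}
        for s in structures:
            groups.setdefault(outcome(s, node), []).append(s)
        return max(len(g) for g in groups.values())
    return min(nodes, key=worst_case)
```

Here intervening on A splits the four candidates into groups of at most two, whereas intervening on C distinguishes nothing, so the planner selects A.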
Statistics Online Computational Resource for Education
ERIC Educational Resources Information Center
Dinov, Ivo D.; Christou, Nicolas
2009-01-01
The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)
Laboratory challenges conducting international clinical research in resource-limited settings.
Fitzgibbon, Joseph E; Wallis, Carole L
2014-01-01
There are many challenges to performing clinical research in resource-limited settings. Here, we discuss several of the most common laboratory issues that must be addressed. These include issues relating to organization and personnel, laboratory facilities and equipment, standard operating procedures, external quality assurance, shipping, laboratory capacity, and data management. Although much progress has been made, innovative ways of addressing some of these issues are still very much needed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-20
... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] In the Matter of: BP International, Inc., CyGene Laboratories, Inc., Delek Resources, Inc., Flooring America, Inc., International Diversified... there is a lack of current and accurate information concerning the securities of CyGene Laboratories...
Environmental Resource Management Issues in Agronomy: A Lecture/Laboratory Course
ERIC Educational Resources Information Center
Munn, D. A.
2004-01-01
Environmental Sciences Technology T272 is a course with a laboratory addressing problems in soil and water quality and organic wastes utilization to serve students from associate degree programs in laboratory science and environmental resources management at a 2-year technical college. Goals are to build basic lab skills and understand the role…
An Architecture for Cross-Cloud System Management
NASA Astrophysics Data System (ADS)
Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad
The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.
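The homogenizing layer such an architecture implies can be sketched as a single management interface with provider-specific drivers behind it. The class and method names below are invented for illustration; real drivers would wrap each provider's own API or SDK.

```python
# Minimal adapter-style sketch of cross-cloud management: tooling programs
# against one interface, and each provider supplies a driver. Names here
# are illustrative placeholders, not an actual cloud SDK.

from abc import ABC, abstractmethod

class ComputeProvider(ABC):
    """Uniform interface that cross-cloud management code programs against."""
    @abstractmethod
    def launch(self, image: str) -> str: ...
    @abstractmethod
    def list_instances(self) -> list: ...
    @abstractmethod
    def terminate(self, instance_id: str) -> None: ...

class InMemoryProvider(ComputeProvider):
    """Stand-in driver; a real one would call a provider API such as EC2's."""
    def __init__(self):
        self._running, self._next = {}, 0
    def launch(self, image):
        self._next += 1
        iid = "i-%04d" % self._next
        self._running[iid] = image
        return iid
    def list_instances(self):
        return sorted(self._running)
    def terminate(self, instance_id):
        self._running.pop(instance_id, None)

def drain(provider: ComputeProvider) -> int:
    """Provider-agnostic management logic: terminate everything, return count."""
    ids = provider.list_instances()
    for iid in ids:
        provider.terminate(iid)
    return len(ids)
```

Because `drain` depends only on the abstract interface, the same management logic runs unchanged against any provider for which a driver exists, which is the property the paper's architecture aims to preserve across clouds.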
Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious
ERIC Educational Resources Information Center
Cirasella, Jill
2009-01-01
This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vigil,Benny Manuel; Ballance, Robert; Haskell, Karen
Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.
2008-05-04
This work presents the ScalaBLAST Web Application (SWA), a web-based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web-based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.
Papež, Václav; Denaxas, Spiros; Hemingway, Harry
2017-01-01
Electronic Health Records are electronic data generated during or as a byproduct of routine patient care. Structured, semi-structured and unstructured EHR offer researchers unprecedented phenotypic breadth and depth and have the potential to accelerate the development of precision medicine approaches at scale. A main EHR use-case is defining phenotyping algorithms that identify disease status, onset and severity. Phenotyping algorithms utilize diagnoses, prescriptions, laboratory tests, symptoms and other elements in order to identify patients with or without a specific trait. No common standardized, structured, computable format exists for storing phenotyping algorithms. The majority of algorithms are stored as human-readable descriptive text documents making their translation to code challenging due to their inherent complexity and hinders their sharing and re-use across the community. In this paper, we evaluate the two key Semantic Web Technologies, the Web Ontology Language and the Resource Description Framework, for enabling computable representations of EHR-driven phenotyping algorithms.
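The idea of a computable phenotyping algorithm can be illustrated with RDF-style triples. To keep the example self-contained, the graph is represented as plain Python tuples rather than an RDF library, and the vocabulary URIs and the diabetes criterion are invented placeholders, not a published phenotype definition.

```python
# Sketch of a phenotyping rule stored as machine-readable subject-
# predicate-object triples, in the spirit of RDF. URIs and codes below are
# illustrative placeholders.

EX = "http://example.org/pheno#"

triples = {
    (EX + "T2DM_algo", EX + "hasCriterion", EX + "c1"),
    (EX + "c1", EX + "codeSystem", "ICD-10"),
    (EX + "c1", EX + "code", "E11"),
    (EX + "c1", EX + "minOccurrences", "2"),
}

def criteria_for(algo, graph):
    """Walk the graph: collect (codeSystem, code) pairs for an algorithm."""
    crits = [o for s, p, o in graph if s == algo and p == EX + "hasCriterion"]
    return sorted((next(o for s, p, o in graph if s == c and p == EX + "codeSystem"),
                   next(o for s, p, o in graph if s == c and p == EX + "code"))
                  for c in crits)
```

Because the criteria are data rather than descriptive text, the same definition can be queried, validated, shared, and executed against an EHR database without manual re-coding, which is the gap the paper highlights.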
MIGS-GPU: Microarray Image Gridding and Segmentation on the GPU.
Katsigiannis, Stamos; Zacharia, Eleni; Maroulis, Dimitris
2017-05-01
Complementary DNA (cDNA) microarray is a powerful tool for simultaneously studying the expression level of thousands of genes. Nevertheless, the analysis of microarray images remains an arduous and challenging task due to the poor quality of the images that often suffer from noise, artifacts, and uneven background. In this study, the MIGS-GPU [Microarray Image Gridding and Segmentation on Graphics Processing Unit (GPU)] software for gridding and segmenting microarray images is presented. MIGS-GPU's computations are performed on the GPU by means of the compute unified device architecture (CUDA) in order to achieve fast performance and increase the utilization of available system resources. Evaluation on both real and synthetic cDNA microarray images showed that MIGS-GPU provides better performance than state-of-the-art alternatives, while the proposed GPU implementation achieves significantly lower computational times compared to the respective CPU approaches. Consequently, MIGS-GPU can be an advantageous and useful tool for biomedical laboratories, offering a user-friendly interface that requires minimum input in order to run.
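The gridding step can be illustrated on the CPU with a projection profile: sum intensities down each column and treat empty columns as separators between spot columns. The tiny synthetic "image" below is invented; MIGS-GPU performs this kind of work with CUDA on real, noisy microarray scans.

```python
# CPU sketch of projection-profile gridding: column sums of a tiny
# synthetic intensity grid, with zero-sum columns taken as gaps between
# spots. Illustrative only; not the MIGS-GPU algorithm itself.

def column_profile(img):
    """Sum of intensities in each column."""
    return [sum(col) for col in zip(*img)]

def gap_columns(img):
    """Columns with no signal separate adjacent spot columns in the grid."""
    return [i for i, s in enumerate(column_profile(img)) if s == 0]

img = [
    [1, 1, 0, 2, 2],
    [1, 2, 0, 1, 2],
    [2, 1, 0, 2, 1],
]
print(gap_columns(img))  # [2]
```

On real images the profile valleys are shallow rather than exactly zero because of noise and uneven background, which is why robust gridding (and its GPU acceleration) is worthwhile.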
Bringing Federated Identity to Grid Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teheran, Jeny
The Fermi National Accelerator Laboratory (FNAL) is facing the challenge of providing scientific data access and grid submission to scientific collaborations that span the globe but are hosted at FNAL. Users in these collaborations are currently required to register as an FNAL user and obtain FNAL credentials to access grid resources to perform their scientific computations. These requirements burden researchers with managing additional authentication credentials, and put additional load on FNAL for managing user identities. Our design integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and MyProxy with the FNAL grid submission system to provide secure access for users from diverse experiments and collaborations without requiring each user to have authentication credentials from FNAL. The design automates the handling of certificates so users do not need to manage them manually. Although the initial implementation is for FNAL's grid submission system, the design and the core of the implementation are general and could be applied to other distributed computing systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
A video on computer security is described. Lonnie Moore, the Computer Security Manager, CSSM/CPPM at Lawrence Livermore National Laboratory (LLNL) and Gale Warshawsky, the Coordinator for Computer Security Education and Awareness at LLNL, wanted to share topics such as computer ethics, software piracy, privacy issues, and protecting information in a format that would capture and hold an audience's attention. Four Computer Security Short Subject videos were produced which ranged from 1-3 minutes each. These videos are very effective education and awareness tools that can be used to generate discussions about computer security concerns and good computing practices.
Robotic vehicles for planetary exploration
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Matthies, Larry; Gennery, Donald; Cooper, Brian; Nguyen, Tam; Litwin, Todd; Mishkin, Andrew; Stone, Henry
1992-01-01
A program to develop planetary rover technology is underway at the Jet Propulsion Laboratory (JPL) under sponsorship of the National Aeronautics and Space Administration. Developmental systems with the necessary sensing, computing, power, and mobility resources to demonstrate realistic forms of control for various missions have been developed, and initial testing has been completed. These testbed systems and the associated navigation techniques used are described. Particular emphasis is placed on three technologies: Computer-Aided Remote Driving (CARD), Semiautonomous Navigation (SAN), and behavior control. It is concluded that, through the development and evaluation of such technologies, research at JPL has expanded the set of viable planetary rover mission possibilities beyond the limits of remotely teleoperated systems such as Lunakhod. These are potentially applicable to exploration of all the solid planetary surfaces in the solar system, including Mars, Venus, and the moons of the gas giant planets.
First principles statistical mechanics of alloys and magnetism
NASA Astrophysics Data System (ADS)
Eisenbach, Markus; Khan, Suffian N.; Li, Ying Wai
Modern high performance computing resources are enabling the exploration of the statistical physics of phase spaces with increasing size and higher fidelity of the Hamiltonian of the systems. For selected systems, this now allows the combination of Density Functional based first principles calculations with classical Monte Carlo methods for parameter-free, predictive thermodynamics of materials. We combine our locally self-consistent real-space multiple scattering method for solving the Kohn-Sham equation with Wang-Landau Monte Carlo calculations (WL-LSMS). In the past we have applied this method to the calculation of Curie temperatures in magnetic materials. Here we will present direct calculations of the chemical order-disorder transitions in alloys. We present our calculated transition temperature for the chemical ordering in CuZn and the temperature dependence of the short-range order parameter and specific heat. Finally we will present the extension of the WL-LSMS method to magnetic alloys, thus allowing the investigation of the interplay of magnetism, structure and chemical order in ferrous alloys. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division and it used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory.
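The Wang-Landau step is easy to write down for a toy system. Below, the "energy" is simply the number of up-spins among N independent spins, so the exact density of states is the binomial coefficient. This is only a cartoon of the sampling logic; in WL-LSMS each energy evaluation is a full first-principles calculation.

```python
# Toy Wang-Landau sampler estimating ln g(E) for E = number of up-spins
# among N independent spins (exact answer: ln C(N, E)). Illustrative
# sketch of the algorithm only, not the WL-LSMS implementation.

import math
import random

def wang_landau(n_spins=8, f_final=1e-4, flat=0.8, seed=1):
    rng = random.Random(seed)
    spins = [0] * n_spins
    energy = 0                        # current number of up-spins
    log_g = [0.0] * (n_spins + 1)     # running estimate of ln g(E)
    hist = [0] * (n_spins + 1)
    log_f = 1.0                       # ln of the modification factor
    while log_f > f_final:
        for _ in range(10000):
            i = rng.randrange(n_spins)
            new_e = energy + (1 if spins[i] == 0 else -1)
            # Accept the flip with probability min(1, g(E)/g(E')).
            if math.log(rng.random()) < log_g[energy] - log_g[new_e]:
                spins[i] ^= 1
                energy = new_e
            log_g[energy] += log_f    # raise ln g at the visited energy
            hist[energy] += 1
        # When the energy histogram is flat, reset it and refine the factor.
        if min(hist) > flat * (sum(hist) / len(hist)):
            hist = [0] * (n_spins + 1)
            log_f /= 2.0
    # Normalize so that the total number of states is 2**n_spins.
    offset = max(log_g)
    total = sum(math.exp(lg - offset) for lg in log_g)
    norm = math.log(2 ** n_spins) - math.log(total) - offset
    return [lg + norm for lg in log_g]

log_g_est = wang_landau()   # estimated ln g(E) for E = 0..8
```

Because each visit raises ln g at the current energy, the walk is pushed away from well-sampled energies, producing the flat histogram that lets the density of states, and hence the full thermodynamics, be reconstructed from a single run.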
First-Principles Study of Superconductivity in Ultra- thin Pb Films
NASA Astrophysics Data System (ADS)
Noffsinger, Jesse; Cohen, Marvin L.
2010-03-01
Recently, superconductivity in ultrathin layered Pb has been confirmed in samples with as few as two atomic layers [S. Qin, J. Kim, Q. Niu, and C.-K. Shih, Science 2009]. Interestingly, the prototypical strong-coupling superconductor exhibits different Tc's for differing surface reconstructions in samples with only two monolayers. Additionally, Tc is seen to oscillate as the number of atomic layers is increased. Using first principles techniques based on Wannier functions, we analyze the electronic structure, lattice dynamics and electron-phonon coupling for varying thicknesses and surface reconstructions of layered Pb. We discuss results as they relate to superconductivity in the bulk, for which accurate calculations of superconducting properties can be compared to experiment [W. L. McMillan and J.M. Rowell, PRL 1965]. This work was supported by National Science Foundation Grant No. DMR07-05941, the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Computational resources have been provided by the Lawrencium computational cluster resource provided by the IT Division at the Lawrence Berkeley National Laboratory (Supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231)
Performance Assessment Institute-NV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lombardo, Joesph
2012-12-31
The National Supercomputing Center for Energy and the Environment’s intention is to purchase a multi-purpose computer cluster in support of the Performance Assessment Institute (PA Institute). The PA Institute will serve as a research consortium located in Las Vegas, Nevada, with membership that includes: national laboratories, universities, industry partners, and domestic and international governments. This center will provide a one-of-a-kind centralized facility for the accumulation of information for use by Institutions of Higher Learning, the U.S. Government, and Regulatory Agencies and approved users. This initiative will enhance and extend High Performance Computing (HPC) resources in Nevada to support critical national and international needs in "scientific confirmation". The PA Institute will be promoted as the leading Modeling, Learning and Research Center worldwide. The program proposes to utilize the existing supercomputing capabilities and alliances of the University of Nevada Las Vegas as a base, and to extend these resources and capabilities through a collaborative relationship with its membership. The PA Institute will provide an academic setting for interactive sharing, learning, mentoring and monitoring of multi-disciplinary performance assessment and performance confirmation information. The role of the PA Institute is to facilitate research, knowledge-increase, and knowledge-sharing among users.
Distributed Energy Resource (DER) Cybersecurity Standards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleem, Danish; Johnson, Jay
This presentation covers the work that Sandia National Laboratories and National Renewable Energy Laboratory are doing for distributed energy resource cybersecurity standards, prepared for NREL's Annual Cybersecurity & Resilience Workshop on October 9-10, 2017.
Health care information infrastructure: what will it be and how will we get there?
NASA Astrophysics Data System (ADS)
Kun, Luis G.
1996-02-01
During the first Health Care Technology Policy (HCTP) conference last year, during Health Care Reform, four major issues were brought up in regards to the underway efforts to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the High Performance Computers & Communications (HPCC), and the so-called "Patient Card". More specifically, it was explained how a national information system will greatly affect the way health care delivery is provided to the United States public and reduce its costs. These four issues were: Constructing a National Information Infrastructure (NII); Building a Computer Based Patient Record System; Bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; Utilizing Government (e.g. DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs and accelerate technology transfer to address health care issues. During the second HCTP conference, in mid-1995, a section of this meeting entitled "Health Care Technology Assets of the Federal Government" addressed benefits of the technology transfer which should occur for maximizing already developed resources. Also, a section entitled "Transfer and Utilization of Government Technology Assets to the Private Sector" looked at both Health Care and non-Health Care related technologies, since many areas such as Information Technologies (i.e. imaging, communications, archival/retrieval, systems integration, information display, multimedia, heterogeneous data bases, etc.) already exist and are part of our National Labs and/or other federal agencies, i.e. ARPA. Although these technologies are not labeled under "Health Care" programs, they could provide enormous value to address technical needs.
An additional issue deals with both the technical (hardware, software) and human expertise that resides within these labs and their possible role in creating cost effective solutions.
NASA Astrophysics Data System (ADS)
Kun, Luis G.
1995-10-01
During the first Health Care Technology Policy conference last year, during health care reform, four major issues were brought up in regards to the efforts underway to develop a computer based patient record (CBPR), the National Information Infrastructure (NII) as part of the high performance computers and communications (HPCC), and the so-called 'patient card.' More specifically it was explained how a national information system will greatly affect the way health care delivery is provided to the United States public and reduce its costs. These four issues were: (1) Constructing a national information infrastructure (NII); (2) Building a computer based patient record system; (3) Bringing the collective resources of our national laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; (4) Utilizing government (e.g., DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs, and accelerate technology transfer to address health care issues. This year a section of this conference entitled: 'Health Care Technology Assets of the Federal Government' addresses benefits of the technology transfer which should occur for maximizing already developed resources. This section entitled: 'Transfer and Utilization of Government Technology Assets to the Private Sector,' will look at both health care and non-health care related technologies since many areas such as information technologies (i.e. imaging, communications, archival/retrieval, systems integration, information display, multimedia, heterogeneous data bases, etc.) already exist and are part of our national labs and/or other federal agencies, i.e., ARPA. Although these technologies are not labeled under health care programs, they could provide enormous value to address technical needs.
An additional issue deals with both the technical (hardware, software) and human expertise that resides within these labs and their possible role in creating cost effective solutions.
Resolving Controversies Concerning the Kinetic Structure of Multi-Ion Plasma Shocks
NASA Astrophysics Data System (ADS)
Keenan, Brett; Simakov, Andrei; Chacon, Luis; Taitano, William
2017-10-01
Strong collisional shocks in multi-ion plasmas are featured in several high-energy-density environments, including Inertial Confinement Fusion (ICF) implosions. Yet basic structural features of these shocks remain poorly understood (e.g., the dependence of the shock width on the Mach number and the plasma ion composition, and temperature decoupling between ion species), causing controversies in the literature, even for stationary shocks in planar geometry. Using a LANL-developed, high-fidelity, 1D-2V Vlasov-Fokker-Planck code (iFP), as well as direct comparisons to multi-ion hydrodynamic simulations and semi-analytic predictions, we critically examine steady-state, planar shocks in two-ion-species plasmas and put forward resolutions to these controversies. This work was supported by the Los Alamos National Laboratory LDRD Program and the Metropolis Postdoctoral Fellowship (for W.T.T.), and used resources provided by the Los Alamos National Laboratory Institutional Computing Program.
A Comparison of the Apple Macintosh and IBM PC in Laboratory Applications.
ERIC Educational Resources Information Center
Williams, Ron
1986-01-01
Compares Apple Macintosh and IBM PC microcomputers in terms of their usefulness in the laboratory. No attempt is made to equalize the two computer systems since they represent opposite ends of the computer spectrum. Indicates that the IBM PC is the most useful general-purpose personal computer for laboratory applications. (JN)
Voting with Their Seats: Computer Laboratory Design and the Casual User
ERIC Educational Resources Information Center
Spennemann, Dirk H. R.; Atkinson, John; Cornforth, David
2007-01-01
Student computer laboratories are provided by most teaching institutions around the world; however, what is the most effective layout for such facilities? The log-in data files from computer laboratories at a regional university in Australia were analysed to determine whether there was a pattern in student seating. In particular, it was…
ERIC Educational Resources Information Center
Newby, Michael; Marcoulides, Laura D.
2008-01-01
Purpose: The purpose of this paper is to model the relationship between student performance, student attitudes, and computer laboratory environments. Design/methodology/approach: Data were collected from 234 college students enrolled in courses that involved the use of a computer to solve problems and provided the laboratory experience by means of…
ERIC Educational Resources Information Center
Conlon, Michael P.; Mullins, Paul
2011-01-01
The Computer Science Department at Slippery Rock University created a laboratory for its Computer Networks and System Administration and Security courses under relaxed financial constraints. This paper describes the department's experience designing and using this laboratory, including lessons learned and descriptions of some student projects…
Lawrence Berkeley Laboratory Institutional Plan, FY 1993--1998
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chew, Joseph T.; Stroh, Suzanne C.; Maio, Linda R.
1992-10-01
The FY 1993--1998 Institutional Plan provides an overview of the Lawrence Berkeley Laboratory mission, strategic plan, scientific initiatives, research programs, environment and safety program plans, educational and technology transfer efforts, human resources, and facilities needs. The Strategic Plan section identifies long-range conditions that can influence the Laboratory, potential research trends, and several management implications. The Initiatives section identifies potential new research programs that represent major long-term opportunities for the Laboratory and the resources required for their implementation. The Scientific and Technical Programs section summarizes current programs and potential changes in research program activity. The Environment, Safety, and Health section describes the management systems and programs underway at the Laboratory to protect the environment, the public, and the employees. The Technology Transfer and Education programs section describes current and planned programs to enhance the nation's scientific literacy and human infrastructure and to improve economic competitiveness. The Human Resources section identifies LBL staff composition and development programs. The section on Site and Facilities discusses resources required to sustain and improve the physical plant and its equipment. The Resource Projections are estimates of required budgetary authority for the Laboratory's ongoing research programs. The plan is an institutional management report for integration with the Department of Energy's strategic planning activities that is developed through an annual planning process. The plan identifies technical and administrative directions in the context of the National Energy Strategy and the Department of Energy's program planning initiatives. Preparation of the plan is coordinated by the Office for Planning and Development from information contributed by the Laboratory's scientific and support divisions.
An imputed genotype resource for the laboratory mouse
Szatkiewicz, Jin P.; Beane, Glen L.; Ding, Yueming; Hutchins, Lucie; de Villena, Fernando Pardo-Manuel; Churchill, Gary A.
2009-01-01
We have created a high-density SNP resource encompassing 7.87 million polymorphic loci across 49 inbred mouse strains of the laboratory mouse by combining data available from public databases and training a hidden Markov model to impute missing genotypes in the combined data. The strong linkage disequilibrium found in dense sets of SNP markers in the laboratory mouse provides the basis for accurate imputation. Using genotypes from eight independent SNP resources, we empirically validated the quality of the imputed genotypes and demonstrate that they are highly reliable for most inbred strains. The imputed SNP resource will be useful for studies of natural variation and complex traits. It will facilitate association study designs by providing high density SNP genotypes for large numbers of mouse strains. We anticipate that this resource will continue to evolve as new genotype data become available for laboratory mouse strains. The data are available for bulk download or query at http://cgd.jax.org/. PMID:18301946
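The imputation step described above can be illustrated with a toy hidden Markov model: the hidden state is the (unknown) founder haplotype at each locus, observed genotypes are emissions, and missing sites are filled from the most likely state path. Everything below (the founder panel, error rate, and switch probability) is an illustrative assumption, not the authors' actual model:

```python
# Toy sketch of HMM-based genotype imputation (illustrative only).
# Hidden states are founder haplotypes, observed 0/1 genotypes are
# emissions, and None marks a missing genotype to be imputed.

def viterbi_impute(obs, founders, p_switch=0.01, p_err=0.02):
    """obs: list of 0/1/None; founders: one allele list per hidden state."""
    n_states, n_sites = len(founders), len(obs)

    def emit(state, site):
        g = obs[site]
        if g is None:                      # missing site is uninformative
            return 1.0
        return 1 - p_err if founders[state][site] == g else p_err

    # Viterbi recursion in plain probability space (fine for toy sizes).
    v = [[emit(s, 0) / n_states for s in range(n_states)]]
    back = []
    for t in range(1, n_sites):
        row, ptr = [], []
        for s in range(n_states):
            best, arg = max(
                (v[-1][q] * ((1 - p_switch) if q == s
                             else p_switch / (n_states - 1)), q)
                for q in range(n_states)
            )
            row.append(best * emit(s, t))
            ptr.append(arg)
        v.append(row)
        back.append(ptr)

    # Backtrace the most likely founder path, then fill missing sites.
    path = [max(range(n_states), key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return [obs[t] if obs[t] is not None else founders[path[t]][t]
            for t in range(n_sites)]
```

With a low switch probability encoding the strong linkage disequilibrium the abstract mentions, a single flanking genotype usually pins down the local haplotype, which is why dense SNP panels impute so reliably.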
Adams, Martin; Stamper, Paul D.; Dasgupta, Debanjana; Hewson, Roger; Buck, Charles D.; Richards, Allen L.; Hay, John
2016-01-01
Strategic laboratory planning in limited-resource areas is essential for addressing global health security issues. Establishing a national reference laboratory, especially one with BSL-3 or -4 biocontainment facilities, requires a heavy investment of resources, a multisectoral approach, and commitments from multiple stakeholders. We make the case for donor organizations and recipient partners to develop a comprehensive laboratory operations roadmap that addresses factors such as mission and roles, engaging national and political support, securing financial support, defining stakeholder involvement, fostering partnerships, and building trust. Successful development occurred with projects in African countries and in Azerbaijan, where strong leadership and a clear management framework have been key to success. A clearly identified and agreed-upon management framework facilitates assigning responsibility for developing laboratory capabilities and support services, including biosafety and biosecurity, quality assurance, equipment maintenance, supply chain establishment, staff certification and training, retention of human resources, and sustainable operating revenue. These capabilities and support services pose rate-limiting yet necessary challenges. Laboratory capabilities depend on mission and role, as determined by all stakeholders, and demonstrate the need for relevant metrics to monitor the success of the laboratory, including support for internal and external audits. Our analysis concludes that alternative frameworks for success exist for developing and implementing capabilities at regional and national levels in limited-resource areas. Thus, achieving a balance between local procedures and accepted international standards when standardizing practices is a prerequisite for integrating new facilities into a country's existing public health infrastructure and into the overall international scientific community. PMID:27559843
CE-ACCE: The Cloud Enabled Advanced sCience Compute Environment
NASA Astrophysics Data System (ADS)
Cinquini, L.; Freeborn, D. J.; Hardman, S. H.; Wong, C.
2017-12-01
Traditionally, Earth Science data from NASA remote sensing instruments has been processed by building custom data processing pipelines (often based on a common workflow engine or framework) which are typically deployed and run on an internal cluster of computing resources. This approach has some intrinsic limitations: it requires each mission to develop and deploy a custom software package on top of the adopted framework; it makes use of dedicated hardware, network and storage resources, which must be specifically purchased, maintained and re-purposed at mission completion; and computing services cannot be scaled on demand beyond the capability of the available servers. More recently, the rise of Cloud computing, coupled with other advances in containerization technology (most prominently, Docker) and micro-services architecture, has enabled a new paradigm, whereby space mission data can be processed through standard system architectures, which can be seamlessly deployed and scaled on demand on either on-premise clusters, or commercial Cloud providers. In this talk, we will present one such architecture named CE-ACCE ("Cloud Enabled Advanced sCience Compute Environment"), which we have been developing at the NASA Jet Propulsion Laboratory over the past year. CE-ACCE is based on the Apache OODT ("Object Oriented Data Technology") suite of services for full data lifecycle management, which are turned into a composable array of Docker images, and complemented by a plug-in model for mission-specific customization. We have applied this infrastructure to both flying and upcoming NASA missions, such as ECOSTRESS and SMAP, and demonstrated deployment on the Amazon Cloud, either using simple EC2 instances, or advanced AWS services such as Amazon Lambda and ECS (EC2 Container Services).
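The composable-Docker-image idea above can be sketched with a small command-builder: each pipeline stage is a container image whose output directory feeds the next stage's input. The image names, tags, and mount points below are invented for illustration and are not the actual CE-ACCE or OODT interfaces:

```python
# Sketch: compose a processing pipeline from per-stage container images.
# Image names and mount points are illustrative assumptions.

def stage_command(image, in_dir, out_dir, extra_args=()):
    """Build the `docker run` invocation for one pipeline stage, mounting
    the input directory read-only and the output directory read-write."""
    return ["docker", "run", "--rm",
            "-v", f"{in_dir}:/data/in:ro",
            "-v", f"{out_dir}:/data/out",
            image, *extra_args]

def pipeline_commands(stages, work_root="/scratch"):
    """Chain stages so each stage's output directory feeds the next."""
    cmds, in_dir = [], f"{work_root}/raw"
    for i, image in enumerate(stages):
        out_dir = f"{work_root}/stage{i}"
        cmds.append(stage_command(image, in_dir, out_dir))
        in_dir = out_dir
    return cmds
```

Because each stage is just an image reference plus mounts, the same command list can be handed to a local cluster scheduler or a Cloud container service, which is the portability argument the abstract makes.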
Federation in genomics pipelines: techniques and challenges.
Chaterji, Somali; Koo, Jinkyu; Li, Ninghui; Meyer, Folker; Grama, Ananth; Bagchi, Saurabh
2017-08-29
Federation is a popular concept in building distributed cyberinfrastructures, whereby computational resources are provided by multiple organizations through a unified portal, decreasing the complexity of moving data back and forth among multiple organizations. Federation has been used in bioinformatics only to a limited extent, namely, federation of datastores, e.g. the SBGrid Consortium for structural biology and Gene Expression Omnibus (GEO) for functional genomics. Here, we posit that it is important to federate both computational resources (CPU, GPU, FPGA, etc.) and datastores to support popular bioinformatics portals, with fast-increasing data volumes and increasing processing requirements. A prime example, and one that we discuss here, is in genomics and metagenomics. It is critical that the processing of the data be done without having to transport the data across large network distances. We exemplify our design and development through our experience with metagenomics-RAST (MG-RAST), the most popular metagenomics analysis pipeline. Currently, it is hosted completely at Argonne National Laboratory. However, through a recently started collaborative National Institutes of Health project, we are taking steps toward federating this infrastructure. Because MG-RAST is a widely used resource, we must move toward federation without disrupting its 50,000 annual users. In this article, we describe the computational tools that will be useful for federating a bioinformatics infrastructure and the open research challenges that we see in federating such infrastructures. It is hoped that our manuscript can serve to spur greater federation of bioinformatics infrastructures by showing the steps involved, and thus allow them to scale to support larger user bases. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
The JASMIN Cloud: specialised and hybrid to meet the needs of the Environmental Sciences Community
NASA Astrophysics Data System (ADS)
Kershaw, Philip; Lawrence, Bryan; Churchill, Jonathan; Pritchard, Matt
2014-05-01
Cloud computing provides enormous opportunities for the research community. The large public cloud providers offer near-limitless scaling capability. However, adapting the Cloud to scientific workloads is not without its problems. The commodity nature of public cloud infrastructure can be at odds with the specialist requirements of the research community. Issues such as trust, ownership of data, WAN bandwidth and costing models pose additional barriers to more widespread adoption. Alongside the application of the public cloud for scientific applications, a number of private cloud initiatives are underway in the research community, of which the JASMIN Cloud is one example. Here, cloud service models are being effectively super-imposed over more established services such as data centres, compute cluster facilities and Grids. These have the potential to deliver the specialist infrastructure needed by the science community coupled with the benefits of a Cloud service model. The JASMIN facility, based at the Rutherford Appleton Laboratory, was established in 2012 to support the data analysis requirements of the climate and Earth Observation community. In its first year of operation, the 5PB of available storage capacity was filled and the hosted compute capability used extensively. JASMIN has modelled the concept of a centralised large-volume data analysis facility. Key characteristics have enabled its success: peta-scale fast disk connected via low-latency networks to compute resources, and the use of virtualisation for effective management of the resources for a range of users. A second phase is now underway, funded through NERC's (Natural Environment Research Council) Big Data initiative. This will see significant expansion of the available resources, with a doubling of disk-based storage to 12PB and an increase of compute capacity by a factor of ten to over 3000 processing cores.
This expansion is accompanied by a broadening in the scope for JASMIN, as a service available to the entire UK environmental science community. Experience with the first phase demonstrated the range of user needs. A trade-off is needed between access privileges to resources, flexibility of use and security. This has influenced the form and types of service under development for the new phase. JASMIN will deploy a specialised private cloud organised into "Managed" and "Unmanaged" components. In the Managed Cloud, users have direct access to the storage and compute resources for optimal performance but for reasons of security, via a more restrictive PaaS (Platform-as-a-Service) interface. The Unmanaged Cloud is deployed in an isolated part of the network but co-located with the rest of the infrastructure. This enables greater liberty to tenants - full IaaS (Infrastructure-as-a-Service) capability to provision customised infrastructure - whilst at the same time protecting more sensitive parts of the system from direct access using these elevated privileges. The private cloud will be augmented with cloud-bursting capability so that it can exploit the resources available from public clouds, making it effectively a hybrid solution. A single interface will overlay the functionality of both the private cloud and external interfaces to public cloud providers giving users the flexibility to migrate resources between infrastructures as requirements dictate.
Laboratory for Energy-Related Health Research: Annual report, fiscal year 1987
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abell, D.L.
1989-04-01
The laboratory's research objective is to provide new knowledge for an improved understanding of the potential bioenvironmental and occupational health problems associated with energy utilization. Our purpose is to contribute to the safe and healthful development of energy resources for the benefit of mankind. This research encompasses several areas of basic investigation that relate to toxicological and biomedical problems associated with potentially toxic chemical and radioactive substances and ionizing radiation, with particular emphasis on carcinogenicity. Studies of systemic injury and nuclear-medical diagnostic and therapeutic methods are also involved. This program is interdisciplinary; it involves physics, chemistry, environmental engineering, biophysics and biochemistry, cellular and molecular biology, physiology, immunology, toxicology, both human and veterinary medicine, nuclear medicine, pathology, hematology, radiation biology, reproductive biology, oncology, biomathematics, and computer science. The principal themes of the research at LEHR center around the biology, radiobiology, and health status of the skeleton and its blood-forming constituents; the toxicology and properties of airborne materials; the beagle as an experimental animal model; carcinogenesis; and the scaling of the results from laboratory animal studies to man for appropriate assessment of risk.
Flexible services for the support of research.
Turilli, Matteo; Wallom, David; Williams, Chris; Gough, Steve; Curran, Neal; Tarrant, Richard; Bretherton, Dan; Powell, Andy; Johnson, Matt; Harmer, Terry; Wright, Peter; Gordon, John
2013-01-28
Cloud computing has been increasingly adopted by users and providers to promote flexible, scalable and tailored access to computing resources. Nonetheless, the consolidation of this paradigm has uncovered some of its limitations. Initially devised by corporations with direct control over large amounts of computational resources, cloud computing is now being endorsed by organizations with limited resources or with a more articulated, less direct control over these resources. The challenge for these organizations is to leverage the benefits of cloud computing while dealing with limited and often widely distributed computing resources. This study focuses on the adoption of cloud computing by higher education institutions and addresses two main issues: flexible and on-demand access to a large amount of storage resources, and scalability across a heterogeneous set of cloud infrastructures. The proposed solutions leverage a federated approach to cloud resources in which users access multiple and largely independent cloud infrastructures through a highly customizable broker layer. This approach allows for a uniform authentication and authorization infrastructure, fine-grained policy specification and the aggregation of accounting and monitoring. Within a loosely coupled federation of cloud infrastructures, users can access vast amounts of data without copying them across cloud infrastructures and can scale their resource provision when the local cloud resources become insufficient.
A Model for Designing Adaptive Laboratory Evolution Experiments.
LaCroix, Ryan A; Palsson, Bernhard O; Feist, Adam M
2017-04-15
The occurrence of mutations is a cornerstone of the evolutionary theory of adaptation, capitalizing on the rare chance that a mutation confers a fitness benefit. Natural selection is increasingly being leveraged in laboratory settings for industrial and basic science applications. Despite increasing deployment, there are no standardized procedures available for designing and performing adaptive laboratory evolution (ALE) experiments. Thus, there is a need to optimize the experimental design, specifically for determining when to consider an experiment complete and for balancing outcomes with available resources (i.e., laboratory supplies, personnel, and time). To design and to better understand ALE experiments, a simulator, ALEsim, was developed, validated, and applied to the optimization of ALE experiments. The effects of various passage sizes were experimentally determined and subsequently evaluated with ALEsim, to explain differences in experimental outcomes. Furthermore, a beneficial mutation rate of 10^-6.9 to 10^-8.4 mutations per cell division was derived. A retrospective analysis of ALE experiments revealed that passage sizes typically employed in serial passage batch culture ALE experiments led to inefficient production and fixation of beneficial mutations. ALEsim and the results described here will aid in the design of ALE experiments to fit the exact needs of a project while taking into account the resources required and will lower the barriers to entry for this experimental technique. IMPORTANCE ALE is a widely used scientific technique to increase scientific understanding, as well as to create industrially relevant organisms. The manner in which ALE experiments are conducted is highly manual and uniform, with little optimization for efficiency. Such inefficiencies result in suboptimal experiments that can take multiple months to complete.
With the availability of automation and computer simulations, we can now perform these experiments in an optimized fashion and can design experiments to generate greater fitness in an accelerated time frame, thereby pushing the limits of what adaptive laboratory evolution can achieve. Copyright © 2017 American Society for Microbiology.
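The passage-size trade-off the authors quantify can be seen in a back-of-the-envelope model: beneficial mutants arise in proportion to the number of divisions per batch, survive the bottleneck with probability equal to the dilution fraction, and must then escape drift (establishment probability roughly 2s for a small fitness advantage s). This is a deterministic expected-value sketch, not ALEsim:

```python
def expected_established_mutants(final_size, passage_size, mu=1e-7, s=0.1):
    """Expected beneficial mutants per batch that both survive the serial
    passage bottleneck and establish against drift (prob ~ 2s).
    mu is a beneficial mutation rate per cell division, an illustrative
    value within the 10^-6.9 to 10^-8.4 range derived in the abstract."""
    divisions = final_size - passage_size      # regrowth to final_size
    new_mutants = mu * divisions               # expected new beneficial mutants
    dilution = passage_size / final_size       # bottleneck survival prob
    return new_mutants * dilution * 2 * s
```

Doubling the passage size slightly reduces the number of divisions but doubles the bottleneck survival probability; for typical values the second effect dominates, which is one way to see why small passage sizes waste beneficial mutations.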
Advanced Optical Burst Switched Network Concepts
NASA Astrophysics Data System (ADS)
Nejabati, Reza; Aracil, Javier; Castoldi, Piero; de Leenheer, Marc; Simeonidou, Dimitra; Valcarenghi, Luca; Zervas, Georgios; Wu, Jian
In recent years, as the bandwidth and the speed of networks have increased significantly, a new generation of network-based applications using the concept of distributed computing and collaborative services is emerging (e.g., Grid computing applications). The use of the available fiber and DWDM infrastructure for these applications is a logical choice, offering huge amounts of cheap bandwidth and ensuring global reach of computing resources [230]. Currently, there is a great deal of interest in deploying optical circuit (wavelength) switched network infrastructure for distributed computing applications that require long-lived wavelength paths and address the specific needs of a small number of well-known users. Typical users are particle physicists who, due to their international collaborations and experiments, generate enormous amounts of data (petabytes per year). These users require a network infrastructure that can support processing and analysis of large datasets through globally distributed computing resources [230]. However, providing bandwidth services at wavelength granularity is not an efficient and scalable solution for applications and services that address a wider base of user communities with different traffic profiles and connectivity requirements. Examples of such applications include scientific collaboration on a smaller scale (e.g., bioinformatics, environmental research), distributed virtual laboratories (e.g., remote instrumentation), e-health, national security and defense, personalized learning environments and digital libraries, and evolving broadband user services (e.g., high-resolution home video editing, real-time rendering, high-definition interactive TV). As a specific example, in e-health services, and in particular mammography, the size and quantity of images produced by remote screening impose stringent network requirements.
Initial calculations have shown that for 100 patients to be screened remotely, the network would have to securely transport 1.2 GB of data every 30 s [230]. It follows that these types of applications need a new network infrastructure and transport technology that makes large amounts of bandwidth at subwavelength granularity, along with storage, computation, and visualization resources, potentially available to a wide user base for specified time durations. As these collaborative, network-based applications evolve to address a wide range and large number of users, it is infeasible to build dedicated networks for each application type or category. Consequently, there should be an adaptive network infrastructure able to support all application types, each with their own access, network, and resource usage patterns. This infrastructure should offer flexible and intelligent network elements and control mechanisms able to deploy new applications quickly and efficiently.
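The quoted figure implies a sustained line rate that is easy to check: 1.2 GB every 30 s is 0.32 Gbit/s before protocol overhead. A one-liner makes the conversion explicit (decimal gigabytes assumed):

```python
def required_throughput_gbps(gigabytes, seconds):
    """Sustained rate in Gbit/s needed to move `gigabytes` (decimal GB)
    within `seconds`; protocol overhead and retransmissions ignored."""
    return gigabytes * 8 / seconds

# 1.2 GB every 30 s, as in the mammography example
rate = required_throughput_gbps(1.2, 30)   # 0.32 Gbit/s sustained
```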
Earthquake and failure forecasting in real-time: A Forecasting Model Testing Centre
NASA Astrophysics Data System (ADS)
Filgueira, Rosa; Atkinson, Malcolm; Bell, Andrew; Main, Ian; Boon, Steven; Meredith, Philip
2013-04-01
Across Europe there are a large number of rock deformation laboratories, each of which runs many experiments. Similarly, there are a large number of theoretical rock physicists who develop constitutive and computational models both for rock deformation and for changes in geophysical properties. Here we consider how to open up opportunities for sharing experimental data in a way that is integrated with multiple hypothesis testing. We present a prototype for a new forecasting model testing centre based on e-infrastructures for capturing and sharing data and models to accelerate Rock Physics (RP) research. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project as a test case. EFFORT is a multi-disciplinary collaboration between geoscientists, rock physicists and computer scientists. Brittle failure of the crust is likely to play a key role in controlling the timing of a range of geophysical hazards, such as volcanic eruptions, yet the predictability of brittle failure is unknown. Our aim is to provide a facility for developing and testing models to forecast brittle failure in experimental and natural data. Model testing is performed in real-time, verifiably prospective mode, in order to avoid the selection biases that are possible in retrospective analyses. The project will ultimately quantify the predictability of brittle failure, and how this predictability scales from simple, controlled laboratory conditions to the complex, uncontrolled real world. Experimental data are collected from controlled laboratory experiments, including data from the UCL laboratory and from the CREEP 2 project, which will undertake experiments in a deep-sea laboratory.
We illustrate the properties of the prototype testing centre by streaming and analysing realistically noisy synthetic data, as an aid to generating and improving testing methodologies under imperfect conditions. The forecasting model testing centre uses a repository to hold all the data and models and a catalogue to hold the corresponding metadata. It supports data transfer: we have developed the FAST (Flexible Automated Streaming Transfer) tool to upload data from RP laboratories to the repository; FAST sets up the data transfer requirements, automatically selects the transfer protocol, and automatically creates and stores metadata. Through web data access, users can create synthetic data by choosing a generator and supplying parameters, with the synthetic data automatically stored alongside the corresponding metadata; select data and models by searching the metadata using criteria designed for RP, with every dataset (synthetic or from the laboratory) and every model described in its respective catalogue, accessible via the web portal; upload and store models with associated metadata, providing an opportunity to share models, with the web portal soliciting and creating metadata describing each model; and run models and visualise results, submitting selected data and a model to a high-performance computing resource while hiding the technical details, with results displayed in accelerated time and stored to allow retrieval, inspection and aggregation. The proposed forecasting model testing centre could be integrated into EPOS. Its expected benefits are improved understanding of brittle failure prediction and its scalability to natural phenomena, accelerated and extensive testing and rapid sharing of insights, increased impact and visibility of RP and geoscience research, and resources for education and training. A key challenge is to agree on the framework for sharing RP data and models. Our work is a provocative first step.
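The repository-plus-catalogue pattern described above can be sketched in a few lines: each uploaded dataset is stored and a metadata record is created automatically. The field names below are invented for illustration; the centre's real schema is not specified here:

```python
import hashlib

def register_dataset(catalogue, name, payload, **metadata):
    """Store descriptive metadata for an uploaded dataset, auto-deriving
    checksum and size, mirroring the 'metadata are automatically created
    and stored' step. Field names are illustrative assumptions."""
    entry = {
        "name": name,
        "sha256": hashlib.sha256(payload).hexdigest(),
        "size_bytes": len(payload),
        **metadata,                  # e.g. instrument, experiment id
    }
    catalogue[name] = entry
    return entry
```

Searching the catalogue then reduces to filtering these records, which is what the web portal's "select data and models" step amounts to.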
Optimization of tomographic reconstruction workflows on geographically distributed resources
Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...
2016-01-01
New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms.
Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
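The three-stage model can be written down directly: end-to-end time is transfer plus queue wait plus parallel compute, and resource selection picks the site minimizing that sum. The parameter values in the sketch are arbitrary illustrations, not measurements from the paper:

```python
def estimate_workflow_time(data_gb, bandwidth_gbps, queue_wait_s,
                           tasks, task_time_s, n_nodes):
    """Three-stage estimate from the abstract: (i) data transfer,
    (ii) queue wait at the compute resource, (iii) reconstruction
    compute, assuming ideal parallel scaling across n_nodes."""
    t_transfer = data_gb * 8 / bandwidth_gbps        # seconds
    t_compute = tasks * task_time_s / n_nodes        # seconds
    return t_transfer + queue_wait_s + t_compute

def best_site(sites):
    """Pick the site minimizing the modeled end-to-end time.
    `sites` maps a site name to the argument tuple above."""
    return min(sites, key=lambda k: estimate_workflow_time(*sites[k]))
```

Note how the trade-off the paper exploits falls out of the model: a remote site with a longer queue can still win if its bandwidth and node count cut the transfer and compute terms enough.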
Optimization of tomographic reconstruction workflows on geographically distributed resources
Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.
2016-01-01
New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (i) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. 
Experimental evaluation shows that the proposed models and system can be used to select the optimum resources, which in turn can provide up to a 3.13× speedup (on the resources tested). Moreover, the error rates of the models range between 2.1% and 23.3% (considering workflow execution times), and the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149
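The three-stage execution-time model described in the abstract can be sketched as a simple cost estimator. This is a hedged illustration, not the paper's actual model: the function names, cluster parameters, and the near-linear compute-scaling assumption are all our own inventions.

```python
# Sketch of the three-stage model: total time = data transfer
# + queue wait + reconstruction compute. All parameters are invented.

def transfer_time(data_gb, bandwidth_gbps):
    """Seconds to move the dataset to the compute resource."""
    return data_gb * 8.0 / bandwidth_gbps

def compute_time(n_iterations, secs_per_iteration, n_nodes):
    """Iterative reconstruction time, assuming near-linear scaling."""
    return n_iterations * secs_per_iteration / n_nodes

def estimate_workflow_time(data_gb, bandwidth_gbps, queue_wait_s,
                           n_iterations, secs_per_iteration, n_nodes):
    return (transfer_time(data_gb, bandwidth_gbps)
            + queue_wait_s
            + compute_time(n_iterations, secs_per_iteration, n_nodes))

# Pick the resource with the smallest estimated end-to-end time.
resources = {
    "cluster_a": dict(bandwidth_gbps=10, queue_wait_s=600, n_nodes=128),
    "cluster_b": dict(bandwidth_gbps=1, queue_wait_s=30, n_nodes=32),
}
best = min(resources, key=lambda r: estimate_workflow_time(
    data_gb=500, n_iterations=100, secs_per_iteration=2000, **resources[r]))
```

Under these made-up parameters the faster interconnect and larger node count outweigh the longer queue wait, which is the kind of trade-off such a model is meant to expose.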
Earth Resources Laboratory research and technology
NASA Technical Reports Server (NTRS)
1983-01-01
The accomplishments of the Earth Resources Laboratory's research and technology program are reported. Sensors and data systems, the AGRISTARS project, applied research and data analysis, joint research projects, test and evaluation studies, and space station support activities are addressed.
Issues in undergraduate education in computational science and high performance computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marchioro, T.L. II; Martin, D.
1994-12-31
The ever-increasing need for mathematical and computational literacy in society and among members of the work force has generated enormous pressure to revise and improve the teaching of related subjects throughout the curriculum, particularly at the undergraduate level. The Calculus Reform movement is perhaps the best-known example of an organized initiative in this regard. The UCES (Undergraduate Computational Engineering and Science) project, an effort funded by the Department of Energy and administered through the Ames Laboratory, is sponsoring an informal and open discussion of the salient issues confronting efforts to improve and expand the teaching of computational science as a problem-oriented, interdisciplinary approach to scientific investigation. Although the format is open, the authors hope to consider pertinent questions such as: (1) How can faculty and research scientists obtain the recognition necessary to further excellence in teaching the mathematical and computational sciences? (2) What sort of educational resources, both hardware and software, are needed to teach computational science at the undergraduate level? Are traditional procedural languages sufficient? Are PCs enough? Are massively parallel platforms needed? (3) How can electronic educational materials be distributed in an efficient way? Can they be made interactive in nature? How should such materials be tied to the World Wide Web and the growing "Information Superhighway"?
CUBE (Computer Use by Engineers) symposium abstracts. [LASL, October 4--6, 1978]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruminer, J.J.
1978-07-01
This report presents the abstracts for the CUBE (Computer Use by Engineers) Symposium, October 4 through 6, 1978. Contributors are from Lawrence Livermore Laboratory, Los Alamos Scientific Laboratory, and Sandia Laboratories.
Enabling opportunistic resources for CMS Computing Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hufnagel, Dirk
2015-12-23
With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources, we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and Parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.
Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sulakhe, D.; Rodriguez, A.; Wilde, M.
2008-03-01
Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated, scalable system such as GADU that can run jobs on different Grids. It describes the resource-independent configuration of GADU using the Pegasus-based virtual data system, which makes high-throughput computational tools interoperable on heterogeneous Grid resources, and highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper does not go into the details of the problems involved or the lessons learned in using individual Grid resources, as these have already been published in our paper on the genome analysis research environment (GNARE); it focuses primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.
Report on Computing and Networking in the Space Science Laboratory by the SSL Computer Committee
NASA Technical Reports Server (NTRS)
Gallagher, D. L. (Editor)
1993-01-01
The Space Science Laboratory (SSL) at Marshall Space Flight Center is a multiprogram facility. Scientific research is conducted in four discipline areas: earth science and applications, solar-terrestrial physics, astrophysics, and microgravity science and applications. Representatives from each of these discipline areas participate in a Laboratory computer requirements committee, which developed this document. Its purpose is to establish and discuss Laboratory objectives for computing and networking in support of science, and to lay the foundation for a collective, multiprogram approach to providing these services. Special recognition is given to the importance of the national and international efforts of our research communities toward the development of interoperable, network-based computer applications.
Relating Solar Resource Variability to Cloud Type
NASA Astrophysics Data System (ADS)
Hinkelman, L. M.; Sengupta, M.
2012-12-01
Power production from renewable energy (RE) resources is rapidly increasing. Generation of renewable energy is quite variable since the solar and wind resources that form the inputs are, themselves, inherently variable. There is thus a need to understand the impact of renewable generation on the transmission grid. Such studies require estimates of high temporal and spatial resolution power output under various scenarios, which can be created from corresponding solar resource data. Satellite-based solar resource estimates are the best source of long-term solar irradiance data for the typically large areas covered by transmission studies. As satellite-based resource datasets are generally available at lower temporal and spatial resolution than required, there is, in turn, a need to downscale these resource data. Downscaling in both space and time requires information about solar irradiance variability, which is primarily a function of cloud types and properties. In this study, we analyze the relationship between solar resource variability and satellite-based cloud properties. One-minute resolution surface irradiance data were obtained from a number of stations operated by the National Oceanic and Atmospheric Administration (NOAA) under the Surface Radiation (SURFRAD) and Integrated Surface Irradiance Study (ISIS) networks as well as from NREL's Solar Radiation Research Laboratory (SRRL) in Golden, Colorado. Individual sites were selected so that a range of meteorological conditions would be represented. Cloud information at a nominal 4 km resolution and half hour intervals was derived from NOAA's Geostationary Operational Environmental Satellite (GOES) series of satellites. Cloud class information from the GOES data set was then used to select and composite irradiance data from the measurement sites.
The irradiance variability for each cloud classification was characterized using general statistics of the fluxes themselves and their variability in time, as represented by ramps computed for time scales from 10 s to 0.5 hr. The statistical relationships derived using this method will be presented, comparing and contrasting the statistics computed for the different cloud types. The implications for downscaling irradiances from satellites or forecast models will also be discussed.
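The ramp statistic described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual processing code, and the sample irradiance series below are invented: a ramp at a given time scale is the irradiance change over that window, and per-class variability is summarized by the standard deviation of the ramps.

```python
# Ramp variability of a 1-minute irradiance series (values in W/m^2).
# Series and step sizes are made-up examples, not SURFRAD/ISIS data.
import statistics

def ramps(irradiance, step):
    """Irradiance changes over a window of `step` samples."""
    return [irradiance[i + step] - irradiance[i]
            for i in range(len(irradiance) - step)]

def ramp_variability(irradiance, step):
    """Standard deviation of the ramps at one time scale."""
    return statistics.pstdev(ramps(irradiance, step))

clear  = [800, 801, 799, 800, 802, 801, 800, 799]   # stable, clear sky
broken = [800, 300, 750, 280, 820, 310, 790, 320]   # scattered clouds

# Broken-cloud conditions show far larger ramp variability.
assert ramp_variability(broken, 1) > ramp_variability(clear, 1)
```

Repeating the calculation for several `step` values gives the multi-time-scale ramp statistics the abstract refers to (10 s to 0.5 hr in the study).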
Real-Time, Sensor-Based Computing in the Laboratory.
ERIC Educational Resources Information Center
Badmus, O. O.; And Others
1996-01-01
Demonstrates the importance of Real-Time, Sensor-Based (RTSB) computing and how it can be easily and effectively integrated into university student laboratories. Describes the experimental processes, the process instrumentation and process-computer interface, the computer and communications systems, and typical software. Provides much technical…
Using Mosix for Wide-Area Computational Resources
Maddox, Brian G.
2004-01-01
One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.
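The automatic load balancing described above can be illustrated with a toy user-space sketch. Mosix itself does this inside the kernel by migrating live processes; here we only model the balancing policy, and the node names and load figures are invented.

```python
# Toy model of migration-based load balancing: repeatedly move one
# process from the most loaded node to the least loaded node until
# loads are within a threshold. Not Mosix code, just the policy idea.
def balance(loads, threshold=1):
    """loads: node name -> runnable process count. Mutates in place
    and returns the list of (source, destination) migrations made."""
    moves = []
    while True:
        hi = max(loads, key=loads.get)
        lo = min(loads, key=loads.get)
        if loads[hi] - loads[lo] <= threshold:
            return moves
        loads[hi] -= 1          # migrate one process off the hot node
        loads[lo] += 1
        moves.append((hi, lo))

cluster = {"desktop1": 9, "desktop2": 1, "fileserver": 2}
balance(cluster)
# Loads converge: every pair of nodes now differs by at most 1 process.
```

The total process count is conserved across migrations, which is the invariant any migration scheme must preserve.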
Prediction of Wind Energy Resources (PoWER) Users Guide
2016-01-01
ARL-TR-7573, US Army Research Laboratory, January 2016. Final report (dates covered: 09/2015 to 11/2015): Prediction of Wind Energy Resources (PoWER) User's Guide, by David P Sauter.
Contextuality as a Resource for Models of Quantum Computation with Qubits
NASA Astrophysics Data System (ADS)
Bermejo-Vega, Juan; Delfosse, Nicolas; Browne, Dan E.; Okay, Cihan; Raussendorf, Robert
2017-09-01
A central question in quantum computation is to identify the resources that are responsible for quantum speed-up. Quantum contextuality has been recently shown to be a resource for quantum computation with magic states for odd-prime dimensional qudits and two-dimensional systems with real wave functions. The phenomenon of state-independent contextuality poses a priori an obstruction to characterizing the case of regular qubits, the fundamental building block of quantum computation. Here, we establish contextuality of magic states as a necessary resource for a large class of quantum computation schemes on qubits. We illustrate our result with a concrete scheme related to measurement-based quantum computation.
Computing arrival times of firefighting resources for initial attack
Romain M. Mees
1978-01-01
Dispatching of firefighting resources requires instantaneous or precalculated decisions. A FORTRAN computer program has been developed that can provide a list of resources in order of computed arrival time for initial attack on a fire. The program requires an accurate description of the existing road system and a list of all resources available on a planning unit....
NASA Astrophysics Data System (ADS)
Stevens, Rick
2008-07-01
The fourth annual Scientific Discovery through Advanced Computing (SciDAC) Conference was held June 13-18, 2008, in Seattle, Washington. The SciDAC conference series is the premier communitywide venue for presentation of results from the DOE Office of Science's interdisciplinary computational science program. Started in 2001 and renewed in 2006, the DOE SciDAC program is the country's - and arguably the world's - most significant interdisciplinary research program supporting the development of advanced scientific computing methods and their application to fundamental and applied areas of science. SciDAC supports computational science across many disciplines, including astrophysics, biology, chemistry, fusion sciences, and nuclear physics. Moreover, the program actively encourages the creation of long-term partnerships among scientists focused on challenging problems and computer scientists and applied mathematicians developing the technology and tools needed to address those problems. The SciDAC program has played an increasingly important role in scientific research by allowing scientists to create more accurate models of complex processes, simulate problems once thought to be impossible, and analyze the growing amount of data generated by experiments. To help further the research community's ability to tap into the capabilities of current and future supercomputers, Under Secretary for Science, Raymond Orbach, launched the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program in 2003. The INCITE program was conceived specifically to seek out computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. The program encourages proposals from universities, other research institutions, and industry. During the first two years of the INCITE program, 10 percent of the resources at NERSC were allocated to INCITE awardees. 
However, demand for supercomputing resources far exceeded available systems; and in 2003, the Office of Science identified increasing computing capability by a factor of 100 as the second priority on its Facilities of the Future list. The goal was to establish leadership-class computing resources to support open science. As a result of a peer reviewed competition, the first leadership computing facility was established at Oak Ridge National Laboratory in 2004. A second leadership computing facility was established at Argonne National Laboratory in 2006. This expansion of computational resources led to a corresponding expansion of the INCITE program. In 2008, Argonne, Lawrence Berkeley, Oak Ridge, and Pacific Northwest national laboratories all provided resources for INCITE. By awarding large blocks of computer time on the DOE leadership computing facilities, the INCITE program enables the largest-scale computations to be pursued. In 2009, INCITE will award over half a billion node-hours of time. The SciDAC conference celebrates progress in advancing science through large-scale modeling and simulation. Over 350 participants attended this year's talks, poster sessions, and tutorials, spanning the disciplines supported by DOE. While the principal focus was on SciDAC accomplishments, this year's conference also included invited presentations and posters from DOE INCITE awardees. Another new feature in the SciDAC conference series was an electronic theater and video poster session, which provided an opportunity for the community to see over 50 scientific visualizations in a venue equipped with many high-resolution large-format displays. 
To highlight the growing international interest in petascale computing, this year's SciDAC conference included a keynote presentation by Herman Lederer from the Max Planck Institut, one of the leaders of the DEISA (Distributed European Infrastructure for Supercomputing Applications) project and a member of the PRACE consortium, Europe's main petascale project. We also heard excellent talks from several European groups, including Laurent Gicquel of CERFACS, who spoke on `Large-Eddy Simulations of Turbulent Reacting Flows of Real Burners: Status and Challenges', and Jean-Francois Hamelin from EDF, who presented a talk on `Getting Ready for Petaflop Capacities and Beyond: A Utility Perspective'. Two other compelling addresses gave attendees a glimpse into the future. Tomas Diaz de la Rubia of Lawrence Livermore National Laboratory spoke on a vision for a fusion/fission hybrid reactor known as the `LIFE Engine' and discussed some of the materials and modeling challenges that need to be overcome to realize the vision for a 1000-year greenhouse-gas-free power source. Dan Reed from Microsoft gave a capstone talk on the convergence of technology, architecture, and infrastructure for cloud computing, data-intensive computing, and exascale computing (10^18 flops/sec). High-performance computing is making rapid strides. The SciDAC community's computational resources are expanding dramatically. In the summer of 2008 the first general-purpose petascale system (the IBM Cell-based RoadRunner at Los Alamos National Laboratory) was recognized in the Top 500 list of fastest machines, heralding the dawn of the petascale era. The DOE's leadership computing facility at Argonne reached number three on the Top 500 and is at the moment the most capable open science machine, based on an IBM BG/P system with a peak performance of over 550 teraflops/sec. Later this year Oak Ridge is expected to deploy a 1 petaflops/sec Cray XT system.
And even before the scientific community has had an opportunity to make significant use of petascale systems, the computer science research community is forging ahead with ideas and strategies for development of systems that may by the end of the next decade sustain exascale performance. Several talks addressed barriers to, and strategies for, achieving exascale capabilities. The last day of the conference was devoted to tutorials hosted by Microsoft Research at a new conference facility in Redmond, Washington. Over 90 people attended the tutorials, which covered topics ranging from an introduction to BG/P programming to advanced numerical libraries. The SciDAC and INCITE programs and the DOE Office of Advanced Scientific Computing Research core program investments in applied mathematics, computer science, and computational and networking facilities provide a nearly optimum framework for advancing computational science for DOE's Office of Science. At a broader level this framework also is benefiting the entire American scientific enterprise. As we look forward, it is clear that computational approaches will play an increasingly significant role in addressing challenging problems in basic science, energy, and environmental research. It takes many people to organize and support the SciDAC conference, and I would like to thank as many of them as possible. The backbone of the conference is the technical program; and the task of selecting, vetting, and recruiting speakers is the job of the organizing committee. I thank the members of this committee for all the hard work and the many tens of conference calls that enabled a wonderful program to be assembled. 
This year the following people served on the organizing committee: Jim Ahrens, LANL; David Bader, LLNL; Bryan Barnett, Microsoft; Peter Beckman, ANL; Vincent Chan, GA; Jackie Chen, SNL; Lori Diachin, LLNL; Dan Fay, Microsoft; Ian Foster, ANL; Mark Gordon, Ames; Mohammad Khaleel, PNNL; David Keyes, Columbia University; Bob Lucas, University of Southern California; Tony Mezzacappa, ORNL; Jeff Nichols, ORNL; David Nowak, ANL; Michael Papka, ANL; Thomas Schultess, ORNL; Horst Simon, LBNL; David Skinner, LBNL; Panagiotis Spentzouris, Fermilab; Bob Sugar, UCSB; and Kathy Yelick, LBNL. I owe a special thanks to Mike Papka and Jim Ahrens for handling the electronic theater. I also thank all those who submitted videos. It was a highly successful experiment. Behind the scenes an enormous amount of work is required to make a large conference go smoothly. First I thank Cheryl Zidel for her tireless efforts as organizing committee liaison and posters chair and, in general, handling all of my end of the program and keeping me calm. I also thank Gail Pieper for her work in editing the proceedings, Beth Cerny Patino for her work on the Organizing Committee website and electronic theater, and Ken Raffenetti for his work in keeping that website working. Jon Bashor and John Hules did an excellent job in handling conference communications. I thank Caitlin Youngquist for the striking graphic design; Dan Fay for tutorials arrangements; and Lynn Dory, Suzanne Stevenson, Sarah Pebelske and Sarah Zidel for on-site registration and conference support. We all owe Yeen Mankin an extra-special thanks for choosing the hotel, handling contracts, arranging menus, securing venues, and reassuring the chair that everything was under control. We are pleased to have obtained corporate sponsorship from Cray, IBM, Intel, HP, and SiCortex. I thank all the speakers and panel presenters. 
I also thank the former conference chairs Tony Mezzacappa, Bill Tang, and David Keyes, who were never far away for advice and encouragement. Finally, I offer my thanks to Michael Strayer, without whose leadership, vision, and persistence the SciDAC program would not have come into being and flourished. I am honored to be part of his program and his friend. Rick Stevens Seattle, Washington July 18, 2008
ERIC Educational Resources Information Center
Batt, Russell H., Ed.
1990-01-01
Four applications of microcomputers in the chemical laboratory are presented. Included are "Mass Spectrometer Interface with an Apple II Computer," "Interfacing the Spectronic 20 to a Computer," "A pH-Monitoring and Control System for Teaching Laboratories," and "A Computer-Aided Optical Melting Point Device." Software, instrumentation, and uses are…
Modeling the Virtual Machine Launching Overhead under Fermicloud
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele; Wu, Hao; Ren, Shangping
FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module enables FermiCloud, when more computational resources are needed, to automatically launch virtual machines on available resources such as public clouds. One of the main challenges in developing the cloud bursting module is deciding when and where to launch a VM so that all resources are used effectively and efficiently and system performance is optimized. However, based on FermiCloud's operational data, the VM launching overhead is not constant: it varies with physical resource (CPU, memory, I/O device) utilization at the time a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops such a reference model based on operational data obtained on FermiCloud and uses it to guide the cloud bursting process.
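A minimal sketch of what such a launch-overhead reference model might look like, assuming a simple linear dependence on host utilization. The coefficients and host data below are invented for illustration, not FermiCloud operational values, and the real model in the paper need not be linear.

```python
# Predicted VM launch overhead as a function of CPU, memory, and I/O
# utilization at launch time. Coefficients are assumed, not measured.
def predicted_launch_overhead(cpu_util, mem_util, io_util,
                              base_s=30.0, cpu_w=40.0,
                              mem_w=25.0, io_w=60.0):
    """Overhead in seconds; utilizations are fractions in [0, 1]."""
    return base_s + cpu_w * cpu_util + mem_w * mem_util + io_w * io_util

# Cloud bursting decision: launch on the host with the smallest
# predicted overhead.
hosts = {
    "host1": dict(cpu_util=0.9, mem_util=0.7, io_util=0.8),
    "host2": dict(cpu_util=0.3, mem_util=0.4, io_util=0.1),
}
best_host = min(hosts, key=lambda h: predicted_launch_overhead(**hosts[h]))
```

In practice the coefficients would be fit to the operational samples the abstract mentions, rather than chosen by hand.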
Lawrence Berkeley Laboratory Institutional Plan, FY 1993--1998
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-10-01
The FY 1993--1998 Institutional Plan provides an overview of the Lawrence Berkeley Laboratory mission, strategic plan, scientific initiatives, research programs, environment and safety program plans, educational and technology transfer efforts, human resources, and facilities needs. The Strategic Plan section identifies long-range conditions that can influence the Laboratory, potential research trends, and several management implications. The Initiatives section identifies potential new research programs that represent major long-term opportunities for the Laboratory and the resources required for their implementation. The Scientific and Technical Programs section summarizes current programs and potential changes in research program activity. The Environment, Safety, and Health section describes the management systems and programs underway at the Laboratory to protect the environment, the public, and the employees. The Technology Transfer and Education programs section describes current and planned programs to enhance the nation's scientific literacy and human infrastructure and to improve economic competitiveness. The Human Resources section identifies LBL staff composition and development programs. The section on Site and Facilities discusses resources required to sustain and improve the physical plant and its equipment. The Resource Projections are estimates of required budgetary authority for the Laboratory's ongoing research programs. The plan is an institutional management report for integration with the Department of Energy's strategic planning activities that is developed through an annual planning process. The plan identifies technical and administrative directions in the context of the National Energy Strategy and the Department of Energy's program planning initiatives. Preparation of the plan is coordinated by the Office for Planning and Development from information contributed by the Laboratory's scientific and support divisions.
ERIC Educational Resources Information Center
Falkner, Katrina; Vivian, Rebecca
2015-01-01
To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…
Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation
NASA Technical Reports Server (NTRS)
Stocker, John C.; Golomb, Andrew M.
2011-01-01
Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
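The two-part framework above (stochastic demand per request type plus constrained computing resources) can be illustrated with a toy discrete-event simulation. The request mix, arrival rate, service times, and pool sizes below are made-up parameters, not the paper's model.

```python
# Toy discrete-event simulation: exponential arrivals, a demand mix of
# request types, and a fixed pool of servers tracked by next-free time.
import heapq
import random

def simulate(n_requests, n_servers, service_s, seed=1):
    """Mean queueing delay (seconds) over n_requests."""
    random.seed(seed)
    free_at = [0.0] * n_servers           # next-free time per server
    heapq.heapify(free_at)
    t, total_wait = 0.0, 0.0
    for _ in range(n_requests):
        t += random.expovariate(1 / 5.0)  # arrivals, mean 5 s apart
        server_free = heapq.heappop(free_at)
        start = max(t, server_free)       # wait if all servers busy
        total_wait += start - t
        kind = random.choice(["web", "batch"])   # demand by request type
        heapq.heappush(free_at, start + service_s[kind])
    return total_wait / n_requests

service = {"web": 2.0, "batch": 20.0}
mean_wait_1 = simulate(500, 1, service)   # under-provisioned pool
mean_wait_4 = simulate(500, 4, service)   # provisioned pool
```

Sweeping the pool size in a loop is the simulation analogue of the provisioning analysis the abstract describes: it shows where added resources stop reducing delay.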
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Lonnie Moore, the Computer Security Manager, CSSM/CPPM at Lawrence Livermore National Laboratory (LLNL), and Gale Warshawsky, the Coordinator for Computer Security Education & Awareness at LLNL, wanted to share topics such as computer ethics, software piracy, privacy issues, and protecting information in a format that would capture and hold an audience's attention. Four Computer Security Short Subject videos were produced, ranging from 1-3 minutes each. These videos are very effective education and awareness tools that can be used to generate discussions about computer security concerns and good computing practices. Leaders may incorporate the Short Subjects into presentations. After talking about a subject area, one of the Short Subjects may be shown to highlight that subject matter. Another method for sharing them could be to show a Short Subject first and then lead a discussion about its topic. The cast of characters and a bit of information about their personalities in the LLNL Computer Security Short Subjects is included in this report.
Science & Technology Review June 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poyneer, L A
2012-04-20
This month's issue has the following articles: (1) A New Era in Climate System Analysis - Commentary by William H. Goldstein; (2) Seeking Clues to Climate Change - By comparing past climate records with results from computer simulations, Livermore scientists can better understand why Earth's climate has changed and how it might change in the future; (3) Finding and Fixing a Supercomputer's Faults - Livermore experts have developed innovative methods to detect hardware faults in supercomputers and help applications recover from errors that do occur; (4) Targeting Ignition - Enhancements to the cryogenic targets for National Ignition Facility experiments are furthering work to achieve fusion ignition with energy gain; (5) Neural Implants Come of Age - A new generation of fully implantable, biocompatible neural prosthetics offers hope to patients with neurological impairment; and (6) Incubator Busy Growing Energy Technologies - Six collaborations with industrial partners are using the Laboratory's high-performance computing resources to find solutions to urgent energy-related problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, D.; Yoshimura, A.; Butler, D.
This report describes the results of a Cooperative Research and Development Agreement between Sandia National Laboratories and Kaiser Permanente Southern California to develop a prototype computer model of Kaiser Permanente's health care delivery system. As a discrete event simulation, SimHCO models, for each of 100,000 patients, the progression of disease, individual resource usage, and patient choices in a competitive environment. SimHCO is implemented in the object-oriented programming language C², stressing reusable knowledge and reusable software components. The versioned implementation of SimHCO showed that the object-oriented framework allows the program to grow in complexity in an incremental way. Furthermore, timing calculations showed that SimHCO runs in a reasonable time on typical workstations, and that a second phase model will scale proportionally and run within the system constraints of contemporary computer technology.
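The per-patient progression-and-resource-usage idea can be sketched as a tiny Markov model driven month by month. The three states, transition probabilities, and visit weights below are purely hypothetical stand-ins, not SimHCO's actual clinical content, and the cohort is scaled down from 100,000 to keep the sketch fast.

```python
import random

# Hypothetical three-state disease model; probabilities are per simulated
# month and are illustrative only.
TRANSITIONS = {
    "healthy": [("healthy", 0.97), ("chronic", 0.03)],
    "chronic": [("chronic", 0.90), ("acute", 0.07), ("healthy", 0.03)],
    "acute":   [("chronic", 0.60), ("acute", 0.40)],
}
VISITS_PER_MONTH = {"healthy": 0, "chronic": 1, "acute": 3}  # resource usage

def simulate_patient(months, rng):
    """Walk one patient through the chain, accumulating clinic visits."""
    state, visits = "healthy", 0
    for _ in range(months):
        visits += VISITS_PER_MONTH[state]
        r, acc = rng.random(), 0.0
        for nxt, p in TRANSITIONS[state]:
            acc += p
            if r < acc:
                state = nxt
                break
    return visits

rng = random.Random(7)
# Aggregate two years of demand over a small cohort.
demand = sum(simulate_patient(24, rng) for _ in range(1000))
```

Summing per-patient trajectories like this is what lets a simulation translate disease dynamics into aggregate resource demand.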
Wake characteristics of wind turbines in utility-scale wind farms
NASA Astrophysics Data System (ADS)
Yang, Xiaolei; Foti, Daniel; Sotiropoulos, Fotis
2017-11-01
The dynamics of turbine wakes is affected by turbine operating conditions, ambient atmospheric turbulent flows, and wakes from upwind turbines. Investigations of the wake from a single turbine have been extensively carried out in the literature. Studies on the wake dynamics in utility-scale wind farms are relatively limited. In this work, we employ large-eddy simulation with an actuator surface or actuator line model for turbine blades to investigate the wake dynamics in utility-scale wind farms. Simulations of three wind farms, i.e., the Horns Rev wind farm in Denmark, Pleasant Valley wind farm in Minnesota, and the Vantage wind farm in Washington are carried out. The computed power shows a good agreement with measurements. Analysis of the wake dynamics in the three wind farms is underway and will be presented at the conference. This work was supported by Xcel Energy (RD4-13). The computational resources were provided by National Renewable Energy Laboratory.
Nickerson, Naomi H; Li, Ying; Benjamin, Simon C
2013-01-01
A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.
Scalar transport across the turbulent/non-turbulent interface in jets: Schmidt number effects
NASA Astrophysics Data System (ADS)
Silva, Tiago S.; B. da Silva, Carlos; Idmec Team
2016-11-01
The dynamics of a passive scalar field near a turbulent/non-turbulent interface (TNTI) is analysed through direct numerical simulations (DNS) of turbulent planar jets, with Reynolds numbers in the range 142 ≤ Reλ ≤ 246 and Schmidt numbers 0.07 ≤ Sc ≤ 7. The steepness of the scalar gradient, as observed from conditional profiles near the TNTI, increases with the Schmidt number. Conditional scalar gradient budgets show that for low and moderate Schmidt numbers a diffusive superlayer emerges at the TNTI, where scalar gradient diffusion dominates while production is negligible. For low Schmidt numbers the growth of the turbulent front is governed by molecular diffusion, whereas scalar gradient convection is negligible. The authors acknowledge the Laboratory for Advanced Computing at University of Coimbra for providing HPC computing and consulting resources that have contributed to the research results reported within this paper. URL http://www.lca.uc.pt.
NASA Technical Reports Server (NTRS)
Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.; Stiles, J. A.; Frost, F. S.; Shanmugam, K. S.; Smith, S. A.; Narayanan, V.; Holtzman, J. C. (Principal Investigator)
1982-01-01
Computer-generated radar simulations and mathematical geologic terrain models were used to establish the optimum radar sensor operating parameters for geologic research. An initial set of mathematical geologic terrain models was created for three basic landforms and families of simulated radar images were prepared from these models for numerous interacting sensor, platform, and terrain variables. The tradeoffs between the various sensor parameters and the quantity and quality of the extractable geologic data were investigated as well as the development of automated techniques of digital SAR image analysis. Initial work on a texture analysis of SEASAT SAR imagery is reported. Computer-generated radar simulations are shown for combinations of two geologic models and three SAR angles of incidence.
Building laboratory capacity to support HIV care in Nigeria: Harvard/APIN PEPFAR, 2004-2012.
Hamel, Donald J; Sankalé, Jean-Louis; Samuels, Jay Osi; Sarr, Abdoulaye D; Chaplin, Beth; Ofuche, Eke; Meloni, Seema T; Okonkwo, Prosper; Kanki, Phyllis J
From 2004-2012, the Harvard/AIDS Prevention Initiative in Nigeria, funded through the US President's Emergency Plan for AIDS Relief programme, scaled up HIV care and treatment services in Nigeria. We describe the methodologies and collaborative processes developed to improve laboratory capacity significantly in a resource-limited setting. These methods were implemented at 35 clinic and laboratory locations. Systems were established and modified to optimise numerous laboratory processes. These included strategies for clinic selection and management, equipment and reagent procurement, supply chains, laboratory renovations, equipment maintenance, electronic data management, quality development programmes and trainings. Over the eight-year programme, laboratories supported 160 000 patients receiving HIV care in Nigeria, delivering over 2.5 million test results, including regular viral load quantitation. External quality assurance systems were established for CD4+ cell count enumeration, blood chemistries and viral load monitoring. Laboratory equipment platforms were improved and standardised and use of point-of-care analysers was expanded. Laboratory training workshops supported laboratories toward increasing staff skills and improving overall quality. Participation in a World Health Organisation-led African laboratory quality improvement system resulted in significant gains in quality measures at five laboratories. Targeted implementation of laboratory development processes, during simultaneous scale-up of HIV treatment programmes in a resource-limited setting, can elicit meaningful gains in laboratory quality and capacity. Systems to improve the physical laboratory environment, develop laboratory staff, create improvements to reduce costs and increase quality are available for future health and laboratory strengthening programmes. We hope that the strategies employed may inform and encourage the development of other laboratories in resource-limited settings.
Circumscribing Circumscription. A Guide to Relevance and Incompleteness,
1985-10-01
This MIT Artificial Intelligence Laboratory A.I. Memo examines circumscription and other rules of conjecture that account for resource limitations. The report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology; support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Robert K.
Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab) is the oldest of America's national laboratories and has been a leader in science and engineering technology for more than 65 years, serving as a powerful resource to meet US national needs. As a multi-program Department of Energy laboratory, Berkeley Lab is dedicated to performing leading edge research in the biological, physical, materials, chemical, energy, environmental and computing sciences. Ernest Orlando Lawrence, the Lab's founder and the first of its nine Nobel prize winners, invented the cyclotron, which led to a Golden Age of particle physics and revolutionary discoveries about the nature of the universe. To this day, the Lab remains a world center for accelerator and detector innovation and design. The Lab is the birthplace of nuclear medicine and the cradle of invention for medical imaging. In the field of heart disease, Lab researchers were the first to isolate lipoproteins and the first to determine that the ratio of high density to low density lipoproteins is a strong indicator of heart disease risk. The demise of the dinosaurs--the revelation that they had been killed off by a massive comet or asteroid that had slammed into the Earth--was a theory developed here. The invention of the chemical laser, the unlocking of the secrets of photosynthesis--this is a short preview of the legacy of this Laboratory.
1999-11-10
Space Vacuum Epitaxy Center works with industry and government laboratories to develop advanced thin film materials and devices by utilizing the most abundant free resource in orbit: the vacuum of space. SVEC, along with its affiliates, is developing semiconductor mid-IR lasers for environmental sensing and defense applications, high efficiency solar cells for space satellite applications, oxide thin films for computer memory applications, and ultra-hard thin film coatings for wear resistance in micro devices. Performance of these vacuum deposited thin film materials and devices can be enhanced by using the ultra-vacuum of space for which SVEC has developed the Wake Shield Facility---a free flying research platform dedicated to thin film materials development in space.
Applications of digital image processing techniques to problems of data registration and correlation
NASA Technical Reports Server (NTRS)
Green, W. B.
1978-01-01
An overview is presented of the evolution of the computer configuration at JPL's Image Processing Laboratory (IPL). The development of techniques for the geometric transformation of digital imagery is discussed and consideration is given to automated and semiautomated image registration, and the registration of imaging and nonimaging data. The increasing complexity of image processing tasks at IPL is illustrated with examples of various applications from the planetary program and earth resources activities. It is noted that the registration of existing geocoded data bases with Landsat imagery will continue to be important if the Landsat data is to be of genuine use to the user community.
Operating Dedicated Data Centers - Is It Cost-Effective?
NASA Astrophysics Data System (ADS)
Ernst, M.; Hogue, R.; Hollowell, C.; Strecker-Kellog, W.; Wong, A.; Zaytsev, A.
2014-06-01
The advent of cloud computing centres such as Amazon's EC2 and Google's Computing Engine has elicited comparisons with dedicated computing clusters. Discussions on appropriate usage of cloud resources (both academic and commercial) and costs have ensued. This presentation discusses a detailed analysis of the costs of operating and maintaining the RACF (RHIC and ATLAS Computing Facility) compute cluster at Brookhaven National Lab and compares them with the cost of cloud computing resources under various usage scenarios. An extrapolation of likely future cost effectiveness of dedicated computing resources is also presented.
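The core of such a cost comparison is simple arithmetic: dedicated hardware is billed whether or not it is busy, while cloud capacity is pay per use, so the cost per *used* core-hour depends on utilization. The hourly rates below are placeholders, not RACF or Amazon figures.

```python
def cost_per_used_core_hour(hourly_cost, utilization):
    """Spend attributed to each used core-hour; idle time is wasted money."""
    return hourly_cost / utilization

# Placeholder rates: a dedicated core costs less per hour but is only
# 90% utilized; cloud capacity is billed only while in use.
dedicated = cost_per_used_core_hour(hourly_cost=0.03, utilization=0.90)
cloud = cost_per_used_core_hour(hourly_cost=0.10, utilization=1.00)

# Utilization below which the cloud rate wins for this pair of prices.
break_even = 0.03 / 0.10
```

For this pair of rates, dedicated capacity is cheaper whenever its utilization stays above 30%, which is why sustained-load facilities and bursty workloads land on opposite sides of the comparison.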
Benn, Peter A; Makowski, Gregory S; Egan, James F X; Wright, Dave
2006-11-01
Analytical error affects 2nd-trimester maternal serum screening for Down syndrome risk estimation. We analyzed the between-laboratory reproducibility of risk estimates from 2 laboratories. Laboratory 1 used Bayer ACS180 immunoassays for alpha-fetoprotein (AFP) and human chorionic gonadotropin (hCG), Diagnostic Systems Laboratories (DSL) RIA for unconjugated estriol (uE3), and DSL enzyme immunoassay for inhibin-A (INH-A). Laboratory 2 used Beckman immunoassays for AFP, hCG, and uE3, and DSL enzyme immunoassay for INH-A. Analyte medians were separately established for each laboratory. We used the same computational algorithm for all risk calculations, and we used Monte Carlo methods for computer modeling. For 462 samples tested, risk figures from the 2 laboratories differed >2-fold for 44.7%, >5-fold for 7.1%, and >10-fold for 1.7%. Between-laboratory differences in analytes were greatest for uE3 and INH-A. The screen-positive rates were 9.3% for laboratory 1 and 11.5% for laboratory 2, with a significant difference in the patients identified as screen-positive vs screen-negative (McNemar test, P<0.001). Computer modeling confirmed the large between-laboratory risk differences. Differences in performance of assays and laboratory procedures can have a large effect on patient-specific risks. Screening laboratories should minimize test imprecision and ensure that each assay performs in a manner similar to that assumed in the risk computational algorithm.
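How assay imprecision propagates into fold-differences in risk can be illustrated with a toy Monte Carlo model: two labs measure the same specimen with independent error, and each measurement is pushed through the same Gaussian likelihood-ratio on log MoM. All parameters below are invented for illustration; this is not the clinical screening algorithm used in the study.

```python
import math
import random

def likelihood_ratio(log_mom, affected_mean=0.3, sd=0.25):
    """Gaussian likelihood ratio on log10 MoM for one analyte; the mean
    shift and SD are illustrative placeholders, not clinical parameters."""
    def unnormalized_pdf(x, mu):
        return math.exp(-((x - mu) ** 2) / (2.0 * sd * sd))
    return unnormalized_pdf(log_mom, affected_mean) / unnormalized_pdf(log_mom, 0.0)

rng = random.Random(3)
trials, discordant = 5000, 0
for _ in range(trials):
    true_value = rng.gauss(0.0, 0.25)         # specimen's true log10 MoM
    lab1 = true_value + rng.gauss(0.0, 0.10)  # independent measurement error
    lab2 = true_value + rng.gauss(0.0, 0.10)
    ratio = likelihood_ratio(lab1) / likelihood_ratio(lab2)
    if ratio > 2.0 or ratio < 0.5:            # >2-fold risk disagreement
        discordant += 1
frac = discordant / trials
```

Even modest per-lab measurement noise produces a substantial fraction of >2-fold risk discrepancies, which is the qualitative effect the study quantified.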
Computing the Envelope for Stepwise-Constant Resource Allocations
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Clancy, Daniel (Technical Monitor)
2002-01-01
Computing tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource consuming and producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. A staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. Each stage has the same computational complexity as solving a maximum flow problem on the entire flow network. This makes the method computationally feasible and promising for use in the inner loop of flexible-time scheduling algorithms.
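For a toy instance, the envelope can be checked by brute force over all feasible event times; the staged maximum-flow algorithm described above computes the same bound far more efficiently. The event windows and resource deltas below are made up, and the precedence links that the paper's temporal network would impose are omitted for brevity.

```python
from itertools import product

# Toy flexible-time events: (earliest, latest, resource delta).
events = [(0, 2, +3), (1, 3, -3), (1, 4, +2), (3, 5, -2)]

def upper_envelope(events, horizon):
    """Brute-force tightest upper bound on the resource level at each time."""
    env = [float("-inf")] * (horizon + 1)
    windows = [range(lo, hi + 1) for lo, hi, _ in events]
    for schedule in product(*windows):        # every feasible timing
        for t in range(horizon + 1):
            level = sum(delta for when, (_, _, delta) in zip(schedule, events)
                        if when <= t)
            env[t] = max(env[t], level)
    return env

env = upper_envelope(events, 5)
```

The envelope peaks when both producers can fire before either consumer, which is exactly the worst-case bound a scheduler needs to respect.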
Tri-Laboratory Linux Capacity Cluster 2007 SOW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seager, M
2007-03-22
The Advanced Simulation and Computing (ASC) Program (formerly known as the Accelerated Strategic Computing Initiative, ASCI) has led the world in capability computing for the last ten years. Capability computing is defined as a world-class platform (in the Top10 of the Top500.org list) with scientific simulations running at scale on the platform. Example systems are ASCI Red, Blue-Pacific, Blue-Mountain, White, Q, RedStorm, and Purple. ASC applications have scaled to multiple thousands of CPUs and accomplished a long list of mission milestones on these ASC capability platforms. However, the computing demands of the ASC and Stockpile Stewardship programs also include a vast number of smaller scale runs for day-to-day simulations. Indeed, every 'hero' capability run requires many hundreds to thousands of much smaller runs in preparation and post processing activities. In addition, there are many aspects of the Stockpile Stewardship Program (SSP) that can be directly accomplished with these so-called 'capacity' calculations. The need for capacity is now so great within the program that it is increasingly difficult to allocate the computer resources required by the larger capability runs. To rectify the current 'capacity' computing resource shortfall, the ASC program has allocated a large portion of the overall ASC platforms budget to 'capacity' systems. In addition, within the next five to ten years the Life Extension Programs (LEPs) for major nuclear weapons systems must be accomplished. These LEPs and other SSP programmatic elements will further drive the need for capacity calculations and hence 'capacity' systems as well as future ASC capability calculations on 'capability' systems. To respond to this new workload analysis, the ASC program will be making a large sustained strategic investment in these capacity systems over the next ten years, starting with the United States Government Fiscal Year 2007 (GFY07).
However, given the growing need for 'capability' systems as well, the budget demands are extreme, and new, more cost effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents the ASC program's first investment vehicle for these capacity systems. It also represents a new strategy for quickly building, fielding and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux Cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.
The Workstation Approach to Laboratory Computing
Crosby, P.A.; Malachowski, G.C.; Hall, B.R.; Stevens, V.; Gunn, B.J.; Hudson, S.; Schlosser, D.
1985-01-01
There is a need for a Laboratory Workstation which specifically addresses the problems associated with computing in the scientific laboratory. A workstation based on the IBM PC architecture and including a front end data acquisition system which communicates with a host computer via a high speed communications link; a new graphics display controller with hardware window management and window scrolling; and an integrated software package is described.
Maokola, W; Willey, B A; Shirima, K; Chemba, M; Armstrong Schellenberg, J R M; Mshinda, H; Alonso, P; Tanner, M; Schellenberg, D
2011-06-01
To describe and evaluate the use of handheld computers for the management of Health Management Information System data. Electronic data capture took place in 11 sentinel health centres in rural southern Tanzania. Information from children attending the outpatient department (OPD) and the Expanded Program on Immunization vaccination clinic was captured by trained local school-leavers, supported by monthly supervision visits. Clinical data included malaria blood slides and haemoglobin colour scale results. Quality of captured data was assessed using double data entry. Malaria blood slide results from health centre laboratories were compared to those from the study's quality control laboratory. The system took 5 months to implement, and few staffing or logistical problems were encountered. Over the following 12 months (April 2006-March 2007), 7056 attendances were recorded in 9880 infants aged 2-11 months, 50% with clinical malaria. Monthly supervision visits highlighted incomplete recording of information between OPD and laboratory records, where on average 40% of laboratory visits were missing the record of their corresponding OPD visit. Quality of microscopy from health facility laboratories was lower overall than that from the quality assurance laboratory. Electronic capture of HMIS data was rapidly and successfully implemented in this resource-poor setting. Electronic capture alone did not resolve issues of data completeness, accuracy and reliability, which are essential for management, monitoring and evaluation; suggestions to monitor and improve data quality are made. © 2011 Blackwell Publishing Ltd.
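Double data entry of the kind used here for quality assessment reduces to a field-by-field comparison of two independent keyings of the same records. The record layout below is a hypothetical sketch, not the study's actual schema.

```python
def double_entry_discrepancies(first_pass, second_pass):
    """Compare two independent keyings of the same records; return the
    (record_id, field, first_value, second_value) tuples to adjudicate."""
    issues = []
    for rec_id, rec1 in first_pass.items():
        rec2 = second_pass.get(rec_id, {})
        for field, v1 in rec1.items():
            v2 = rec2.get(field)
            if v1 != v2:
                issues.append((rec_id, field, v1, v2))
    return issues

# Hypothetical records: a haemoglobin reading and a malaria slide result.
first = {"A01": {"hb": "10.2", "slide": "pos"}}
second = {"A01": {"hb": "10.2", "slide": "neg"}}
issues = double_entry_discrepancies(first, second)
```

Each flagged tuple is then resolved against the paper forms, which is how double entry catches keying errors that single entry silently accepts.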
Distributed Accounting on the Grid
NASA Technical Reports Server (NTRS)
Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.
2001-01-01
By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the Computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.
Collaborative workbench for cyberinfrastructure to accelerate science algorithm development
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Maskey, M.; Kuo, K.; Lynnes, C.
2013-12-01
There are significant untapped resources for information and knowledge creation within the Earth Science community in the form of data, algorithms, services, analysis workflows or scripts, and the related knowledge about these resources. Despite the huge growth in social networking and collaboration platforms, these resources often reside on an investigator's workstation or in a laboratory and are rarely shared. A major reason for this is that there are very few scientific collaboration platforms, and those that exist typically require the use of a new set of analysis tools and paradigms to leverage the shared infrastructure. As a result, adoption of these collaborative platforms for science research is inhibited by the high cost to an individual scientist of switching from his or her own familiar environment and set of tools to a new environment and tool set. This presentation will describe an ongoing project developing an Earth Science Collaborative Workbench (CWB). The CWB approach will eliminate this barrier by augmenting a scientist's current research environment and tool set to allow him or her to easily share diverse data and algorithms. The CWB will leverage evolving technologies such as commodity computing and social networking to design an architecture for scalable collaboration that will support the emerging vision of an Earth Science Collaboratory. The CWB is being implemented on the robust and open source Eclipse framework and will be compatible with widely used scientific analysis tools such as IDL. The myScience Catalog built into CWB will capture and track metadata and provenance about data and algorithms for the researchers in a non-intrusive manner with minimal overhead. Seamless interfaces to multiple Cloud services will support sharing algorithms, data, and analysis results, as well as access to storage and computer resources. A Community Catalog will track the use of shared science artifacts and manage collaborations among researchers.
1991-06-01
Proceedings of The National Conference on Artificial Intelligence, pages 181-184, The American Association for Artificial Intelligence, Pittsburgh. Interim Report: Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent Executive Computer Communication. John Lyman and Carla J. Conaway, University of California at Los Angeles.
The Petascale Data Storage Institute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, Garth; Long, Darrell; Honeyman, Peter
2013-07-01
Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratories, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz.
Richardson, D
1997-12-01
This study compared student perceptions and learning outcomes of computer-assisted instruction against those of traditional didactic lectures. Components of Quantitative Circulatory Physiology (Biological Simulators) and Mechanical Properties of Active Muscle (Trinity Software) were used to teach regulation of tissue blood flow and muscle mechanics, respectively, in the course Medical Physiology. These topics were each taught, in part, by 1) standard didactic lectures, 2) computer-assisted lectures, and 3) a computer laboratory assignment. Subjective evaluation was derived from a questionnaire assessing student opinions of the effectiveness of each method. Objective evaluation consisted of comparing scores on examination questions generated from each method. On a 1-10 scale, effectiveness ratings were higher (P < 0.0001) for the didactic lectures (7.7) compared with either computer-assisted lecture (3.8) or computer laboratory (4.2) methods. A follow-up discussion with representatives from the class indicated that students did not perceive computer instruction as being time effective. However, examination scores from computer laboratory questions (94.3%) were significantly higher than those from either computer-assisted (89.9%; P < 0.025) or didactic (86.6%; P < 0.001) lectures. Thus computer laboratory instruction enhanced learning outcomes in medical physiology despite student perceptions to the contrary.
Cloud computing: a new business paradigm for biomedical information sharing.
Rosenthal, Arnon; Mork, Peter; Li, Maya Hao; Stanford, Jean; Koester, David; Reynolds, Patti
2010-04-01
We examine how the biomedical informatics (BMI) community, especially consortia that share data and applications, can take advantage of a new resource called "cloud computing". Clouds generally offer resources on demand. In most clouds, charges are pay per use, based on large farms of inexpensive, dedicated servers, sometimes supporting parallel computing. Substantial economies of scale potentially yield costs much lower than dedicated laboratory systems or even institutional data centers. Overall, even with conservative assumptions, for applications that are not I/O intensive and do not demand a fully mature environment, the numbers suggested that clouds can sometimes provide major improvements, and should be seriously considered for BMI. Methodologically, it was very advantageous to formulate analyses in terms of component technologies; focusing on these specifics enabled us to bypass the cacophony of alternative definitions (e.g., exactly what a cloud includes) and to analyze alternatives that employ some of the component technologies (e.g., an institution's data center). Relative analyses were another great simplifier. Rather than listing the absolute strengths and weaknesses of cloud-based systems (e.g., for security or data preservation), we focus on the changes from a particular starting point, e.g., individual lab systems. We often find a rough parity (in principle), but one needs to examine individual acquisitions--is a loosely managed lab moving to a well managed cloud, or a tightly managed hospital data center moving to a poorly safeguarded cloud? 2009 Elsevier Inc. All rights reserved.
Experience in using commercial clouds in CMS
NASA Astrophysics Data System (ADS)
Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration
2017-10-01
Historically, high energy physics computing has been performed on large, purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.
Experience in using commercial clouds in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauerdick, L.; Bockelman, B.; Dykstra, D.
Wells, I G; Cartwright, R Y; Farnan, L P
1993-12-15
The computing strategy in our laboratories evolved from research in Artificial Intelligence, and is based on powerful software tools running on high performance desktop computers with a graphical user interface. This allows most tasks to be regarded as design problems rather than implementation projects, and both rapid prototyping and an object-oriented approach to be employed during the in-house development and enhancement of the laboratory information systems. The practical application of this strategy is discussed, with particular reference to the system designer, the laboratory user and the laboratory customer. Routine operation covers five departments, and the systems are stable, flexible and well accepted by the users. Client-server computing, currently undergoing final trials, is seen as the key to further development, and this approach to Pathology computing has considerable potential for the future.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-30
... in physics, chemistry, mathematics, computer science, or engineering. Institutions should have a 4..., mathematics, computer science, or engineering with work experiences in laboratories or other settings...-0141-01] Professional Research Experience Program in Chemical Science and Technology Laboratory...
The Role of Hands-On Science Labs in Engaging the Next Generation of Space Explorers
NASA Astrophysics Data System (ADS)
Williams, Teresa A. J.
2002-01-01
Each country participating in the International Space Station (ISS) recognizes the importance of educating the coming generation about space and its opportunities. In 2001, the St. James School in downtown Houston, Texas was approached with a proposal to renovate an unused classroom and become involved with the "GLOBE" Program and other Internet-based international learning resources. This inner-city school willingly agreed to the program based on "hands-on" learning. One month after the room conversion, with ten computer terminals donated by area businesses and connectivity to the Internet established, the students began using the Global Learning and Observations to Benefit the Environment (GLOBE) program and the International Space Station (ISS) Program educational resources. The GLOBE program involves numerous scientific and technical agencies studying the Earth, whose goal is to provide educational resources to an international community of K-12 scientists. This project was conceived as a successor to the "Interactive Elementary Space Museum for the New Millennium," a space museum in a school corridor, but without the same type of budget. The laboratory is a collaboration involving area businesses, volunteers from the NASA/Johnson Space Center ISS Outreach Program, and students. This paper will outline the planning and operation of the school science laboratory project from the point of view of the school's interest and involvement and assess its success to date. It will consider the lessons learned by the participating school administrations in the management of the process and discuss some of the issues that can both promote and discourage school participation in such projects.
Computer Exercises in Systems and Fields Experiments
ERIC Educational Resources Information Center
Bacon, C. M.; McDougal, J. R.
1971-01-01
Laboratory activities give students an opportunity to interact with computers in modes ranging from remote terminal use in laboratory experimentation to the direct hands-on use of a small digital computer with disk memory and on-line plotter, and finally to the use of a large computer under closed-shop operation. (Author/TS)
NASA Technical Reports Server (NTRS)
Cohen, Jarrett
1999-01-01
Parallel computers built out of mass-market parts are cost-effectively performing data processing and simulation tasks. The Supercomputing (now known as "SC") series of conferences celebrated its 10th anniversary last November. While vendors have come and gone, the dominant paradigm for tackling big problems is still a shared-resource, commercial supercomputer. Growing numbers of users needing a cheaper or dedicated-access alternative are building their own supercomputers out of mass-market parts. Such machines are generally called Beowulf-class systems, after the 11th-century epic. This modern-day Beowulf story began in 1994 at NASA's Goddard Space Flight Center, a laboratory for the Earth and space sciences, where computing managers threw down a gauntlet: develop a $50,000 gigaFLOPS workstation for processing satellite data sets. Soon, Thomas Sterling and Don Becker were working on the Beowulf concept at the Universities Space Research Association (USRA)-run Center of Excellence in Space Data and Information Sciences (CESDIS). Beowulf clusters mix three primary ingredients: commodity personal computers or workstations, low-cost Ethernet networks, and the open-source Linux operating system. One of the larger Beowulfs is Goddard's Highly-parallel Integrated Virtual Environment, or HIVE for short.
NASA Astrophysics Data System (ADS)
Washington-Allen, R. A.; Fatoyinbo, T. E.; Ribeiro, N. S.; Shugart, H. H.; Therrell, M. D.; Vaz, K. T.; von Schill, L.
2006-12-01
A workshop titled "Environmental Remote Sensing for Natural Resources Management" was held from June 12-23, 2006 at Eduardo Mondlane University in Maputo, Mozambique. The workshop was initiated through an invitation and pre-course evaluation form sent to interested NGOs, universities, and government organizations. The purpose of the workshop was to provide training to interested professionals, graduate students, faculty and researchers at Mozambican institutions on the research and practical uses of remote sensing for natural resource management. The course had 24 participants, predominantly professionals in remote sensing and GIS from various NGOs, governmental and academic institutions in Mozambique. The course taught remote sensing from an ecological perspective; specifically, it focused on the application of new remote sensing technology [the Shuttle Radar Topography Mission (SRTM) C-band radar data] to carbon accounting research in Miombo woodlands and Mangrove forests. The 2-week course was free to participants and consisted of lectures, laboratories, and a field trip to the mangrove forests of Inhaca Island, Maputo. The field trip provided training in the use of forest inventory techniques in support of remote sensing studies. Specifically, the field workshop centered on use of Global Positioning Systems (GPS) and collection of forest inventory data on tree height, structure [leaf area index (LAI)], and productivity. Productivity studies were enhanced with the teaching of introductory dendrochronology, including sample collection of tree rings from four different mangrove species.
Students were provided with all course materials, including a DVD that contained satellite data (e.g., Landsat and SRTM imagery), ancillary data, lectures, exercises, and remote sensing publications used in the course, a CD from the Environmental Protection Agency's Environmental Photographic Interpretation Center (EPA-EPIC) program to teach remote sensing, and data CDs from NASA's SAFARI 2000 field campaign. Nineteen participants evaluated the effectiveness of the course with regard to the lectures, instructors, and field trip. Future workshops should focus more on the individual projects that students are engaged with in their jobs, replace the laboratories' computers with workstations geared toward computation-intensive image processing software, and purchase field remote sensing instrumentation for practical exercises.
Study on the application of mobile internet cloud computing platform
NASA Astrophysics Data System (ADS)
Gong, Songchun; Fu, Songyin; Chen, Zheng
2012-04-01
The innovative development of computer technology promotes the application of the cloud computing platform, which is essentially a new model of resource services that meets users' needs for different resources after changes and adjustments in multiple aspects. Cloud computing offers advantages in many respects: it not only reduces the difficulty of using the operating system but also makes it easy for users to search for, acquire, and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile internet cloud computing platform in operation. The popularization of computer technology has driven the creation of digital library models, whose core idea is to strengthen the management of library resource information through computers and to construct a high-performance query and search platform, allowing users to access the necessary information resources at any time. Cloud computing, moreover, distributes computation across a large number of distributed computers and hence implements a connected service of multiple computers. Digital libraries, as a typical representative of cloud computing applications, can thus be used to analyze the key technologies of cloud computing.
Building laboratory capacity to support HIV care in Nigeria: Harvard/APIN PEPFAR, 2004–2012
Hamel, Donald J.; Sankalé, Jean-Louis; Samuels, Jay Osi; Sarr, Abdoulaye D.; Chaplin, Beth; Ofuche, Eke; Meloni, Seema T.; Okonkwo, Prosper; Kanki, Phyllis J.
2015-01-01
Introduction From 2004 to 2012, the Harvard/AIDS Prevention Initiative in Nigeria, funded through the US President's Emergency Plan for AIDS Relief programme, scaled up HIV care and treatment services in Nigeria. We describe the methodologies and collaborative processes developed to significantly improve laboratory capacity in a resource-limited setting. These methods were implemented at 35 clinic and laboratory locations. Methods Systems were established and modified to optimise numerous laboratory processes. These included strategies for clinic selection and management, equipment and reagent procurement, supply chains, laboratory renovations, equipment maintenance, electronic data management, quality development programmes and training. Results Over the eight-year programme, laboratories supported 160 000 patients receiving HIV care in Nigeria, delivering over 2.5 million test results, including regular viral load quantitation. External quality assurance systems were established for CD4+ cell count enumeration, blood chemistries and viral load monitoring. Laboratory equipment platforms were improved and standardised, and use of point-of-care analysers was expanded. Laboratory training workshops helped laboratories increase staff skills and improve overall quality. Participation in a World Health Organisation-led African laboratory quality improvement system resulted in significant gains in quality measures at five laboratories. Conclusions Targeted implementation of laboratory development processes, during simultaneous scale-up of HIV treatment programmes in a resource-limited setting, can elicit meaningful gains in laboratory quality and capacity. Systems to improve the physical laboratory environment, develop laboratory staff, reduce costs and increase quality are available for future health and laboratory strengthening programmes.
We hope that the strategies employed may inform and encourage the development of other laboratories in resource-limited settings. PMID:26900573
Exploring Cloud Computing for Large-scale Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guang; Han, Binh; Yin, Jian
This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications, which often require only a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a systems biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.
Computer-Based Resource Accounting Model for Automobile Technology Impact Assessment
DOT National Transportation Integrated Search
1976-10-01
A computer-implemented resource accounting model has been developed for assessing resource impacts of future automobile technology options. The resources tracked are materials, energy, capital, and labor. The model has been used in support of the Int...
Microdot - A Four-Bit Microcontroller Designed for Distributed Low-End Computing in Satellites
NASA Astrophysics Data System (ADS)
2002-03-01
Many satellites are an integrated collection of sensors and actuators that require dedicated real-time control. For single processor systems, additional sensors require an increase in computing power and speed to provide the multi-tasking capability needed to service each sensor. Faster processors cost more and consume more power, which taxes a satellite's power resources and may lead to shorter satellite lifetimes. An alternative design approach is a distributed network of small and low power microcontrollers designed for space that handle the computing requirements of each individual sensor and actuator. The design of microdot, a four-bit microcontroller for distributed low-end computing, is presented. The design is based on previous research completed at the Space Electronics Branch, Air Force Research Laboratory (AFRL/VSSE) at Kirtland AFB, NM, and the Air Force Institute of Technology at Wright-Patterson AFB, OH. The Microdot has 29 instructions and a 1K x 4 instruction memory. The distributed computing architecture is based on the Philips Semiconductor I2C Serial Bus Protocol. A prototype was implemented and tested using an Altera Field Programmable Gate Array (FPGA). The prototype was operable to 9.1 MHz. The design was targeted for fabrication in a radiation-hardened-by-design gate-array cell library for the TSMC 0.35 micrometer CMOS process.
System Resource Allocations | High-Performance Computing | NREL
System Resource Allocations. To use NREL's high-performance computing (HPC) resources, users request an allocation of: compute hours on NREL HPC systems, including Peregrine and Eagle, and storage space (in terabytes) on Peregrine, Eagle, and Gyrfalcon. Allocations are principally made in response to an annual call for allocations.
Computers as learning resources in the health sciences: impact and issues.
Ellis, L B; Hannigan, G G
1986-01-01
Starting with two computer terminals in 1972, the Health Sciences Learning Resources Center of the University of Minnesota Bio-Medical Library expanded its instructional facilities to ten terminals and thirty-five microcomputers by 1985. Computer use accounted for 28% of total center circulation. The impact of these resources on health sciences curricula is described and issues related to use, support, and planning are raised and discussed. Judged by their acceptance and educational value, computers are successful health sciences learning resources at the University of Minnesota. PMID:3518843
An emulator for minimizing finite element analysis implementation resources
NASA Technical Reports Server (NTRS)
Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.
1982-01-01
A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines the computer resources required as a function of the structural model, structural load-deflection equation characteristics, the storage allocation plan, and computer hardware capabilities. Thereby, it provides data for trading off analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation computer counts for each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.
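The abstract does not give SCOPE's actual cost functions, but the idea of predicting runtime from model and hardware parameters can be sketched with a toy estimator (the function name and all numbers below are illustrative assumptions, not SCOPE's model):

```python
def estimate_runtime(op_count, ops_per_sec, io_bytes, bytes_per_sec):
    """Toy analysis-cost model in the spirit of an emulator like SCOPE:
    compute time plus peripheral-storage transfer time, in seconds.
    SCOPE's real model counts micro-operations per finite element
    operation; this sketch only shows the shape of such an estimate."""
    return op_count / ops_per_sec + io_bytes / bytes_per_sec

# e.g. 2e9 operations at 1e8 op/s plus 5e8 bytes at 1e8 B/s:
print(estimate_runtime(2e9, 1e8, 5e8, 1e8))  # -> 25.0
```

Comparing such estimates across storage allocation plans or hardware options is what "trading off implementation options" amounts to.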
NASA Astrophysics Data System (ADS)
Aneri, Parikh; Sumathy, S.
2017-11-01
Cloud computing provides services over the internet, supplying application resources and data to users on demand. Cloud computing is based on the consumer-provider model: the cloud provider supplies resources that consumers access in order to build applications according to their demand. A cloud data center is a bulk of resources on a shared-pool architecture for cloud users to access. Virtualization is the heart of the cloud computing model; it provides virtual machines with application-specific configurations, and applications are free to choose their own configuration. On one hand there is a huge number of resources; on the other hand, the system has to serve a huge number of requests effectively. Therefore, the resource allocation policy and scheduling policy play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy using the Hungarian algorithm, which provides dynamic load balancing with a monitor component. The monitor component helps increase cloud resource utilization by managing the Hungarian algorithm, monitoring its state and altering it based on artificial intelligence. CloudSim, used in this proposal, is an extensible toolkit that simulates the cloud computing environment.
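The Hungarian algorithm solves exactly this kind of assignment problem: given a cost matrix of requests versus virtual machines, find the one-to-one mapping that minimizes total cost. A minimal stdlib-only sketch follows; the cost values are invented for illustration, and the brute-force search is a stand-in that returns the same optimum as the O(n^3) Hungarian algorithm on small matrices:

```python
from itertools import permutations

# Hypothetical cost matrix: cost[i][j] = estimated load penalty of
# placing request i on virtual machine j (illustrative values only).
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]

def best_assignment(cost):
    """Return (assignment, total_cost) minimizing the summed cost.

    Brute force over permutations: for an n x n matrix this checks n!
    candidates, so it is only a stand-in for the polynomial-time
    Hungarian algorithm, but for small n it yields the same optimum.
    """
    n = len(cost)
    best = None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if best is None or total < best[1]:
            best = (list(enumerate(perm)), total)
    return best

assignment, total = best_assignment(cost)
print(assignment, total)  # -> [(0, 1), (1, 0), (2, 2)] 5
```

In a real deployment one would use a proper Hungarian implementation (e.g. SciPy's `linear_sum_assignment`) and let the monitor component refresh the cost matrix as load changes.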
The BioExtract Server: a web-based bioinformatic workflow platform
Lushbough, Carol M.; Jennewein, Douglas M.; Brendel, Volker P.
2011-01-01
The BioExtract Server (bioextract.org) is an open, web-based system designed to aid researchers in the analysis of genomic data by providing a platform for the creation of bioinformatic workflows. Scientific workflows are created within the system by recording tasks performed by the user. These tasks may include querying multiple, distributed data sources, saving query results as searchable data extracts, and executing local and web-accessible analytic tools. The series of recorded tasks can then be saved as a reproducible, sharable workflow available for subsequent execution with the original or modified inputs and parameter settings. Integrated data resources include interfaces to the National Center for Biotechnology Information (NCBI) nucleotide and protein databases, the European Molecular Biology Laboratory (EMBL-Bank) non-redundant nucleotide database, the Universal Protein Resource (UniProt), and the UniProt Reference Clusters (UniRef) database. The system offers access to numerous preinstalled, curated analytic tools and also provides researchers with the option of selecting computational tools from a large list of web services including the European Molecular Biology Open Software Suite (EMBOSS), BioMoby, and the Kyoto Encyclopedia of Genes and Genomes (KEGG). The system further allows users to integrate local command line tools residing on their own computers through a client-side Java applet. PMID:21546552
Exploring the changing learning environment of the gross anatomy lab.
Hopkins, Robin; Regehr, Glenn; Wilson, Timothy D
2011-07-01
The objective of this study was to assess the impact of virtual models and prosected specimens in the context of the gross anatomy lab. In 2009, student volunteers from an undergraduate anatomy class were randomly assigned to study groups in one of three learning conditions. All groups studied the muscles of mastication and completed identical learning objectives during a 45-minute lab. All groups were provided with two reference atlases. Groups were distinguished by the type of primary tools they were provided: gross prosections, a three-dimensional stereoscopic computer model, or both resources. The facilitator kept observational field notes. A pre/post multiple-choice knowledge test was administered to evaluate students' learning. No significant effect of the laboratory models was demonstrated between groups on the pre/post assessment of knowledge. Recurring observations included students' tendency to revert to individual memorization prior to the posttest, rotation of models to match views in the provided atlas, and dissemination of groups into smaller working units. The use of virtual lab resources seemed to influence the social context and learning environment of the anatomy lab. As computer-based learning methods are implemented and studied, they must be evaluated beyond their impact on knowledge gain to consider the effect technology has on students' social development.
Multiscale tomographic analysis of heterogeneous cast Al-Si-X alloys.
Asghar, Z; Requena, G; Sket, F
2015-07-01
The three-dimensional microstructure of cast AlSi12Ni and AlSi10Cu5Ni2 alloys is investigated by laboratory X-ray computed tomography, synchrotron X-ray computed microtomography, light optical tomography and synchrotron X-ray computed microtomography with submicrometre resolution. The results obtained with each technique are correlated with the size of the scanned volumes and resolved microstructural features. Laboratory X-ray computed tomography is sufficient to resolve highly absorbing aluminides but eutectic and primary Si remain unrevealed. Synchrotron X-ray computed microtomography at ID15/ESRF gives better spatial resolution and reveals primary Si in addition to aluminides. Synchrotron X-ray computed microtomography at ID19/ESRF reveals all the phases ≥ ∼1 μm in volumes about 80 times smaller than laboratory X-ray computed tomography. The volumes investigated by light optical tomography and submicrometre synchrotron X-ray computed microtomography are much smaller than laboratory X-ray computed tomography but both techniques provide local chemical information on the types of aluminides. The complementary techniques applied enable a full three-dimensional characterization of the microstructure of the alloys at length scales ranging over six orders of magnitude. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
SCEAPI: A unified Restful Web API for High-Performance Computing
NASA Astrophysics Data System (ADS)
Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi
2017-10-01
The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need a high-quality programming interface for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources over HTTP or HTTPS. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer, and job management (creating, submitting and monitoring jobs), and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to quickly exploit more HPC resources for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution for extending opportunistic HPC resources.
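The abstract does not publish SCEAPI's actual routes or token scheme, so the sketch below only illustrates the general shape of a RESTful job-submission call of the kind described; the base URL, path, and header values are hypothetical:

```python
import json
import urllib.request

# Hypothetical values: SCEAPI's real base URL and auth scheme are not
# given in the abstract, so these are placeholders for illustration.
BASE = "https://sceapi.example.org/api/v1"
TOKEN = "demo-token"

def build_job_request(app, args):
    """Build an authenticated POST request describing a compute job."""
    body = json.dumps({"app": app, "args": args}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE}/jobs",
        data=body,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_job_request("atlas-sim", ["--events", "100"])
print(req.get_method(), req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would return the job identifier, which the client then polls for monitoring, matching the create/submit/monitor cycle the abstract describes.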
NATURAL RESOURCE MANAGEMENT PLAN FOR BROOKHAVEN NATIONAL LABORATORY.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, T., et al.
2003-12-31
Brookhaven National Laboratory (BNL) is located near the geographic center of Long Island, New York. The Laboratory is situated on 5,265 acres of land composed of Pine Barrens habitat with a central area developed for Laboratory work. In the mid-1990s BNL began developing a wildlife management program. This program was guided by the Wildlife Management Plan (WMP), which was reviewed and approved by various state and federal agencies in September 1999. The WMP primarily addressed concerns with the protection of New York State threatened, endangered, or species of concern, as well as deer populations, invasive species management, and the revegetation of the area surrounding the Relativistic Heavy Ion Collider (RHIC). The WMP provided a strong and sound basis for wildlife management and established a basis for forward motion and the development of this document, the Natural Resource Management Plan (NRMP), which will guide the natural resource management program for BNL. The body of this plan establishes the management goals and actions necessary for managing the natural resources at BNL. The appendices provide specific management requirements for threatened and endangered amphibians and fish (Appendices A and B respectively), lists of actions in tabular format (Appendix C), and regulatory drivers for the Natural Resource Program (Appendix D). The purpose of the Natural Resource Management Plan is to provide management guidance, promote stewardship of the natural resources found at BNL, and to integrate their protection with pursuit of the Laboratory's mission. The philosophy or guiding principles of the NRMP are stewardship, adaptive ecosystem management, compliance, integration with other plans and requirements, and incorporation of community involvement, where applicable.
High Precision Prediction of Functional Sites in Protein Structures
Buturovic, Ljubomir; Wong, Mike; Tang, Grace W.; Altman, Russ B.; Petkovic, Dragutin
2014-01-01
We address the problem of assigning biological function to solved protein structures. Computational tools play a critical role in identifying potential active sites and informing screening decisions for further lab analysis. A critical parameter in the practical application of computational methods is the precision, or positive predictive value. Precision measures the level of confidence the user should have in a particular computed functional assignment. Low precision annotations lead to futile laboratory investigations and waste scarce research resources. In this paper we describe an advanced version of the protein function annotation system FEATURE, which achieved 99% precision and average recall of 95% across 20 representative functional sites. The system uses a Support Vector Machine classifier operating on the microenvironment of physicochemical features around an amino acid. We also compared performance of our method with state-of-the-art sequence-level annotator Pfam in terms of precision, recall and localization. To our knowledge, no other functional site annotator has been rigorously evaluated against these key criteria. The software and predictive models are incorporated into the WebFEATURE service at http://feature.stanford.edu/wf4.0-beta. PMID:24632601
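Precision and recall, the two criteria the abstract emphasizes, reduce to simple ratios over the annotator's confusion counts (the counts below are illustrative, not taken from the paper):

```python
def precision(tp, fp):
    """Positive predictive value: fraction of predicted functional
    sites that are truly functional."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Sensitivity: fraction of true functional sites the annotator
    recovers."""
    return tp / (tp + fn)

# Illustrative counts: 99 true positives, 1 false positive,
# 5 false negatives.
print(precision(99, 1))         # -> 0.99
print(round(recall(99, 5), 3))  # -> 0.952
```

High precision matters here because, as the abstract notes, each false positive can trigger a futile laboratory investigation, so the cost of a wrong prediction is paid in scarce wet-lab resources.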
Software and resources for computational medicinal chemistry
Liao, Chenzhong; Sitzmann, Markus; Pugliese, Angelo; Nicklaus, Marc C
2011-01-01
Computer-aided drug design plays a vital role in drug discovery and development and has become an indispensable tool in the pharmaceutical industry. Computational medicinal chemists can take advantage of all kinds of software and resources in the computer-aided drug design field for the purposes of discovering and optimizing biologically active compounds. This article reviews software and other resources related to computer-aided drug design approaches, putting particular emphasis on structure-based drug design, ligand-based drug design, chemical databases and chemoinformatics tools. PMID:21707404
Training strategies for laboratory animal veterinarians: challenges and opportunities.
Colby, Lesley A; Turner, Patricia V; Vasbinder, Mary Ann
2007-01-01
The field of laboratory animal medicine is experiencing a serious shortage of appropriately trained veterinarians for both clinically related and research-oriented positions within academia, industry, and government. Recent outreach efforts sponsored by professional organizations have stimulated increased interest in the field. It is an opportune time to critically review and evaluate postgraduate training opportunities in the United States and Canada, including formal training programs, informal training, publicly accessible training resources and educational opportunities, and newly emerging training resources such as Internet-based learning aids. Challenges related to each of these training opportunities exist and include increasing enrollment in formal programs, securing adequate funding support, ensuring appropriate content between formal programs that may have diverse objectives, and accommodating the training needs of veterinarians who enter the field by the experience route. Current training opportunities and resources that exist for veterinarians who enter and are established within the field of laboratory animal science are examined. Strategies for improving formal laboratory animal medicine training programs and for developing alternative programs more suited to practicing clinical veterinarians are discussed. In addition, the resources for high-quality continuing education of experienced laboratory animal veterinarians are reviewed.
The Laboratory-Based Economics Curriculum.
ERIC Educational Resources Information Center
King, Paul G.; LaRoe, Ross M.
1991-01-01
Describes the liberal arts, computer laboratory-based economics program at Denison University (Ohio). Includes as goals helping students to (1) understand deductive arguments, (2) learn to apply theory in real-world situations, and (3) test and modify theory when necessary. Notes that the program combines computer laboratory experiments for…
Teaching Cardiovascular Integrations with Computer Laboratories.
ERIC Educational Resources Information Center
Peterson, Nils S.; Campbell, Kenneth B.
1985-01-01
Describes a computer-based instructional unit in cardiovascular physiology. The program, which employs simulated laboratory experimental techniques within a problem-solving format, is designed to supplement an animal laboratory and to offer students an integrative approach to physiology through the use of microcomputers. Also presents an overview of the…
2011 Computation Directorate Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, D L
2012-04-11
From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s, all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence.
Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile, far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demand exascale computing and represent an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products. In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S.
industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry, the inaccessibility of software to run the powerful systems, and the years it takes to grow the expertise to develop codes and run them in an optimal way. LLNL is helping industry better compete in the global marketplace by providing access to some of the world's most powerful computing systems, the tools to run them, and the experts who are adept at using them. Our scientists are collaborating side by side with industrial partners to develop solutions to some of industry's toughest problems. The goal of the Livermore Valley Open Campus High Performance Computing Innovation Center is to allow American industry the opportunity to harness the power of supercomputing by leveraging the scientific and computational expertise at LLNL in order to gain a competitive advantage in the global economy.
Computer-Aided Drug Discovery: Molecular Docking of Diminazene Ligands to DNA Minor Groove
ERIC Educational Resources Information Center
Kholod, Yana; Hoag, Erin; Muratore, Katlynn; Kosenkov, Dmytro
2018-01-01
The reported project-based laboratory unit introduces upper-division undergraduate students to the basics of computer-aided drug discovery as a part of a computational chemistry laboratory course. The students learn to perform model binding of organic molecules (ligands) to the DNA minor groove with computer-aided drug discovery (CADD) tools. The…
Closely Spaced Independent Parallel Runway Simulation.
1984-10-01
The facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. The Central Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition… in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where…
Computer validation in toxicology: historical review for FDA and EPA good laboratory practice.
Brodish, D L
1998-01-01
The application of computer validation principles to Good Laboratory Practice is a fairly recent phenomenon. As automated data collection systems have become more common in toxicology facilities, the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency have begun to focus inspections in this area. This historical review documents the development of regulatory guidance on computer validation in toxicology over the past several decades. An overview of the components of a computer life cycle is presented, including the development of systems descriptions, validation plans, validation testing, system maintenance, SOPs, change control, security considerations, and system retirement. Examples are provided for implementation of computer validation principles on laboratory computer systems in a toxicology facility.
Collaborative Systems Biology Projects for the Military Medical Community.
Zalatoris, Jeffrey J; Scheerer, Julia B; Lebeda, Frank J
2017-09-01
This pilot study was conducted to examine, for the first time, the ongoing systems biology research and development projects within the laboratories and centers of the U.S. Army Medical Research and Materiel Command (USAMRMC). The analysis has provided an understanding of the breadth of systems biology activities, resources, and collaborations across all USAMRMC subordinate laboratories. The Systems Biology Collaboration Center at USAMRMC issued a survey regarding systems biology research projects to the eight U.S.-based USAMRMC laboratories and centers in August 2016. This survey included a data call worksheet to gather self-identified project and programmatic information. The general topics focused on the investigators and their projects, on the project's research areas, on omics and other large data types being collected and stored, on the analytical or computational tools being used, and on identifying intramural (i.e., USAMRMC) and extramural collaborations. Among seven of the eight laboratories, 62 unique systems biology studies were funded and active during the final quarter of fiscal year 2016. Of 29 preselected medical Research Task Areas, 20 were associated with these studies, some of which were applicable to two or more Research Task Areas. Overall, studies were categorized among six general types of objectives: biological mechanisms of disease; risk of/susceptibility to injury or disease; innate mechanisms of healing; diagnostic and prognostic biomarkers; host/patient responses to vaccines; and therapeutic strategies, including host responses to therapies. We identified eight types of omics studies and four types of study subjects. Studies were categorized on a scale of increasing complexity from single study subject/single omics technology studies (23/62) to studies integrating results across two study subject types and two or more omics technologies (13/62).
Investigators at seven USAMRMC laboratories had collaborations with systems biology experts from 18 extramural organizations and three other USAMRMC laboratories. Collaborators from six USAMRMC laboratories and 58 extramural organizations were identified who provided additional research expertise to these systems biology studies. At the end of fiscal year 2016, USAMRMC laboratories self-reported 66 systems biology/computational biology studies (62 of which were unique) with 25 intramural and 81 extramural collaborators. Nearly two-thirds were led by or in collaboration with the U.S. Army Telemedicine and Advanced Technology Research Center/Department of Defense Biotechnology High-Performance Computing Software Applications Institute and U.S. Army Center for Environmental Health Research. The most common study objective addressed biological mechanisms of disease. The most common types of Research Task Areas addressed infectious diseases (viral and bacterial) and chemical agents (environmental toxicant exposures, and traditional and emerging chemical threats). More than 40% of the studies (27/62) involved collaborations between the reporting USAMRMC laboratory and one other organization. Nearly half of the studies (30/62) involved collaborations between the reporting USAMRMC laboratory and at least two other organizations. These survey results indicate that USAMRMC laboratories are compliant with data-centric policy and guidance documents whose goals are to prevent redundancy and promote collaborations by sharing data and leveraging capabilities. These results also serve as a foundation to make recommendations for future systems biology research efforts. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
Insights on WWW-based geoscience teaching: Climbing the first year learning cliff
NASA Astrophysics Data System (ADS)
Lamberson, Michelle N.; Johnson, Mark; Bevier, Mary Lou; Russell, J. Kelly
1997-06-01
In early 1995, The University of British Columbia Department of Geological Sciences (now Earth and Ocean Sciences) initiated a project that explored the effectiveness of the World Wide Web as a teaching and learning medium. Four decisions made at the outset of the project have guided the department's educational technology plan: (1) over 90% of funding received from educational technology grants was committed towards personnel; (2) materials developed are modular in design; (3) a database approach was taken to resource development; and (4) a strong commitment was made to student involvement in courseware development. The project comprised development of a web site for an existing core course: Geology 202, Introduction to Petrology. The web site is a gateway to course information, content, resources, exercises, and several searchable databases (images, petrologic definitions, and minerals in thin section). Material was developed on either an IBM or UNIX machine, ported to a UNIX platform, and is accessed using the Netscape browser. The resources consist primarily of HTML files or CGI scripts with associated text, images, sound, digital movies, and animations. Students access the web site from the departmental student computer facility, from home, or from a computer station in the petrology laboratory. Results of a survey of the Geol 202 students indicate that they found the majority of the resources useful, and the site is being expanded. The Geology 202 project had a "trickle-up" effect throughout the department: prior to this project, there was minimal use of Internet resources in lower-level geology courses.
By the end of the 1996-1997 academic year, we anticipate that at least 17 Earth and Ocean Science courses will have a WWW site for one or all of the following uses: (1) presenting basic information; (2) accessing lecture images; (3) providing a jumping-off point for exploring related WWW sites; (4) conducting on-line exercises; and/or (5) providing a communications forum for students and faculty via a Hypernews group. URL: http://www.science.ubc.ca/
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sjoreen, Terrence P
The Oak Ridge National Laboratory (ORNL) Laboratory Directed Research and Development (LDRD) Program reports its status to the U.S. Department of Energy (DOE) in March of each year. The program operates under the authority of DOE Order 413.2A, 'Laboratory Directed Research and Development' (January 8, 2001), which establishes DOE's requirements for the program while providing the Laboratory Director broad flexibility for program implementation. LDRD funds are obtained through a charge to all Laboratory programs. This report describes all ORNL LDRD research activities supported during FY 2005 and includes final reports for completed projects and shorter progress reports for projects that were active, but not completed, during this period. The FY 2005 ORNL LDRD Self-Assessment (ORNL/PPA-2006/2) provides financial data about the FY 2005 projects and an internal evaluation of the program's management process. ORNL is a DOE multiprogram science, technology, and energy laboratory with distinctive capabilities in materials science and engineering, neutron science and technology, energy production and end-use technologies, biological and environmental science, and scientific computing. With these capabilities ORNL conducts basic and applied research and development (R&D) to support DOE's overarching national security mission, which encompasses science, energy resources, environmental quality, and national nuclear security. As a national resource, the Laboratory also applies its capabilities and skills to the specific needs of other federal agencies and customers through the DOE Work For Others (WFO) program. Information about the Laboratory and its programs is available on the Internet at
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sjoreen, Terrence P
The Oak Ridge National Laboratory (ORNL) Laboratory Directed Research and Development (LDRD) Program reports its status to the U.S. Department of Energy (DOE) in March of each year. The program operates under the authority of DOE Order 413.2A, 'Laboratory Directed Research and Development' (January 8, 2001), which establishes DOE's requirements for the program while providing the Laboratory Director broad flexibility for program implementation. LDRD funds are obtained through a charge to all Laboratory programs. This report describes all ORNL LDRD research activities supported during FY 2004 and includes final reports for completed projects and shorter progress reports for projects that were active, but not completed, during this period. The FY 2004 ORNL LDRD Self-Assessment (ORNL/PPA-2005/2) provides financial data about the FY 2004 projects and an internal evaluation of the program's management process. ORNL is a DOE multiprogram science, technology, and energy laboratory with distinctive capabilities in materials science and engineering, neutron science and technology, energy production and end-use technologies, biological and environmental science, and scientific computing. With these capabilities ORNL conducts basic and applied research and development (R&D) to support DOE's overarching national security mission, which encompasses science, energy resources, environmental quality, and national nuclear security. As a national resource, the Laboratory also applies its capabilities and skills to the specific needs of other federal agencies and customers through the DOE Work For Others (WFO) program. Information about the Laboratory and its programs is available on the Internet at
Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service
NASA Astrophysics Data System (ADS)
Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.;
2017-10-01
With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments; therefore, exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE), with its nonintrusive architecture, is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.
Laboratory and software applications for clinical trials: the global laboratory environment.
Briscoe, Chad
2011-11-01
The Applied Pharmaceutical Software Meeting is held annually. It is sponsored by The Boston Society, a not-for-profit organization that coordinates a series of meetings within the global pharmaceutical industry. The meeting generally focuses on laboratory applications, but in recent years has expanded to include some software applications for clinical trials. The 2011 meeting emphasized the global laboratory environment. Global clinical trials generate massive amounts of data in many locations that must be centralized and processed for efficient analysis. Thus, the meeting had a strong focus on establishing networks and systems for dealing with the computer infrastructure to support such environments. In addition to the globally installed laboratory information management system, electronic laboratory notebook and other traditional laboratory applications, cloud computing is quickly becoming the answer to provide efficient, inexpensive options for managing the large volumes of data and computing power, and thus it served as a central theme for the meeting.
Integration of Cloud resources in the LHCb Distributed Computing
NASA Astrophysics Data System (ADS)
Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel
2014-06-01
This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe as well the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.
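The elastic behaviour this abstract describes, instantiating and retiring VMs as job demand changes, can be pictured with a toy scheduling rule. This is an illustrative sketch only, not VMDIRAC's actual logic; the function name and all parameters are hypothetical:

```python
import math

def vms_to_launch(queued_jobs: int, running_vms: int,
                  jobs_per_vm: int = 4, max_vms: int = 100) -> int:
    """Toy elasticity rule (hypothetical): launch enough VMs to cover the
    queued payload, capped by the site's VM quota."""
    needed = math.ceil(queued_jobs / jobs_per_vm)      # VMs the queue would fill
    return max(0, min(needed, max_vms) - running_vms)  # launch only the shortfall

print(vms_to_launch(queued_jobs=37, running_vms=5))  # 10 needed, 5 running -> 5
```

A production interware layers much more on top of such a rule (per-cloud quotas, VM contextualisation, draining), but the core supply-follows-demand loop has this shape.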
Simulating Laboratory Procedures.
ERIC Educational Resources Information Center
Baker, J. E.; And Others
1986-01-01
Describes the use of computer assisted instruction in a medical microbiology course. Presents examples of how computer assisted instruction can present case histories in which the laboratory procedures are simulated. Discusses an authoring system used to prepare computer simulations and provides one example of a case history dealing with fractured…
Environmental resource document for the Idaho National Engineering Laboratory. Volume 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Irving, J.S.
This document contains information related to the environmental characterization of the Idaho National Engineering Laboratory (INEL). The INEL is a major US Department of Energy facility in southeastern Idaho dedicated to nuclear research, waste management, environmental restoration, and other activities related to the development of technology. Environmental information covered in this document includes land, air, water, and ecological resources; socioeconomic characteristics and land use; and cultural, aesthetic, and scenic resources.
Survey of ecological resources at selected US Department of Energy sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
McAllister, C.; Beckert, H.; Abrams, C.
The U.S. Department of Energy (DOE) owns and manages a wide range of ecological resources. During the next 30 years, DOE Headquarters and Field Offices will make land-use planning decisions and conduct environmental remediation and restoration activities in response to federal and state statutes. This document fulfills, in part, DOE's need to know what types of ecological resources it currently owns and manages by synthesizing information on the types and locations of ecological resources at 10 DOE sites: Hanford Site, Idaho National Engineering Laboratory, Lawrence Livermore National Laboratory, Sandia National Laboratories, Rocky Flats Plant, Los Alamos National Laboratory, Savannah River Site, Oak Ridge National Laboratory, Argonne National Laboratory, and Fernald Environmental Management Project. This report summarizes information on ecosystems, habitats, and federally listed threatened, endangered, and candidate species that could be stressed by contaminants or physical activity during the restoration process, or by the natural or anthropogenic transport of contaminants from presently contaminated areas into presently uncontaminated areas. This report also provides summary information on the ecosystems, habitats, and threatened and endangered species that exist on each of the 10 sites. Each site chapter contains a general description of the site, including information on size, location, history, geology, hydrology, and climate. Descriptions of the major vegetation and animal communities and of aquatic resources are also provided, with discussions of the threatened or endangered plant or animal species present. Site-specific ecological issues are also discussed in each site chapter. 106 refs., 11 figs., 1 tab.
18 CFR 367.3950 - Account 395, Laboratory equipment.
Code of Federal Regulations, 2010 CFR
2010-04-01
18 CFR § 367.3950, Conservation of Power and Water Resources (revised as of 2010-04-01): Account 395, Laboratory equipment. Federal Energy Regulatory Commission; Federal Power Act and Natural Gas Act; Uniform System of Accounts for Centralized Service Companies Subject to…
Protecting the present through resource management: The Laboratory actively manages and protects resources on shared land. Through biological monitoring, the Laboratory strives to minimize operational impacts to plants and animals. Through collaboration with its stakeholders and local tribal governments, the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Dean N.
2011-04-02
This report summarizes work carried out by the Earth System Grid Center for Enabling Technologies (ESG-CET) from October 1, 2010 through March 31, 2011. It discusses ESG-CET highlights for the reporting period, overall progress, period goals, and collaborations, and lists papers and presentations. To learn more about our project and to find previous reports, please visit the ESG-CET Web sites: http://esg-pcmdi.llnl.gov/ and/or https://wiki.ucar.edu/display/esgcet/Home. This report will be forwarded to managers in the Department of Energy (DOE) Scientific Discovery through Advanced Computing (SciDAC) program and the Office of Biological and Environmental Research (OBER), as well as national and international collaborators and stakeholders (e.g., those involved in the Coupled Model Intercomparison Project, phase 5 (CMIP5) for the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5); the Community Earth System Model (CESM); the Climate Science Computational End Station (CCES); SciDAC II: A Scalable and Extensible Earth System Model for Climate Change Science; the North American Regional Climate Change Assessment Program (NARCCAP); the Atmospheric Radiation Measurement (ARM) program; the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA)), and also to researchers working on a variety of other climate model and observation evaluation activities. The ESG-CET executive committee consists of Dean N. Williams, Lawrence Livermore National Laboratory (LLNL); Ian Foster, Argonne National Laboratory (ANL); and Don Middleton, National Center for Atmospheric Research (NCAR).
The ESG-CET team is a group of researchers and scientists with diverse domain knowledge, whose home institutions include eight laboratories and two universities: ANL, Los Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL), LLNL, NASA/Jet Propulsion Laboratory (JPL), NCAR, Oak Ridge National Laboratory (ORNL), Pacific Marine Environmental Laboratory (PMEL)/NOAA, Rensselaer Polytechnic Institute (RPI), and University of Southern California, Information Sciences Institute (USC/ISI). All ESG-CET work is accomplished under DOE open-source guidelines and in close collaboration with the project's stakeholders, domain researchers, and scientists. Through the ESG project, the ESG-CET team has developed and delivered a production environment for climate data from multiple climate model sources (e.g., CMIP (IPCC), CESM, ocean model data (e.g., Parallel Ocean Program), observation data (e.g., Atmospheric Infrared Sounder, Microwave Limb Sounder), and analysis and visualization tools) that serves a worldwide climate research community. Data holdings are distributed across multiple sites including LANL, LBNL, LLNL, NCAR, and ORNL as well as unfunded partners sites such as the Australian National University (ANU) National Computational Infrastructure (NCI), the British Atmospheric Data Center (BADC), the Geophysical Fluid Dynamics Laboratory/NOAA, the Max Planck Institute for Meteorology (MPI-M), the German Climate Computing Centre (DKRZ), and NASA/JPL. As we transition from development activities to production and operations, the ESG-CET team is tasked with making data available to all users who want to understand it, process it, extract value from it, visualize it, and/or communicate it to others. This ongoing effort is extremely large and complex, but it will be incredibly valuable for building 'science gateways' to critical climate resources (such as CESM, CMIP5, ARM, NARCCAP, Atmospheric Infrared Sounder (AIRS), etc.) 
for processing the next IPCC assessment report. Continued ESG progress will result in a production-scale system that will empower scientists to attempt new and exciting data exchanges, which could ultimately lead to breakthrough climate science discoveries.
Design & implementation of distributed spatial computing node based on WPS
NASA Astrophysics Data System (ADS)
Liu, Liping; Li, Guoqing; Xie, Jibo
2014-03-01
Currently, research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in grid environments, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in a grid environment, this paper presents a systematic study of the key technologies needed to construct a Spatial Computing Node based on the WPS (Web Processing Service) specification by the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and the relevant verification work is completed in that environment.
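For readers unfamiliar with WPS, a node advertises the processes it offers through a GetCapabilities request, which a client can build and whose response it can parse as sketched below. The endpoint URL and process names here are made up; the KVP parameters and XML namespaces follow OGC WPS 1.0.0:

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Hypothetical node endpoint; the query parameters are WPS 1.0.0 KVP syntax.
endpoint = "http://example.org/wps"
query = urlencode({"service": "WPS", "request": "GetCapabilities", "version": "1.0.0"})
capabilities_url = f"{endpoint}?{query}"

# Abbreviated sample of the XML a node might return for that request.
sample = """<wps:Capabilities xmlns:wps="http://www.opengis.net/wps/1.0.0"
                xmlns:ows="http://www.opengis.net/ows/1.1">
  <wps:ProcessOfferings>
    <wps:Process><ows:Identifier>ndvi</ows:Identifier></wps:Process>
    <wps:Process><ows:Identifier>buffer</ows:Identifier></wps:Process>
  </wps:ProcessOfferings>
</wps:Capabilities>"""

ns = {"wps": "http://www.opengis.net/wps/1.0.0", "ows": "http://www.opengis.net/ows/1.1"}
root = ET.fromstring(sample)
processes = [e.text for e in root.findall(".//wps:Process/ows:Identifier", ns)]
print(processes)  # ['ndvi', 'buffer']
```

A grid of such nodes can be discovered and chained in the same way, which is what makes WPS a natural interface for sharing spatial computing resources rather than only spatial data.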
Economic models for management of resources in peer-to-peer and grid computing
NASA Astrophysics Data System (ADS)
Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David
2001-07-01
The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development, and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The resource owners of each of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include commodity markets, posted prices, tenders, and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
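Two of the pricing models this abstract names, posted price and auction, can be contrasted with a minimal toy sketch. This is illustrative only, not the Nimrod/G broker's implementation; all names and figures are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    consumer: str
    price: float  # price the consumer offers per CPU-hour (hypothetical unit)

def posted_price(asking: float, bids: list[Bid]) -> list[Bid]:
    """Posted-price model: the provider fixes a price and serves
    every consumer willing to pay at least that much."""
    return [b for b in bids if b.price >= asking]

def sealed_bid_auction(capacity: int, bids: list[Bid]) -> list[Bid]:
    """Auction model: scarce capacity goes to the highest bidders."""
    return sorted(bids, key=lambda b: b.price, reverse=True)[:capacity]

bids = [Bid("alice", 0.9), Bid("bob", 1.4), Bid("carol", 2.1)]
print([b.consumer for b in posted_price(1.0, bids)])      # bob and carol accept
print([b.consumer for b in sealed_bid_auction(1, bids)])  # only carol wins
```

The contrast captures the trade-off the paper studies: posted prices are simple and predictable, while auctions let price track scarcity when demand exceeds supply.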
NASA Astrophysics Data System (ADS)
Ojaghi, Mobin; Martínez, Ignacio Lamata; Dietz, Matt S.; Williams, Martin S.; Blakeborough, Anthony; Crewe, Adam J.; Taylor, Colin A.; Madabhushi, S. P. Gopal; Haigh, Stuart K.
2018-01-01
Distributed Hybrid Testing (DHT) is an experimental technique designed to capitalise on advances in modern networking infrastructure to overcome traditional laboratory capacity limitations. By coupling the heterogeneous test apparatus and computational resources of geographically distributed laboratories, DHT provides the means to take on complex, multi-disciplinary challenges with new forms of communication and collaboration. To introduce the opportunities and practicability afforded by DHT, an exemplar multi-site test is presented here, in which a dedicated fibre network and a suite of custom software are used to connect the geotechnical centrifuge at the University of Cambridge with a variety of structural dynamics loading apparatus at the University of Oxford and the University of Bristol. While centrifuge time-scaling prevents real-time rates of loading in this test, such experiments may be used to gain valuable insights into physical phenomena, test procedure and accuracy. These and other related experiments have led to the development of the real-time DHT technique and the creation of a flexible framework that aims to facilitate future distributed tests within the UK and beyond. As a further example, a real-time DHT experiment between structural labs using this framework for testing across the Internet is also presented.
Rhoads, Daniel D.; Mathison, Blaine A.; Bishop, Henry S.; da Silva, Alexandre J.; Pantanowitz, Liron
2016-01-01
Context Microbiology laboratories are continually pursuing means to improve the quality, rapidity, and efficiency of specimen analysis in the face of limited resources. One means by which to achieve these improvements is through the remote analysis of digital images. Telemicrobiology enables the remote interpretation of images of microbiology specimens. To date, the practice of clinical telemicrobiology has not been thoroughly reviewed. Objective Identify the various methods that can be employed for telemicrobiology, including emerging technologies that may provide value to the clinical laboratory. Data Sources Peer-reviewed literature, conference proceedings, meeting presentations, and expert opinions pertaining to telemicrobiology have been evaluated. Results A number of modalities have been employed for telemicroscopy, including static capture techniques, whole slide imaging, video telemicroscopy, mobile devices, and hybrid systems. Telemicrobiology has been successfully implemented for applications including routine primary diagnosis, expert teleconsultation, and proficiency testing. Emerging areas include digital culture plate reading, mobile health applications, and computer-augmented analysis of digital images. Conclusions Static image capture techniques have to date been the most widely used modality for telemicrobiology, despite the fact that newer technologies are available and may produce better-quality interpretations. Increased adoption of telemicrobiology offers added value, quality, and efficiency to the clinical microbiology laboratory. PMID:26317376
Virtual experiments in electronics: beyond logistics, budgets, and the art of the possible
NASA Astrophysics Data System (ADS)
Chapman, Brian
1999-09-01
It is common and correct to suppose that computers support flexible delivery of educational resources by offering virtual experiments that replicate and substitute for experiments traditionally offered in conventional teaching laboratories. However, traditional methods are limited by logistics, costs, and what is physically possible to accomplish on a laboratory bench. Virtual experiments allow experimental approaches to teaching and learning to transcend these limits. This paper analyses recent and current developments in educational software for 1st-year physics, 2nd-year electronics engineering and 3rd-year communication engineering, based on three criteria: (1) Is the virtual experiment possible in a real laboratory? (2) How direct is the link between the experimental manipulation and the reinforcement of theoretical learning? (3) What impact might the virtual experiment have on the learner's acquisition of practical measurement skills? Virtual experiments allow more flexibility in the directness of the link between experimental manipulation and the theoretical message. However, increasing the directness of this link may reduce or even abolish the measurement processes associated with traditional experiments. Virtual experiments thus pose educational challenges: (a) expanding the design of experimentally based curricula beyond traditional boundaries and (b) ensuring that the learner acquires sufficient experience in making practical measurements.
Laboratory for Atmospheres: Philosophy, Organization, Major Activities, and 2001 Highlights
NASA Technical Reports Server (NTRS)
Hoegy, Walter R.; Cote, Charles, E.
2002-01-01
How can we improve our ability to predict the weather? How is the Earth's climate changing? What can the atmospheres of other planets teach us about our own? The Laboratory for Atmospheres is helping to answer these and other scientific questions. The Laboratory conducts a broad theoretical and experimental research program studying all aspects of the atmospheres of the Earth and other planets, including their structural, dynamical, radiative, and chemical properties. Vigorous research is central to NASA's exploration of the frontiers of knowledge. NASA scientists play a key role in conceiving new space missions, providing mission requirements, and carrying out research to explore the behavior of planetary systems, including, notably, the Earth's. Our Laboratory's scientists also supply outside scientists with technical assistance and scientific data to further investigations not immediately addressed by NASA itself. The Laboratory for Atmospheres is a vital participant in NASA's research program. The Laboratory is part of the Earth Sciences Directorate based at NASA's Goddard Space Flight Center in Greenbelt, Maryland. The Directorate itself comprises the Global Change Data Center; the Earth and Space Data Computing Division; three laboratories: the Laboratory for Atmospheres, the Laboratory for Terrestrial Physics, and the Laboratory for Hydrospheric Processes; and the Goddard Institute for Space Studies (GISS) in New York, New York. In this report, you will find a statement of our philosophy and a description of our role in NASA's mission. You'll also find a broad description of our research and a summary of our scientists' major accomplishments in 2001. The report also presents useful information on human resources, scientific interactions, and outreach activities with the outside community. For your convenience, we have published a version of this report on the Internet.
Our Web site includes links to additional information about the Laboratory's Offices and Branches. You can find us on the World Wide Web at http://atmospheres.gsfc.nasa.gov.
Panchabhai, T S; Dangayach, N S; Mehta, V S; Patankar, C V; Rege, N N
2011-01-01
Computer usage capabilities of medical students for introduction of computer-aided learning have not been adequately assessed. Cross-sectional study to evaluate computer literacy among medical students. Tertiary care teaching hospital in Mumbai, India. Participants were administered a 52-question questionnaire, designed to study their background, computer resources, computer usage, activities enhancing computer skills, and attitudes toward computer-aided learning (CAL). The data were classified on the basis of sex, native place, and year of medical school, and the computer resources were compared. Computer usage and attitudes toward computer-based learning were assessed on a five-point Likert scale, to calculate a Computer Usage Score (CUS - maximum 55, minimum 11) and an Attitude Score (AS - maximum 60, minimum 12). The quartile distribution among the groups with respect to the CUS and AS was compared by chi-squared tests. The correlation between CUS and AS was then tested. Eight hundred and seventy-five students agreed to participate in the study and 832 completed the questionnaire. One hundred and twenty-eight questionnaires were excluded and 704 were analyzed. Outstation students had significantly fewer computer resources as compared to local students (P<0.0001). The mean CUS for local students (27.0±9.2, Mean±SD) was significantly higher than for outstation students (23.2±9.05). No such difference was observed for the AS. The means of CUS and AS did not differ between males and females. The CUS and AS had positive, but weak, correlations for all subgroups. The weak correlation between AS and CUS for all students could be explained by the lack of computer resources or inadequate training to use computers for learning. Providing additional resources would benefit the subset of outstation students with fewer computer resources. This weak correlation between the attitudes and practices of all students needs to be investigated.
We believe that this gap can be bridged with a structured computer learning program.
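As a rough illustration of the scoring described above, the sketch below sums 11 five-point Likert items into a CUS (range 11-55) and 12 items into an AS (range 12-60), then computes the Pearson correlation between the two score lists. The item responses are fabricated example data, not the study's.

```python
# Illustrative reconstruction of the study's scoring: per-student Likert items
# are summed into a CUS (11 items) and an AS (12 items), then correlated.
def likert_score(items, n_expected):
    assert len(items) == n_expected and all(1 <= i <= 5 for i in items)
    return sum(items)

def pearson(xs, ys):
    """Plain Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up item responses for four hypothetical students:
cus = [likert_score(s, 11) for s in ([3] * 11, [5] * 11, [2] * 11, [4] * 11)]
as_ = [likert_score(s, 12) for s in ([3] * 12, [4] * 12, [2] * 12, [5] * 12)]
r = pearson(cus, as_)  # correlation between usage and attitude scores
```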
A computer-based physics laboratory apparatus: Signal generator software
NASA Astrophysics Data System (ADS)
Thanakittiviroon, Tharest; Liangrocapart, Sompong
2005-09-01
This paper describes a computer-based physics laboratory apparatus to replace expensive instruments such as high-precision signal generators. This apparatus uses a sound card in a common personal computer to give sinusoidal signals with an accurate frequency that can be programmed to give different frequency signals repeatedly. An experiment on standing waves on an oscillating string uses this apparatus. In conjunction with interactive lab manuals, which have been developed using personal computers in our university, we achieve a complete set of low-cost, accurate, and easy-to-use equipment for teaching a physics laboratory.
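The core idea of the signal-generator software, producing an accurately programmable sinusoid from ordinary PC audio hardware, can be sketched in standard-library Python. The sample rate, amplitude, and frequency below are example values; an actual implementation would hand this buffer to the sound card rather than just computing it.

```python
# Sketch of the idea: compute 16-bit PCM samples of a sine wave at a
# programmable frequency -- the buffer a sound card would play back.
import math
import array

def sine_samples(freq_hz, duration_s, rate_hz=44100, amplitude=0.8):
    n = int(duration_s * rate_hz)
    peak = int(amplitude * 32767)  # scale to 16-bit signed sample range
    return array.array("h", (
        int(peak * math.sin(2 * math.pi * freq_hz * i / rate_hz))
        for i in range(n)
    ))

buf = sine_samples(440.0, duration_s=1.0)  # one second of A440
# A 440 Hz tone crosses zero upward roughly 440 times per second:
ups = sum(1 for a, b in zip(buf, buf[1:]) if a < 0 <= b)
```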
Lab Manual & Resources for Materials Science, Engineering and Technology on CD-Rom
NASA Technical Reports Server (NTRS)
Jacobs, James A.; McKenney, Alfred E.
2001-01-01
The National Educators' Workshop (NEW:Update) series of workshops has been in existence since 1986. These annual workshops focus on technical updates and laboratory experiments for materials science, engineering and technology, involving new and traditional content in the field. Scores of educators and industrial and national laboratory personnel have contributed many useful experiments and demonstrations, which were then published as NASA Conference Proceedings. This "outpouring of riches" creates an ever-expanding shelf of valuable teaching tools for college, university, community college and advanced high school instruction. Now, more than 400 experiments and demonstrations, representing the first thirteen years of NEW:Updates, have been selected and published on a CD-ROM, through the collaboration of this national network of materials educators, engineers, and scientists. The CD-ROM examined in this document utilizes the popular Adobe Acrobat Reader format and operates on most popular computer platforms. This presentation provides an overview of the second edition of Experiments in Materials Science, Engineering and Technology (EMSET2) CD-ROM, ISBN 0-13-030534-0.
In the white cube: museum context enhances the valuation and memory of art.
Brieber, David; Nadal, Marcos; Leder, Helmut
2015-01-01
Art museum attendance is rising steadily, unchallenged by online alternatives. However, the psychological value of the real museum experience remains unclear because the experience of art in the museum and other contexts has not been compared. Here we examined the appreciation and memory of an art exhibition when viewed in a museum or as a computer simulated version in the laboratory. In line with the postulates of situated cognition, we show that the experience of art relies on organizing resources present in the environment. Specifically, artworks were found more arousing, positive, interesting and liked more in the museum than in the laboratory. Moreover, participants who saw the exhibition in the museum later recalled more artworks and used spatial layout cues for retrieval. Thus, encountering real art in the museum enhances cognitive and affective processes involved in the appreciation of art and enriches information encoded in long-term memory. Copyright © 2014 Elsevier B.V. All rights reserved.
2D Implosion Simulations with a Kinetic Particle Code
NASA Astrophysics Data System (ADS)
Sagert, Irina; Even, Wesley; Strother, Terrance
2017-10-01
Many problems in laboratory and plasma physics are subject to flows that move between the continuum and the kinetic regime. We discuss two-dimensional (2D) implosion simulations that were performed using a Monte Carlo kinetic particle code. The application of kinetic transport theory is motivated, in part, by the occurrence of non-equilibrium effects in inertial confinement fusion (ICF) capsule implosions, which cannot be fully captured by hydrodynamics simulations. Kinetic methods, on the other hand, are able to describe both continuum and rarefied flows. We perform simple 2D disk implosion simulations using one particle species and compare the results to simulations with the hydrodynamics code RAGE. The impact of the particle mean free path on the implosion is also explored. In a second study, we focus on the formation of fluid instabilities from induced perturbations. I.S. acknowledges support through the Director's fellowship from Los Alamos National Laboratory. This research used resources provided by the LANL Institutional Computing Program.
Impact of remote sensing upon the planning, management, and development of water resources
NASA Technical Reports Server (NTRS)
Loats, H. L.; Fowler, T. R.; Frech, S. L.
1974-01-01
A survey of the principal water resource users was conducted to determine the impact of new remote data streams on hydrologic computer models. The analysis of the responses and direct contact demonstrated that: (1) the majority of water resource effort of the type suitable to remote sensing inputs is conducted by major federal water resources agencies or through federally stimulated research, (2) the federal government develops most of the hydrologic models used in this effort; and (3) federal computer power is extensive. The computers, computer power, and hydrologic models in current use were determined.
NASA Astrophysics Data System (ADS)
Gordova, Yulia; Okladnikov, Igor; Titov, Alexander; Gordov, Evgeny
2016-04-01
While there is strong demand for innovation in digital learning, available training programs in the environmental sciences cannot keep pace with rapid changes in the domain content. A joint group of scientists and university teachers is developing and implementing an educational environment for new learning experiences in the basics of climate science and its applications. This virtual learning laboratory, "Climate", contains educational materials and interactive training courses developed to provide undergraduate and graduate students with a profound understanding of changes in regional climate and environment. The main feature of this Laboratory is that students perform their computational tasks on climate modeling and on the evaluation and assessment of climate change using the standard tools of the "Climate" information-computational system, the same tools used by practitioners performing this kind of research. Students carry out computational laboratory work with the system's information-computational tools, improving their skills with these tools while mastering the subject. Rather than creating an artificial learning environment for the training courses, we combined the educational block with the computational information system precisely to familiarize students with the real, existing technologies for monitoring and analyzing data on the state of the climate. The training courses are based on technologies and procedures typical of the Earth system sciences, and are designed to let students conduct their own investigations of ongoing and future climate changes in a manner essentially identical to the techniques used by national and international climate research organizations.
All training courses are supported by lectures devoted to the basic aspects of modern climatology, including analysis of current climate change and its possible impacts, ensuring effective links between theory and practice. Along with its use in graduate and postgraduate education, "Climate" serves as a framework for a basic information course on climate change developed for the general public. In this course, the basic concepts and problems of modern climate change and its possible consequences are described for non-specialists. The course will also include links to relevant information resources on topical issues of the Earth sciences and a number of case studies carried out for a selected region to consolidate the acquired knowledge.
30 CFR 795.10 - Qualified laboratories.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Qualified laboratories. 795.10 Section 795.10... laboratories. (a) Basic qualifications. To be designated a qualified laboratory, a firm shall demonstrate that... necessary field samples and making hydrologic field measurements and analytical laboratory determinations by...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barney, B; Shuler, J
2006-08-21
Purple is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Lawrence Livermore National Laboratory (LLNL). The Purple Computational Environment documents the capabilities and the environment provided for the FY06 LLNL Level 1 General Availability Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Sandia National Laboratories, but also documents needs of the LLNL and Alliance users working in the unclassified environment. Additionally, the Purple Computational Environment maps the provided capabilities to the Tri-lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the General Availability user environment capabilities of the ASC community. Appendix A lists these requirements and includes a description of ACE requirements met and those requirements that are not met for each section of this document. The Purple Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the Tri-lab community.
Resource Provisioning in SLA-Based Cluster Computing
NASA Astrophysics Data System (ADS)
Xiong, Kaiqi; Suh, Sang
Cluster computing is excellent for parallel computation and has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality of service (QoS) metrics and a fee agreed between a customer and an application service provider. It plays an important role in e-business applications. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS metrics include percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of the cluster computing resources used by an application service provider for an e-business application, which often requires parallel computation for high service performance, availability, and reliability, while satisfying the QoS and fee negotiated between the customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.
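A minimal sketch of the provisioning idea follows, under the simplifying assumption that response time scales ideally with node count (an illustrative stand-in, not the paper's queueing model): find the fewest cluster nodes whose modeled 95th-percentile response time meets the SLA.

```python
# Hedged sketch: smallest node count whose modeled p95 response time
# satisfies the SLA. The ideal 1/n speedup is an assumed toy model.
import math

def percentile(samples, p):
    s = sorted(samples)
    idx = min(len(s) - 1, math.ceil(p * (len(s) - 1) / 100))
    return s[idx]

def nodes_needed(single_node_times, sla_p95_seconds, max_nodes=64):
    for n in range(1, max_nodes + 1):
        scaled = [t / n for t in single_node_times]  # assumed ideal load sharing
        if percentile(scaled, 95) <= sla_p95_seconds:
            return n
    return None  # SLA unreachable within max_nodes

times = [0.8, 1.2, 2.0, 3.5, 5.0, 6.0, 9.5, 10.0]  # example one-node response times
n = nodes_needed(times, sla_p95_seconds=2.5)
```

A real provisioner would replace the `1/n` model with a measured or queueing-theoretic response-time function and add the fee/cost objective on top.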
Acausal measurement-based quantum computing
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki
2014-07-01
In measurement-based quantum computing, there is a natural "causal cone" among qubits of the resource state, since the measurement angle on a qubit has to depend on previous measurement results in order to correct the effect of by-product operators. If we respect the no-signaling principle, by-product operators cannot be avoided. Here we study the possibility of acausal measurement-based quantum computing by using the process matrix framework [Oreshkov, Costa, and Brukner, Nat. Commun. 3, 1092 (2012), 10.1038/ncomms2076]. We construct a resource process matrix for acausal measurement-based quantum computing, restricting local operations to projective measurements. The resource process matrix is an analog of the resource state of the standard causal measurement-based quantum computing. We find that if we restrict local operations to projective measurements, the resource process matrix is (up to a normalization factor and trivial ancilla qubits) equivalent to the decorated graph state created from the graph state of the corresponding causal measurement-based quantum computing. We also show that it is possible to consider a causal game whose causal inequality is violated by acausal measurement-based quantum computing.
Step-by-step magic state encoding for efficient fault-tolerant quantum computation
Goto, Hayato
2014-01-01
Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387
Architecture of a Message-Driven Processor,
1987-11-01
Dally, Chao, Chien, Hassoun, Horwat, Kaplan, Song, Totty, and Wills; Artificial Intelligence Laboratory and Laboratory for Computer Science, Massachusetts Institute of Technology. [Remainder of the extracted abstract is garbled; the recoverable text notes that the processor's words are 36 bits long (32 data bits + 4 tag bits).]
A Review of Resources for Evaluating K-12 Computer Science Education Programs
ERIC Educational Resources Information Center
Randolph, Justus J.; Hartikainen, Elina
2004-01-01
Since computer science education is a key to preparing students for a technologically-oriented future, it makes sense to have high quality resources for conducting summative and formative evaluation of those programs. This paper describes the results of a critical analysis of the resources for evaluating K-12 computer science education projects.…
Computing the Envelope for Stepwise Constant Resource Allocations
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Clancy, Daniel (Technical Monitor)
2001-01-01
Estimating tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource-consuming and resource-producing events into a flow network with nodes corresponding to the events and edges corresponding to the necessary predecessor links between events. The incremental solution of a staged maximum-flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. The staged algorithm has the same computational complexity as solving a maximum-flow problem on the entire flow network. This makes the method computationally feasible for use in the inner loop of search-based scheduling algorithms.
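The setting can be sketched as follows: events with flexible times in an `[earliest, latest]` window each add or remove resource. The sketch computes only the classic naive upper bound (producers scheduled as early as possible, consumers as late as possible), not the tighter max-flow envelope that is the paper's contribution; the event tuples are illustrative.

```python
# Naive upper-bound resource profile for events with flexible times.
# events: list of (earliest, latest, delta) where delta > 0 produces
# resource and delta < 0 consumes it.
def naive_upper_envelope(events, horizon):
    levels = []
    for t in range(horizon + 1):
        # Producers could have fired as soon as their window opened...
        level = sum(d for est, lst, d in events if d > 0 and est <= t)
        # ...while consumers are only unavoidable once their window closes.
        level += sum(d for est, lst, d in events if d < 0 and lst <= t)
        levels.append(level)
    return levels

events = [(0, 2, +3), (1, 4, -2), (3, 5, +1)]
env = naive_upper_envelope(events, horizon=5)
```

The max-flow construction in the paper tightens this bound by accounting for predecessor links that force some producers and consumers to move together.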
NACA Computer at the Lewis Flight Propulsion Laboratory
1951-02-21
A female computer at the National Advisory Committee for Aeronautics (NACA) Lewis Flight Propulsion Laboratory with a slide rule and Friden adding machine to make computations. The computer staff was introduced during World War II to relieve short-handed research engineers of some of the tedious computational work. The Computing Section was staffed by “computers,” young female employees, who often worked overnight when most of the tests were run. The computers obtained test data from the manometers and other instruments, made the initial computations, and plotted the data graphically. Researchers then analyzed the data and summarized the findings in a report or made modifications and ran the test again. There were over 400 female employees at the laboratory in 1944, including 100 computers. The use of computers was originally planned only for the duration of the war. The system was so successful that it was extended into the 1960s. The computers and analysts were located in the Altitude Wind Tunnel Shop and Office Building office wing during the 1940s and transferred to the new 8- by 6-Foot Supersonic Wind Tunnel in 1948.
Oklahoma's Mobile Computer Graphics Laboratory.
ERIC Educational Resources Information Center
McClain, Gerald R.
This Computer Graphics Laboratory houses an IBM 1130 computer, U.C.C. plotter, printer, card reader, two key punch machines, and seminar-type classroom furniture. A "General Drafting Graphics System" (GDGS) is used, based on repetitive use of basic coordinate and plot generating commands. The system is used by 12 institutions of higher education…
A User Assessment of Workspaces in Selected Music Education Computer Laboratories.
ERIC Educational Resources Information Center
Badolato, Michael Jeremy
A study of 120 students selected from the user populations of four music education computer laboratories was conducted to determine the applicability of current ergonomic and environmental design guidelines in satisfying the needs of users of educational computing workspaces. Eleven categories of workspace factors were organized into a…
Mobile Computer-Assisted-Instruction in Rural New Mexico.
ERIC Educational Resources Information Center
Gittinger, Jack D., Jr.
The University of New Mexico's three-year Computer Assisted Instruction Project established one mobile and five permanent laboratories offering remedial and vocational instruction in winter, 1984-85. Each laboratory has a Degem learning system with minicomputer, teacher terminal, and 32 student terminals. A Digital PDP-11 host computer runs the…
A lightweight distributed framework for computational offloading in mobile cloud computing.
Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul
2014-01-01
The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity, and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability, and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81%, and the turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.
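The offload-or-not decision underlying such frameworks can be sketched with a simple time/energy cost model. All device and network parameters below are assumed example values, not measurements from the paper.

```python
# Toy offloading decision: offload only when the cloud saves both wall-clock
# time and device energy. Parameter values are illustrative assumptions.
def should_offload(cycles, data_bytes,
                   local_hz=1e9, cloud_hz=8e9, bandwidth_bps=2e6,
                   tx_joules_per_byte=1e-6, local_joules_per_cycle=1e-9):
    local_time = cycles / local_hz
    remote_time = data_bytes * 8 / bandwidth_bps + cycles / cloud_hz
    local_energy = cycles * local_joules_per_cycle
    remote_energy = data_bytes * tx_joules_per_byte  # radio cost dominates remotely
    return remote_time < local_time and remote_energy < local_energy

heavy = should_offload(cycles=5e9, data_bytes=200_000)     # compute-heavy, small upload
chatty = should_offload(cycles=1e8, data_bytes=5_000_000)  # light compute, big upload
```

Under these numbers the compute-heavy task offloads while the transfer-heavy one stays local, which mirrors the intuition that transmission cost, not cloud speed, usually decides the question.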
COMPUTATIONAL TOXICOLOGY-WHERE IS THE DATA? ...
This talk will briefly describe the state of the data world for computational toxicology and one approach to improve the situation, called ACToR (Aggregated Computational Toxicology Resource).
LaRC local area networks to support distributed computing
NASA Technical Reports Server (NTRS)
Riddle, E. P.
1984-01-01
The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there has been a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources has increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.
NASA Technical Reports Server (NTRS)
Bates, Seth P.
1990-01-01
Students are introduced to methods and concepts for the systematic selection and evaluation of materials to be used in manufacturing specific products in industry. For this laboratory exercise, students work in groups to identify and describe a product, then proceed through the selection process to produce a short list of three candidate materials from which the item could be made. The exercise draws on knowledge of mechanical, physical, and chemical properties, common materials testing techniques, and resource management skills in finding and assessing property data. A very important part of the exercise is the students' introduction to decision-making algorithms and learning how to apply them to a complex decision-making process.
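One common decision-making algorithm for exercises like this is a weighted decision matrix: score each candidate material on each property, weight the scores, and rank the totals. The materials, properties, and weights below are invented for illustration.

```python
# Illustrative weighted decision matrix for materials selection.
# Candidate materials, property scores (0-10), and weights are made up.

def rank_candidates(scores, weights):
    """scores: {material: {property: score}}; weights should sum to 1."""
    totals = {
        material: sum(weights[p] * s for p, s in props.items())
        for material, props in scores.items()
    }
    # Highest weighted total first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

weights = {"strength": 0.5, "cost": 0.3, "machinability": 0.2}
scores = {
    "aluminum 6061": {"strength": 6, "cost": 7, "machinability": 9},
    "mild steel":    {"strength": 8, "cost": 9, "machinability": 6},
    "ABS plastic":   {"strength": 3, "cost": 10, "machinability": 8},
}
for material, total in rank_candidates(scores, weights):
    print(f"{material}: {total:.1f}")
```

The same structure extends naturally to more properties or candidates; the hard part of the exercise is justifying the weights, not the arithmetic.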
Geothermal reservoir engineering research
NASA Technical Reports Server (NTRS)
Ramey, H. J., Jr.; Kruger, P.; Brigham, W. E.; London, A. L.
1974-01-01
The Stanford University research program on the study of stimulation and reservoir engineering of geothermal resources commenced as an interdisciplinary program in September, 1972. The broad objectives of this program have been: (1) the development of experimental and computational data to evaluate the optimum performance of fracture-stimulated geothermal reservoirs; (2) the development of a geothermal reservoir model to evaluate important thermophysical, hydrodynamic, and chemical parameters based on fluid-energy-volume balances as part of standard reservoir engineering practice; and (3) the construction of a laboratory model of an explosion-produced chimney to obtain experimental data on the processes of in-place boiling, moving flash fronts, and two-phase flow in porous and fractured hydrothermal reservoirs.
A procedure for automated land use mapping using remotely sensed multispectral scanner data
NASA Technical Reports Server (NTRS)
Whitley, S. L.
1975-01-01
A system of processing remotely sensed multispectral scanner data by computer programs to produce color-coded land use maps for large areas is described. The procedure is explained, the software and the hardware are described, and an analogous example of the procedure is presented. Detailed descriptions of the multispectral scanners currently in use are provided together with a summary of the background of current land use mapping techniques. The data analysis system used in the procedure and the pattern recognition software used are functionally described. Current efforts by the NASA Earth Resources Laboratory to evaluate operationally a less complex and less costly system are discussed in a separate section.
NASA Astrophysics Data System (ADS)
Rose, K.; Bauer, J.; Baker, D.; Barkhurst, A.; Bean, A.; DiGiulio, J.; Jones, K.; Jones, T.; Justman, D.; Miller, R., III; Romeo, L.; Sabbatino, M.; Tong, A.
2017-12-01
As spatial datasets are increasingly accessible through open, online systems, the opportunity to use these resources to address a range of Earth system questions grows. Simultaneously, there is a need for better infrastructure and tools to find and utilize these resources. We will present examples of advanced online computing capabilities, hosted in the U.S. DOE's Energy Data eXchange (EDX), that address these needs for earth-energy research and development. In one study the computing team developed a custom, machine learning, big data computing tool designed to parse the web and return priority datasets to appropriate servers to develop an open-source global oil and gas infrastructure database. The results of this spatial smart search approach were validated against expert-driven, manual search results, which required a team of seven spatial scientists three months to produce. The custom machine learning tool parsed online, open systems, including zip files, ftp sites and other web-hosted resources, in a matter of days. The resulting resources were integrated into a geodatabase now hosted for open access via EDX. Beyond identifying and accessing authoritative, open spatial data resources, there is also a need for more efficient tools to ingest, perform, and visualize multi-variate, spatial data analyses. Within the EDX framework, there is a growing suite of processing, analytical and visualization capabilities that allow multi-user teams to work more efficiently in private, virtual workspaces. An example of these capabilities is a set of five custom spatio-temporal models and data tools that form NETL's Offshore Risk Modeling suite, which can be used to quantify oil spill risks and impacts. Coupling the data and advanced functions of EDX with these advanced spatio-temporal models has culminated in an integrated web-based decision-support tool.
This platform has capabilities to identify and combine data across scales and disciplines, evaluate potential environmental, social, and economic impacts, highlight knowledge or technology gaps, and reduce uncertainty for a range of 'what if' scenarios relevant to oil spill prevention efforts. These examples illustrate EDX's growing capabilities for advanced spatial data search and analysis to support geo-data science needs.
Oak Ridge National Laboratory Institutional Plan, FY 1995--FY 2000
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-11-01
This report discusses the institutional plan for Oak Ridge National Laboratory for the next five years (1995-2000). Included in this report are the: laboratory director`s statement; laboratory mission, vision, and core competencies; laboratory plan; major laboratory initiatives; scientific and technical programs; critical success factors; summaries of other plans; and resource projections.
ERIC Educational Resources Information Center
Edward, Norrie S.
1997-01-01
Evaluates the importance of realism in the screen presentation of the plant in computer-based laboratory simulations for part-time engineering students. Concludes that simulations are less effective than actual laboratories but that realism minimizes the disadvantages. The schematic approach was preferred for ease of use. (AIM)
ERIC Educational Resources Information Center
Bruce, A. Wayne
1986-01-01
Describes reasons for developing combined text and computer assisted instruction (CAI) teaching programs for delivery of continuing education to laboratory professionals, and mechanisms used for developing a CAI program on method evaluation in the clinical laboratory. Results of an evaluation of the software's cost effectiveness and instructional…
ERIC Educational Resources Information Center
Winberg, T. Mikael; Berg, C. Anders R.
2007-01-01
To enhance the learning outcomes achieved by students, learners undertook a computer-simulated activity based on an acid-base titration prior to a university-level chemistry laboratory activity. Students were categorized with respect to their attitudes toward learning. During the laboratory exercise, questions that students asked their assistant…
A Choice of Terminals: Spatial Patterning in Computer Laboratories
ERIC Educational Resources Information Center
Spennemann, Dirk; Cornforth, David; Atkinson, John
2007-01-01
Purpose: This paper seeks to examine the spatial patterns of student use of machines in each laboratory to determine whether there are underlying commonalities. Design/methodology/approach: The research was carried out by assessing user behaviour in 16 computer laboratories at a regional university in Australia. Findings: The study found that computers…
A Low Cost Microcomputer Laboratory for Investigating Computer Architecture.
ERIC Educational Resources Information Center
Mitchell, Eugene E., Ed.
1980-01-01
Described is a microcomputer laboratory at the United States Military Academy at West Point, New York, which provides easy access to non-volatile memory and a single input/output file system for 16 microcomputer laboratory positions. A microcomputer network that has a centralized data base is implemented using the concepts of computer network…
NASA Technical Reports Server (NTRS)
Young, Gerald W.; Clemons, Curtis B.
2004-01-01
The focus of this Cooperative Agreement between the Computational Materials Laboratory (CML) of the Processing Science and Technology Branch of the NASA Glenn Research Center (GRC) and the Department of Theoretical and Applied Mathematics at The University of Akron was in the areas of system development of the CML workstation environment, modeling of microgravity and earth-based material processing systems, and joint activities in laboratory projects. These efforts complement each other as the majority of the modeling work involves numerical computations to support laboratory investigations. Coordination and interaction between the modelers, system analysts, and laboratory personnel are essential toward providing the most effective simulations and communication of the simulation results. Toward these means, The University of Akron personnel involved in the agreement worked at the Applied Mathematics Research Laboratory (AMRL) in the Department of Theoretical and Applied Mathematics while maintaining a close relationship with the personnel of the Computational Materials Laboratory at GRC. Network communication between both sites has been established. A summary of the projects we undertook during the time period 9/1/03 - 6/30/04 is included.
An approach to quality and performance control in a computer-assisted clinical chemistry laboratory.
Undrill, P E; Frazer, S C
1979-01-01
A locally developed, computer-based clinical chemistry laboratory system has been in operation since 1970. This utilises a Digital Equipment Co Ltd PDP 12 and an interconnected PDP 8/F computer. Details are presented of the performance and quality control techniques incorporated into the system. Laboratory performance is assessed through analysis of results from fixed-level control sera as well as from cumulative sum methods. At a simple level the presentation may be considered purely indicative, while at a more sophisticated level statistical concepts have been introduced to aid the laboratory controller in decision-making processes. PMID:438340
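The cumulative sum (CUSUM) technique mentioned above can be sketched in a few lines: two one-sided sums accumulate deviations of control-serum results from a target value, and an out-of-control flag is raised when either sum exceeds a decision limit. The target, allowance (k), and limit (h) below are illustrative, not the laboratory's actual settings.

```python
# Minimal tabular CUSUM over control-serum results (illustrative values).

def cusum(values, target, k, h):
    """Return a per-result out-of-control flag: True once either the
    high-side or low-side cumulative sum exceeds the decision limit h."""
    hi, lo, flags = 0.0, 0.0, []
    for x in values:
        hi = max(0.0, hi + (x - target - k))  # accumulates upward drift
        lo = max(0.0, lo + (target - x - k))  # accumulates downward drift
        flags.append(hi > h or lo > h)
    return flags

# A control serum measured daily; a slow upward drift eventually flags.
results = [100.1, 99.8, 100.3, 101.2, 101.5, 101.9, 102.4]
print(cusum(results, target=100.0, k=0.5, h=2.0))
```

Unlike fixed-level control limits, the CUSUM detects small sustained shifts long before any single result looks alarming, which is why the system uses both.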
An approach for heterogeneous and loosely coupled geospatial data distributed computing
NASA Astrophysics Data System (ADS)
Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui
2010-07-01
Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a construct termed the Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing, as well as a method for equivalent transformation from a global geospatial query to distributed local queries at the SQL (Structured Query Language) level to solve the coordinating problem among heterogeneous resources, is presented. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
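The equivalent-distributed-program idea can be sketched as a fan-out/merge: the global query is rewritten as identical local queries executed against each autonomous peer, and the partial results are merged. The peers, schema, and query below are invented for illustration; a real system would emit SQL against heterogeneous GIS back ends.

```python
# Hypothetical fan-out/merge of a global geospatial query across peers.

peers = {  # each peer holds an autonomous slice of the geospatial data
    "peer_a": [{"id": 1, "type": "road", "length_km": 4.2}],
    "peer_b": [{"id": 2, "type": "road", "length_km": 1.1},
               {"id": 3, "type": "river", "length_km": 9.0}],
}

def local_query(rows, feature_type):
    """The per-peer fragment (stands in for SELECT ... WHERE type = ?)."""
    return [r for r in rows if r["type"] == feature_type]

def global_query(feature_type):
    """Fan the query out to every peer, then merge the partial results."""
    merged = []
    for rows in peers.values():
        merged.extend(local_query(rows, feature_type))
    return merged

print(len(global_query("road")))  # matching features across all peers
```

The transformation is "equivalent" in the sense that the merged result matches what a single centralized query over the union of all peer data would return.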
Masanza, Monica Musenero; Nqobile, Ndlovu; Mukanga, David; Gitta, Sheba Nakacubo
2010-12-03
Laboratory is one of the core capacities that countries must develop for the implementation of the International Health Regulations (IHR[2005]), since laboratory services play a major role in all the key processes of detection, assessment, response, notification, and monitoring of events. While developed countries easily adapt their well-organized routine laboratory services, resource-limited countries need considerable capacity building as many gaps still exist. In this paper, we discuss some of the efforts made by the African Field Epidemiology Network (AFENET) in supporting laboratory capacity development in the Africa region. The efforts range from promoting graduate-level training programs to building advanced technical, managerial and leadership skills to in-service short course training for peripheral laboratory staff. A number of specific projects focus on external quality assurance, basic laboratory information systems, strengthening laboratory management towards accreditation, equipment calibration, harmonization of training materials, networking and provision of pre-packaged laboratory kits to support outbreak investigation. Available evidence indicates a positive effect of these efforts on laboratory capacity in the region. However, many opportunities exist, especially to support the roll-out of these projects as well as to attend to some additional critical areas such as biosafety and biosecurity. We conclude that AFENET's approach of strengthening national and sub-national systems provides a model that could be adopted in resource-limited settings such as sub-Saharan Africa.
Publications - MIRL Publications Series | Alaska Division of Geological & Geophysical Surveys
Reports of the Mineral Industry Research Laboratory present results of research on a wide range of topics associated with mining, mineral, petroleum, and coal resources.
Sandia National Laboratories/New Mexico Environmental Baseline update--Revision 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-07-01
This report provides a baseline update containing the background information necessary for personnel to prepare clear and concise NEPA documentation. The environment of the Sandia National Laboratories is described in this document, including the ecology, meteorology, climatology, seismology, emissions, cultural resources and land use, visual resources, noise pollution, transportation, and socioeconomics.
Telecommunications Handbook: Connecting to NEWTON. Version 1.4.
ERIC Educational Resources Information Center
Baker, Christopher; And Others
This handbook was written for use with the Argonne National Laboratory's electronic bulletin board system (BBS) called NEWTON, which is designed to create an electronic network that will link scientists, teachers, and students with the many diversified resources of the Argonne National Laboratory. The link to Argonne will include such resources as…
Learning Laboratories for Unemployed, Out-of-School Youth. Health Education, Part 2.
ERIC Educational Resources Information Center
New York State Education Dept., Albany. Bureau of Continuing Education Curriculum Development.
The learning activities suggested in this publication supplement those found in the curriculum resource handbook "Learning Laboratories for Unemployed Out-of-School Youth." This phase of the program deals on a practical level with various health problems in short, achievable units. Activities keyed to the curriculum resource handbook and followed…
NASA Center for Computational Sciences: History and Resources
NASA Technical Reports Server (NTRS)
2000-01-01
The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.
Integrating Xgrid into the HENP distributed computing model
NASA Astrophysics Data System (ADS)
Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.
2008-07-01
Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), putting task and job submission effortlessly within reach of users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.
Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.
Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav
2012-01-01
Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters and additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources; to reach effective conclusions quickly and efficiently, virtualization of resources and computation on a pay-as-you-go basis (together termed "cloud computing") has recently emerged. The collective resources of the datacenter, including both hardware and software, can be made available publicly, being then termed a public cloud, with the resources provided in a virtual mode to clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results are transmitted to the user, and the environment is finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection, are discussed with reference to traditional workflows.
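The provision/compute/teardown lifecycle described above maps naturally onto a context-manager pattern, sketched below with a stand-in for a real cloud API; all names and the "result" are hypothetical.

```python
# Sketch of the cloud lifecycle: provision a virtual environment, run the
# user's task against it, and guarantee teardown when the task completes.
from contextlib import contextmanager

@contextmanager
def virtual_environment(cpus, storage_gb):
    """Stand-in for a cloud provider API: provision, yield, then delete."""
    env = {"cpus": cpus, "storage_gb": storage_gb, "alive": True}
    try:
        yield env             # the user's task runs against the environment
    finally:
        env["alive"] = False  # environment is deleted after all tasks finish

with virtual_environment(cpus=32, storage_gb=500) as env:
    result = f"aligned sequences on {env['cpus']} vCPUs"

print(result)
```

The `finally` clause mirrors the pay-as-you-go economics: the environment exists, and is billed, only for the duration of the task.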
30 CFR 1206.154 - Determination of quantities and qualities for computing royalties.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 3 2014-07-01 2014-07-01 false Determination of quantities and qualities for computing royalties. 1206.154 Section 1206.154 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE PRODUCT VALUATION Federal Gas § 1206.154 Determination...
30 CFR 1206.154 - Determination of quantities and qualities for computing royalties.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Determination of quantities and qualities for computing royalties. 1206.154 Section 1206.154 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE PRODUCT VALUATION Federal Gas § 1206.154 Determination...
30 CFR 1206.154 - Determination of quantities and qualities for computing royalties.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 3 2013-07-01 2013-07-01 false Determination of quantities and qualities for computing royalties. 1206.154 Section 1206.154 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE PRODUCT VALUATION Federal Gas § 1206.154 Determination...
Autonomous space processor for orbital debris
NASA Technical Reports Server (NTRS)
Ramohalli, Kumar; Campbell, David; Brockman, Jeff P.; Carter, Bruce; Donelson, Leslie; John, Lawrence E.; Marine, Micky C.; Rodina, Dan D.
1989-01-01
This work continues to develop advanced designs toward the ultimate goal of a GETAWAY SPECIAL to demonstrate economical removal of orbital debris utilizing local resources in orbit. The fundamental technical feasibility was demonstrated last year through theoretical calculations, quantitative computer animation, a solar focal point cutter, a robotic arm design and a subscale model. During this reporting period, several improvements are made in the solar cutter, such as auto track capabilities, better quality reflectors and a more versatile framework. The major advance has been in the design, fabrication and working demonstration of a ROBOTIC ARM that has several degrees of freedom. The functions were specifically tailored for the orbital debris handling. These advances are discussed here. Also a small fraction of the resources were allocated towards research in flame augmentation in SCRAMJETS for the NASP. Here, the fundamental advance was the attainment of Mach numbers up to 0.6 in the flame zone and a vastly improved injection system; the current work is expected to achieve supersonic combustion in the laboratory and an advanced monitoring system.
Electronic and Optical Properties of Novel Phases of Silicon and Silicon-Based Derivatives
NASA Astrophysics Data System (ADS)
Ong, Chin Shen; Choi, Sangkook; Louie, Steven
2014-03-01
The vast majority of solar cells in the market today are made from crystalline silicon in the diamond-cubic phase. Nonetheless, diamond-cubic Si has an intrinsic disadvantage: it has an indirect band gap with a large energy difference between the direct gap and the indirect gap. In this work, we perform a careful study of the electronic and optical properties of a newly discovered cubic-Si20 phase of Si that is found to sport a direct band gap. In addition, other silicon-based derivatives have also been discovered and found to be thermodynamically metastable. We carry out ab initio GW and GW-BSE calculations for the quasiparticle excitations and optical spectra, respectively, of these new phases of silicon and silicon-based derivatives. This work was supported by NSF grant No. DMR10-1006184 and U.S. DOE under Contract No. DE-AC02-05CH11231. Computational resources have been provided by DOE at Lawrence Berkeley National Laboratory's NERSC facility and the NSF through XSEDE resources at NICS.
Managing resource capacity using hybrid simulation
NASA Astrophysics Data System (ADS)
Ahmad, Norazura; Ghani, Noraida Abdul; Kamil, Anton Abdulbasah; Tahar, Razman Mat
2014-12-01
Due to the diversity of patient flows and the interdependency of the emergency department (ED) with other units in the hospital, the use of analytical models is not practical for ED modeling. One effective approach to studying the dynamic complexity of ED problems is to develop a computer simulation model that can be used to understand the structure and behavior of the system. A holistic model built with discrete-event simulation (DES) alone would be too complex, while one built with system dynamics (SD) alone would lack the detailed characteristics of the system. This paper discusses the combination of DES and SD in order to get a better representation of the actual system than either modeling paradigm provides on its own. The model is developed using AnyLogic software, enabling the study of patient flows and the complex interactions among hospital resources in ED operations. Results from the model show that patients' length of stay is influenced by laboratory turnaround time, bed occupancy rate and ward admission rate.
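The discrete-event side of such a hybrid model can be sketched with a minimal single-doctor queue in which laboratory turnaround delays discharge; a real AnyLogic model would add stochastic arrivals, multiple resources, and an SD layer for aggregate quantities like bed occupancy. All times below are invented.

```python
# Minimal deterministic sketch of ED patient flow: patients arrive at a
# fixed interval, queue for one doctor, and wait for a lab result before
# discharge. Returns mean length of stay in minutes.

def simulate(n_patients, interarrival=10.0, treatment=15.0, lab_turnaround=30.0):
    doctor_free_at = 0.0
    stays = []
    for i in range(n_patients):
        arrival = i * interarrival
        start = max(arrival, doctor_free_at)         # queue for the doctor
        doctor_free_at = start + treatment
        discharge = doctor_free_at + lab_turnaround  # lab delays discharge
        stays.append(discharge - arrival)
    return sum(stays) / len(stays)

print(round(simulate(5), 1))  # → 55.0
```

Even this toy version reproduces the abstract's qualitative finding: length of stay grows directly with laboratory turnaround time and with congestion at the shared resource.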
Nkengasong, John N; Mesele, Tsehaynesh; Orloff, Sherry; Kebede, Yenew; Fonjungo, Peter N; Timperi, Ralph; Birx, Deborah
2009-06-01
Medical laboratory services are an essential, yet often neglected, component of health systems in developing countries. Their central role in public health, disease control and surveillance, and patient management is often poorly recognized by governments and donors. However, medical laboratory services in developing countries can be strengthened by leveraging funding from other sources of HIV/AIDS prevention, care, surveillance, and treatment programs. Strengthening these services will require coordinated efforts by national governments and partners and can be achieved by establishing and implementing national laboratory strategic plans and policies that integrate laboratory systems to combat major infectious diseases. These plans should take into account policy, legal, and regulatory frameworks; the administrative and technical management structure of the laboratories; human resources and retention strategies; laboratory quality management systems; monitoring and evaluation systems; procurement and maintenance of equipment; and laboratory infrastructure enhancement. Several countries have developed or are in the process of developing their laboratory plans, and others, such as Ethiopia, have implemented and evaluated their plan.
Tools and Techniques for Measuring and Improving Grid Performance
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Frumkin, M.; Smith, W.; VanderWijngaart, R.; Wong, P.; Biegel, Bryan (Technical Monitor)
2001-01-01
This viewgraph presentation provides information on NASA's geographically dispersed computing resources, and the various methods by which the disparate technologies are integrated within a nationwide computational grid. Many large-scale science and engineering projects are accomplished through the interaction of people, heterogeneous computing resources, information systems and instruments at different locations. The overall goal is to facilitate the routine interactions of these resources to reduce the time spent in design cycles, particularly for NASA's mission critical projects. The IPG (Information Power Grid) seeks to implement NASA's diverse computing resources in a fashion similar to the way in which electric power is made available.
SaaS enabled admission control for MCMC simulation in cloud computing infrastructures
NASA Astrophysics Data System (ADS)
Vázquez-Poletti, J. L.; Moreno-Vozmediano, R.; Han, R.; Wang, W.; Llorente, I. M.
2017-02-01
Markov Chain Monte Carlo (MCMC) methods are widely used in the field of simulation and modelling of materials, producing applications that require a great amount of computational resources. Cloud computing represents a seamless source for these resources in the form of HPC. However, resource over-consumption can be an important drawback, specially if the cloud provision process is not appropriately optimized. In the present contribution we propose a two-level solution that, on one hand, takes advantage of approximate computing for reducing the resource demand and on the other, uses admission control policies for guaranteeing an optimal provision to running applications.
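A bare-bones Metropolis sampler shows why MCMC workloads consume so many cycles: each iteration is cheap, but huge numbers of them are needed, which is exactly what makes elastic cloud provisioning attractive. The standard normal target below is purely illustrative, not the paper's materials-simulation workload.

```python
# Minimal Metropolis sampler targeting a standard normal distribution.
import math
import random

def metropolis(n_steps, step=1.0, seed=42):
    random.seed(seed)  # seeded for reproducibility
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + random.uniform(-step, step)
        # Accept with probability min(1, pi(proposal)/pi(x)) for pi = N(0,1)
        if random.random() < math.exp((x * x - proposal * proposal) / 2.0):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(20000)
print(round(sum(samples) / len(samples), 2))  # sample mean, near 0
```

Chains like this are embarrassingly parallel across seeds, so over-provisioning is easy; the admission-control policies the paper proposes aim to keep that resource demand matched to what the simulation actually needs.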
Elementary and Advanced Computer Projects for the Physics Classroom and Laboratory
1992-12-01
The software packages used are SPF/PC, MS Word, n3, Symphony, Mathematica, and FORTRAN. The authors' programs assist data analysis in particular laboratory experiments and make use of Monte Carlo and other numerical techniques in computer simulation. FORTRAN remains the language of science and engineering in industry and government laboratories (although C is becoming a powerful competitor). RM/FORTRAN (cost $400
Hathaway, R.M.; McNellis, J.M.
1989-01-01
Investigating the occurrence, quantity, quality, distribution, and movement of the Nation 's water resources is the principal mission of the U.S. Geological Survey 's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division 's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division 's Distributed information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U. S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division 's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously. 
The new approaches and expanded use of computers will require substantial increases in the quantity and sophistication of the Division's computer resources. The requirements presented in this report will be used to develop technical specifications that describe the computer resources needed during the 1990s. (USGS)
Setting Up a Grid-CERT: Experiences of an Academic CSIRT
ERIC Educational Resources Information Center
Moller, Klaus
2007-01-01
Purpose: Grid computing has often been heralded as the next logical step after the worldwide web. Users of grids can access dynamic resources such as computer storage and use the computing resources of computers under the umbrella of a virtual organisation. Although grid computing is often compared to the worldwide web, it is vastly more complex…
A Paperless Lab Manual - Lessons Learned
NASA Astrophysics Data System (ADS)
Hatten, Daniel L.; Hatten, Maggie W.
1999-10-01
Every freshman entering Rose-Hulman Institute of Technology is equipped with a laptop computer and a software package that allows classroom and laboratory instructors the freedom to make computer-based assignments, publish course materials in electronic form, etc. All introductory physics laboratories and many of our classrooms are networked, and students routinely take their laptop computers to class and lab. The introductory physics laboratory manual was converted to HTML in the summer of 1997 and was made available to students over the Internet, instead of as a printed paper manual, during the 1998-99 school year. The aim was to reduce paper costs and allow timely updates of the laboratory experiments. A poll conducted at the end of the school year showed a generally positive student response to the online laboratory manual, with some reservations.
Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua
2014-01-01
This paper presents a framework for mobile transparent computing. It extends PC-based transparent computing to mobile terminals. Since the resources include different kinds of operating systems and user data that are stored on a remote server, how to manage these network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agents for mobile transparent computing (MTC) to devise a method of shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable. PMID:24883353
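The abstract above describes a three-layer architecture (user, manage, resource) in which a mobile virtual terminal requests remote resources through a management layer. A minimal sketch of that layering follows; all class and method names are illustrative assumptions, not the authors' API.

```python
# Hypothetical sketch of the three-layer SRSM structure: a ResourceLayer
# holding server-side resources, a ManageLayer routing requests, and a
# MobileVirtualTerminal in the user layer. Names are invented for illustration.

class ResourceLayer:
    """Holds named resources (e.g., OS images, user data) on the server side."""
    def __init__(self):
        self._store = {}

    def register(self, name, payload):
        self._store[name] = payload

    def fetch(self, name):
        return self._store.get(name)


class ManageLayer:
    """Virtual resource management: routes terminal requests to resources."""
    def __init__(self, resources):
        self._resources = resources

    def request(self, user, name):
        payload = self._resources.fetch(name)
        if payload is None:
            raise KeyError(f"no such resource: {name}")
        return {"user": user, "resource": name, "payload": payload}


class MobileVirtualTerminal:
    """User layer: a thin terminal that asks the manage layer for resources."""
    def __init__(self, user, manager):
        self.user = user
        self.manager = manager

    def open(self, name):
        return self.manager.request(self.user, name)


resources = ResourceLayer()
resources.register("android-image", "qemu-disk-blob")
terminal = MobileVirtualTerminal("alice", ManageLayer(resources))
session = terminal.open("android-image")
print(session["payload"])  # the payload fetched through the manage layer
```

The point of the separation is that the terminal never touches storage directly; every access is mediated by the manage layer, which is where the paper's QEMU virtualization and mobile-agent machinery would sit.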
Resource Letter SPE-1: Single-Photon Experiments in the Undergraduate Laboratory
NASA Astrophysics Data System (ADS)
Galvez, Enrique J.
2014-11-01
This Resource Letter lists undergraduate-laboratory adaptations of landmark optical experiments on the fundamentals of quantum physics. Journal articles and websites give technical details of the adaptations, which offer students unique hands-on access to testing fundamental concepts and predictions of quantum mechanics. A selection of the original research articles that led to the implementations is included. These developments have motivated a rethinking of the way quantum mechanics is taught, so this Resource Letter also lists textbooks that provide these new approaches.
Fujiki, Akiko; Kato, Seiya
2008-06-01
The international training course on TB laboratory work for national tuberculosis programs (NTPs) has been conducted at the Research Institute of Tuberculosis since 1975, funded by the Japan International Cooperation Agency in collaboration with the WHO Western Pacific Regional Office. The aim of the course is to train key personnel in the TB laboratory field for NTPs in resource-limited countries. The course has trained 265 national key personnel in TB laboratory service from 57 resource-limited countries over the last 33 years. The number of participants trained may sound too small in the fight against the large TB problem in resource-limited countries. However, every participant plays an important role as a core and catalyst for the TB control program in his or her own country after returning home. The curriculum covers the technical aspects of TB examination, mainly sputum microscopy. In addition, since microscopy service is provided at many centers deployed over a widely spread area, the managerial aspect of maintaining quality TB laboratory work at the field laboratory is another component of the curriculum. Effective teaching methods using materials such as artificial sputum, which is useful for panel slide preparation, and technical manuals with illustrations and pictures of training procedures have been developed through the experience of the course. These manuals are highly appreciated and widely used by front-line TB workers. The course has also contributed to the expansion of the EQA (External Quality Assessment) system for AFB microscopy to improve the quality of the TB laboratory service of NTPs. The course is well known not only for having a long history, but also for its unique learning method emphasizing "Participatory Training", particularly in practicum sessions to master the skills of AFB microscopy. The method for learning AFB microscopy, which was developed by the course, was published as a training manual by IUATLD, RIT, and USAID.
As mentioned, the course has been contributing to human resource capacity building, including management of laboratory services, to improve NTPs in resource-limited countries. Currently, transfer of technology for culture examination and drug susceptibility testing to resource-limited countries is being attempted because of the occurrence of MDR-TB (multidrug-resistant tuberculosis) and XDR-TB (extensively drug-resistant tuberculosis) cases. However, since sputum smear examination is the most effective method of detecting infectious TB, the writers believe it is still a core component of TB control, unless a new diagnostic tool that is practicable and effective in resource-limited countries is developed. Therefore, the course will keep its focus on smear examination as the basic curriculum. The course is highly appreciated by international experts, and it is our responsibility to meet their expectations.
A cost-effective approach to establishing a surgical skills laboratory.
Berg, David A; Milner, Richard E; Fisher, Carol A; Goldberg, Amy J; Dempsey, Daniel T; Grewal, Harsh
2007-11-01
Recent studies comparing inexpensive low-fidelity box trainers to expensive computer-based virtual reality systems demonstrate similar acquisition of surgical skills and transferability to the clinical setting. With new mandates emerging that all surgical residency programs have access to a surgical skills laboratory, we describe our cost-effective approach to teaching basic and advanced open and laparoscopic skills utilizing inexpensive bench models, box trainers, and animate models. Open models (basic skills, bowel anastomosis, vascular anastomosis, trauma skills) and laparoscopic models (basic skills, cholecystectomy, Nissen fundoplication, suturing and knot tying, advanced in vivo skills) are constructed using a combination of materials found in our surgical research laboratories, retail stores, or donated by industry. Expired surgical materials are obtained from our hospital operating room and animal organs from food-processing plants. In vivo models are performed in an approved research facility. Operation, maintenance, and administration of the surgical skills laboratory are coordinated by a salaried manager, and instruction is the responsibility of all surgical faculty from our institution. Overall, the cost analyses of our initial startup costs and operational expenditures over a 3-year period revealed a progressive decrease in yearly cost per resident (2002-2003, $1,151; 2003-2004, $1,049; and 2004-2005, $982). Our approach to surgical skills education can serve as a template for any surgery program with limited financial resources.
Segmentation-less Digital Rock Physics
NASA Astrophysics Data System (ADS)
Tisato, N.; Ikeda, K.; Goldfarb, E. J.; Spikes, K. T.
2017-12-01
In the last decade, Digital Rock Physics (DRP) has become an avenue to investigate physical and mechanical properties of geomaterials. DRP offers the advantage of simulating laboratory experiments on numerical samples that are obtained from analytical methods. Potentially, DRP could save part of the time and resources that are allocated to performing complicated laboratory tests. Like classic laboratory tests, the goal of DRP is to accurately estimate physical properties of rocks, such as hydraulic permeability or elastic moduli. Nevertheless, the physical properties of samples imaged using micro-computed tomography (μCT) are typically estimated through segmentation of the μCT dataset. Segmentation proves to be a challenging and arbitrary procedure that typically leads to inaccurate estimates of physical properties. Here we present a novel technique to extract physical properties from a μCT dataset without the use of segmentation. We show examples in which we use the segmentation-less method to simulate elastic wave propagation and pressure wave diffusion to estimate elastic properties and permeability, respectively. The proposed method takes advantage of effective medium theories and uses the density and the porosity that are measured in the laboratory to constrain the results. We discuss the results and highlight that segmentation-less DRP is more accurate than segmentation-based DRP approaches and theoretical modeling for the studied rock. In conclusion, the segmentation-less approach presented here seems to be a promising method to improve accuracy and ease the overall workflow of DRP.
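The core idea of the segmentation-less approach is to avoid binary rock/pore labels: each μCT voxel keeps a continuous property value, calibrated against laboratory measurements, and effective medium theory then supplies per-voxel moduli. A minimal sketch under assumed choices (a linear grayscale-to-porosity map and a Voigt-Reuss-Hill average, neither of which is claimed to be the authors' exact scheme):

```python
import numpy as np

# Grayscale volume -> per-voxel porosity, scaled so the mean porosity matches
# the laboratory measurement (the "no segmentation" idea: no binary labels).
rng = np.random.default_rng(0)
ct = rng.uniform(0.0, 1.0, size=(16, 16, 16))   # stand-in for a uCT volume

phi_lab = 0.20                                   # laboratory porosity
phi = ct * (phi_lab / ct.mean())                 # linear map; mean equals phi_lab

# Per-voxel elastic bounds from mineral and fluid end members (Voigt upper,
# Reuss lower), averaged (Hill) -- one effective-medium choice, for illustration.
K_min, K_fl = 37.0, 2.2                          # bulk moduli in GPa (quartz, water)
K_voigt = (1 - phi) * K_min + phi * K_fl
K_reuss = 1.0 / ((1 - phi) / K_min + phi / K_fl)
K_hill = 0.5 * (K_voigt + K_reuss)

print(round(float(phi.mean()), 3))               # 0.2 by construction
```

The per-voxel `K_hill` field could then feed a wave-propagation or diffusion solver directly, which is how the segmentation-less workflow sidesteps the arbitrary thresholding step.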
Laboratory for Energy-Related Health Research annual report, fiscal year 1986
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abell, D.L.
1989-02-01
This report to the US Department of Energy summarizes research activities for the period 1 October 1985--30 September 1986 at the Laboratory for Energy-related Health Research (LEHR), which is operated by the University of California, Davis. The laboratory's research objective is to provide new knowledge for an improved understanding of the potential bioenvironmental and occupational health problems associated with energy utilization, to contribute to the safe and healthful development of energy resources for the benefit of mankind. This research encompasses several areas of basic investigation that relate to toxicological and biomedical problems associated with potentially toxic chemical and radioactive substances and ionizing radiation, with particular emphasis on carcinogenicity. Studies of systemic injury and nuclear medical diagnostic and therapeutic methods are also involved. This is an interdisciplinary program spanning physics, chemistry, environmental engineering, biophysics and biochemistry, cellular and molecular biology, physiology, immunology, toxicology, both human and veterinary medicine, nuclear medicine, pathology, hematology, radiation biology, reproductive biology, oncology, biomathematics, and computer science. The principal themes of the research at LEHR center around the biology, radiobiology, and health status of the skeleton and its blood-forming constituents; the toxicology and properties of airborne materials; the beagle as an experimental animal model; carcinogenesis; and the scaling of results from laboratory animal studies to man for appropriate assessment of risk.
Networking Micro-Processors for Effective Computer Utilization in Nursing
Mangaroo, Jewellean; Smith, Bob; Glasser, Jay; Littell, Arthur; Saba, Virginia
1982-01-01
Networking as a social entity has important implications for maximizing computer resources for improved utilization in nursing. This paper describes the process of networking complementary resources at three institutions: Prairie View A&M University, Texas A&M University, and the University of Texas School of Public Health, which has effected greater utilization of computers at the college. The results achieved in this project should have implications for nurses, users, and consumers in the development of computer resources.
Managing Laboratory Data Using Cloud Computing as an Organizational Tool
ERIC Educational Resources Information Center
Bennett, Jacqueline; Pence, Harry E.
2011-01-01
One of the most significant difficulties encountered when directing undergraduate research and developing new laboratory experiments is how to efficiently manage the data generated by a number of students. Cloud computing, where both software and computer files reside online, offers a solution to this data-management problem and allows researchers…
Virtual Computing Laboratories: A Case Study with Comparisons to Physical Computing Laboratories
ERIC Educational Resources Information Center
Burd, Stephen D.; Seazzu, Alessandro F.; Conway, Christopher
2009-01-01
Current technology enables schools to provide remote or virtual computing labs that can be implemented in multiple ways ranging from remote access to banks of dedicated workstations to sophisticated access to large-scale servers hosting virtualized workstations. This paper reports on the implementation of a specific lab using remote access to…
ERIC Educational Resources Information Center
Pritchard, Benjamin P.; Simpson, Scott; Zurek, Eva; Autschbach, Jochen
2014-01-01
A computational experiment investigating the ¹H and ¹³C nuclear magnetic resonance (NMR) chemical shifts of molecules with unpaired electrons has been developed and implemented. This experiment is appropriate for an upper-level undergraduate laboratory course in computational, physical, or inorganic chemistry. The…
Integration of Computer Technology Into an Introductory-Level Neuroscience Laboratory
ERIC Educational Resources Information Center
Evert, Denise L.; Goodwin, Gregory; Stavnezer, Amy Jo
2005-01-01
We describe 3 computer-based neuroscience laboratories. In the first 2 labs, we used commercially available interactive software to enhance the study of functional and comparative neuroanatomy and neurophysiology. In the remaining lab, we used customized software and hardware in 2 psychophysiological experiments. With the use of the computer-based…
Discovery & Interaction in Astro 101 Laboratory Experiments
NASA Astrophysics Data System (ADS)
Maloney, Frank Patrick; Maurone, Philip; DeWarf, Laurence E.
2016-01-01
The availability of low-cost, high-performance computing hardware and software has transformed the manner by which astronomical concepts can be re-discovered and explored in a laboratory that accompanies an astronomy course for arts students. We report on a strategy, begun in 1992, for allowing each student to understand fundamental scientific principles by interactively confronting astronomical and physical phenomena, through direct observation and by computer simulation. These experiments have evolved as: (a) the quality and speed of the hardware have greatly increased; (b) the corresponding hardware costs have decreased; (c) the students have become computer and Internet literate; and (d) the importance of computationally and scientifically literate arts graduates in the workplace has increased. We present the current suite of laboratory experiments, and describe the nature, procedures, and goals of this two-semester laboratory for liberal arts majors at the Astro 101 university level.
NASA Astrophysics Data System (ADS)
Anderson, Delia Marie Castro
Computer literacy and use have become commonplace in our colleges and universities. In an environment that demands the use of technology, educators should be knowledgeable of the components that make up the overall computer attitude of students and be willing to investigate the processes and techniques of effective teaching and learning that can take place with computer technology. The purpose of this study is twofold. First, it investigates the relationship between computer attitudes and gender, ethnicity, and computer experience. Second, it addresses the question of whether, and to what extent, students' attitudes toward computers change over a 16-week period in an undergraduate microbiology course that supplements the traditional lecture with computer-driven assignments. Multiple regression analyses, using data from the Computer Attitudes Scale (Loyd & Loyd, 1985), showed that, in the experimental group, no significant relationships were found between computer anxiety and gender or ethnicity, or between computer confidence and gender or ethnicity. However, students who used computers the longest (p = .001) and who were self-taught (p = .046) had the lowest computer anxiety levels. Likewise, students who used computers the longest (p = .001) and who were self-taught (p = .041) had the highest confidence levels. No significant relationships between computer liking, usefulness, or the use of Internet resources and gender, ethnicity, or computer experience were found. Dependent t-tests were performed to determine whether computer attitude scores (pretest and posttest) increased over a 16-week period for students who had been exposed to computer-driven assignments and other Internet resources. Results showed that students in the experimental group were less anxious about working with computers and considered computers to be more useful. In the control group, no significant changes in computer anxiety, confidence, liking, or usefulness were noted.
Overall, students in the experimental group, who responded to the use of Internet Resources Survey, were positive (mean of 3.4 on the 4-point scale) toward their use of Internet resources which included the online courseware developed by the researcher. Findings from this study suggest that (1) the digital divide with respect to gender and ethnicity may be narrowing, and (2) students who are exposed to a course that augments computer-driven courseware with traditional teaching methods appear to have less anxiety, have a clearer perception of computer usefulness, and feel that online resources enhance their learning.
Simulated Laboratory in Digital Logic.
ERIC Educational Resources Information Center
Cleaver, Thomas G.
Design of computer circuits used to be a pencil-and-paper task followed by laboratory tests, but logic circuit design can now be done in half the time, as the engineer accesses a program that simulates the behavior of real digital circuits and does all the wiring and testing on his computer screen. A simulated laboratory in digital logic has been…
Rationale for cost-effective laboratory medicine.
Robinson, A
1994-01-01
There is virtually universal consensus that the health care system in the United States is too expensive and that costs need to be limited. Similar to health care costs in general, clinical laboratory expenditures have increased rapidly as a result of increased utilization and inflationary trends within the national economy. Economic constraints require that a compromise be reached between individual welfare and limited societal resources. Public pressure and changing health care needs have precipitated both subtle and radical laboratory changes to more effectively use allocated resources. Responsibility for excessive laboratory use can be assigned primarily to the following four groups: practicing physicians, physicians in training, patients, and the clinical laboratory. The strategies to contain escalating health care costs have ranged from individualized physician education programs to government intervention. Laboratories have responded to the fiscal restraints imposed by prospective payment systems by attempting to reduce operational costs without adversely impacting quality. Although cost containment directed at misutilization and overutilization of existing services has conserved resources, to date, an effective cost control mechanism has yet to be identified and successfully implemented on a grand enough scale to significantly impact health care expenditures in the United States. PMID:8055467
Computational On-Chip Imaging of Nanoparticles and Biomolecules using Ultraviolet Light.
Daloglu, Mustafa Ugur; Ray, Aniruddha; Gorocs, Zoltan; Xiong, Matthew; Malik, Ravinder; Bitan, Gal; McLeod, Euan; Ozcan, Aydogan
2017-03-09
Significant progress in characterization of nanoparticles and biomolecules was enabled by the development of advanced imaging equipment with extreme spatial resolution and sensitivity. To perform some of these analyses outside of well-resourced laboratories, it is necessary to create robust and cost-effective alternatives to existing high-end laboratory-bound imaging and sensing equipment. Towards this aim, we have designed a holographic on-chip microscope operating at an ultraviolet (UV) illumination wavelength of 266 nm. The increased forward scattering from nanoscale objects at this short wavelength has enabled us to detect individual sub-30 nm nanoparticles over a large field-of-view of >16 mm² using an on-chip imaging platform, where the sample is placed ≤0.5 mm away from the active area of an opto-electronic sensor-array, without any lenses in between. The strong absorption of this UV wavelength by biomolecules, including nucleic acids and proteins, has further enabled high-contrast imaging of nanoscopic aggregates of biomolecules, e.g., of the enzyme Cu/Zn-superoxide dismutase, abnormal aggregation of which is linked to amyotrophic lateral sclerosis (ALS), a fatal neurodegenerative disease. This UV-based wide-field computational imaging platform could be valuable for numerous applications in biomedical sciences and environmental monitoring, including disease diagnostics, viral load measurements, as well as air- and water-quality assessment.
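Lensfree on-chip holography of the kind described above is "computational" because the image is recovered numerically: the recorded hologram is back-propagated to the sample plane, conventionally with the angular spectrum method. A minimal sketch of that propagation step follows; the parameter values (wavelength, pixel pitch, distance) are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal angular-spectrum propagator, the standard numerical reconstruction
# step in lensfree holographic on-chip microscopy. All units must agree
# (here: micrometers). Values below are illustrative assumptions.

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field by distance z (negative z = back-propagate)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)               # spatial frequencies, cycles/um
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)           # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Sanity check: propagating forward then backward recovers the field
# (within the propagating-wave band).
wavelength, pitch = 0.266, 1.12                   # um: UV illumination, sensor pitch
hologram = np.ones((64, 64), complex)
hologram[28:36, 28:36] = 0.2                      # a toy absorbing object
recovered = angular_spectrum(
    angular_spectrum(hologram, wavelength, pitch, 500.0),
    wavelength, pitch, -500.0)
print(np.allclose(recovered, hologram, atol=1e-6))  # True
```

In a real reconstruction only the intensity of the hologram is measured, so phase-retrieval iterations are layered on top of this propagator; the sketch shows just the transport step.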
An electronic laboratory notebook based on HTML forms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marstaller, J.E.; Zorn, M.D.
The electronic notebook records information that has traditionally been kept in handwritten laboratory notebooks. It keeps detailed information about the progress of the research, such as the optimization of primers, the screening of the primers and, finally, the mapping of the probes. The notebook provides two areas of service: data entry, and reviewing of data at all stages. World Wide Web browsers with HTML-based forms provide a fast and easy mechanism to create forms-based user interfaces. The computer scientist can sit down with the biologist and rapidly make changes in response to the user's comments. Furthermore, the HTML forms work equally well on a number of different hardware platforms; thus the biologists may continue using their Macintosh computers and find a familiar interface if they have to work on a Unix workstation. The web browser can be run from any machine connected to the Internet; thus the users are free to enter or view information even away from their labs, at home or while on travel. Access can be restricted by password and other means to secure the confidentiality of the data. A bonus that is hard to implement otherwise is the facile connection to outside resources. Linking local information to data in public databases is only a hypertext link away, with little or no additional programming effort.
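The forms-based entry idea above can be sketched with modern standard-library tools instead of 1990s server scripting: a handler serves an HTML form and stores the submitted, URL-encoded fields. The field names ("experiment", "notes") and the in-memory store are invented for illustration; the original notebook's schema is not specified here.

```python
# Minimal sketch of a forms-based notebook entry endpoint using only the
# Python standard library. Field names and storage are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

ENTRIES = []  # in-memory stand-in for the notebook's data store


class NotebookHandler(BaseHTTPRequestHandler):
    """Serves the entry form (GET) and records submitted fields (POST)."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b'<form method="POST">'
                         b'Experiment: <input name="experiment">'
                         b'Notes: <textarea name="notes"></textarea>'
                         b'<input type="submit" value="Save"></form>')

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode())
        ENTRIES.append({k: v[0] for k, v in fields.items()})
        self.send_response(303)                 # redirect back to the form
        self.send_header("Location", "/")
        self.end_headers()


# What do_POST stores for a typical urlencoded submission:
fields = parse_qs("experiment=primer+screening&notes=gel+looks+clean")
ENTRIES.append({k: v[0] for k, v in fields.items()})
print(ENTRIES[0]["experiment"])                 # primer screening

# To actually serve the form:
# HTTPServer(("localhost", 8000), NotebookHandler).serve_forever()
```

Because the interface is just HTML over HTTP, the abstract's portability point holds automatically: any browser on any platform can enter or review data.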
Desktop Computing Integration Project
NASA Technical Reports Server (NTRS)
Tureman, Robert L., Jr.
1992-01-01
The Desktop Computing Integration Project for the Human Resources Management Division (HRMD) of LaRC was designed to help division personnel use personal computing resources to perform job tasks. The three goals of the project were to involve HRMD personnel in desktop computing, link mainframe data to desktop capabilities, and to estimate training needs for the division. The project resulted in increased usage of personal computers by Awards specialists, an increased awareness of LaRC resources to help perform tasks, and personal computer output that was used in presentation of information to center personnel. In addition, the necessary skills for HRMD personal computer users were identified. The Awards Office was chosen for the project because of the consistency of their data requests and the desire of employees in that area to use the personal computer.
Childhood as a Resource and Laboratory for the Self-Project
ERIC Educational Resources Information Center
Buhler-Niederberger, Doris; Konig, Alexandra
2011-01-01
The biographies of individuals in today's societies are characterized by the need to exert effort and make decisions in planning one's life course. A "self-project" has to be worked out both retrospectively and prospectively; childhood becomes important as a resource and a laboratory for the self-project. This empirical study analyses how the…
Teacher's Resource Guide on Acidic Precipitation with Laboratory Activities.
ERIC Educational Resources Information Center
Barrow, Lloyd H.
The purpose of this teacher's resource guide is to help science teachers incorporate the topic of acidic precipitation into their curricula. A survey of recent junior high school science textbooks found a maximum of one paragraph devoted to the subject; in addition, none of these books had any related laboratory activities. It was on the basis of…