Sample records for computing SciDAC program

  1. Preface: SciDAC 2009

    NASA Astrophysics Data System (ADS)

    Simon, Horst

    2009-07-01

    By almost any measure, the SciDAC community has come a long way since DOE launched the SciDAC program back in 2001. At the time, we were grappling with how to efficiently run applications on terascale systems (the November 2001 TOP500 list was led by DOE's ASCI White IBM system at Lawrence Livermore achieving 7.2 teraflop/s). And the results stemming from the first round of SciDAC projects were summed up in two-page reports. The scientific results were presented at annual meetings, which were by invitation only and typically were attended by about 75 researchers. Fast forward to 2009 and we now have SciDAC Review, a quarterly magazine showcasing the scientific computing contributions of SciDAC projects and related programs, all focused on presenting a comprehensive look at Scientific Discovery through Advanced Computing. That is also the motivation behind the annual SciDAC conference that in 2009 was held from June 14-18 in San Diego. The annual conference, which can also be described as a celebration of all things SciDAC, grew out of those meetings organized in the early days of the program. In 2005, the meeting was held in San Francisco and attendance was opened up to all members of the SciDAC community. The schedule was also expanded to include a keynote address, plenary speakers and other features found in a conference format. This year marks the fifth such SciDAC conference, which now comprises four days of computational science presentations, multiple poster sessions and, since last year, an evening event showcasing simulations and modeling runs resulting from SciDAC projects. The fifth annual SciDAC conference was remarkable on several levels. The primary purpose, of course, is to showcase the research accomplishments resulting from SciDAC programs in particular and computational science in general. It is these accomplishments, represented in 38 papers and 52 posters, that comprise this set of conference proceedings. These proceedings can stand alone as evidence of the success of DOE's innovative SciDAC efforts. But from the outset, a critical driver for the program was to foster increased collaboration among researchers across disciplines and organizations. In particular, SciDAC wanted to engage scientists at universities in the projects, both to expand the community and to develop the next generation of computational scientists. At the meeting in San Diego, the fruits of this emphasis were clearly visible, from the special poster session highlighting the work of the DOE Computational Science Graduate Fellows, to the informal discussions in hotel hallways, to focused side meetings apart from the main presentations. A highlight of the meeting was the keynote address by Dr Ray Orbach, until recently the DOE Under Secretary for Science and head of the Office of Science. It was during his tenure that the first round of projects matured and the second set of SciDAC projects was launched. And complementing these research projects was Dr Orbach's vision for INCITE, DOE's Innovative and Novel Computational Impact on Theory and Experiment program, inaugurated in 2003. This program allocated significant HPC resources to scientists tackling high-impact problems, including some of those addressed by SciDAC teams. Together, SciDAC and INCITE are dramatically accelerating the field of computational science. As has been noted before, the SciDAC conference celebrates progress in advancing science through large-scale modeling and simulation.
    Over 400 people registered to attend this year's talks, poster sessions and tutorials, all spanning the disciplines supported by DOE. While the principal focus was on SciDAC accomplishments, this year's conference also included invited presentations and posters from colleagues whose research is supported by other agencies. At the 2009 meeting we also formalized a developing synergy with the Department of Defense's HPC Users Group Meeting, which has occasionally met in parallel with the SciDAC meeting. But in San Diego, we took the additional steps of organizing a joint poster session and a joint plenary session, further advancing opportunities for broader networking. Throughout the four-day program, attendees at both meetings had the option of sitting in on sessions at either conference. We also included several of the NSF Petascale applications in the program, and extended invitations to our computational colleagues in other federal agencies, including the National Science Foundation, NASA, and the National Oceanic and Atmospheric Administration, as well as to international collaborators, to join us in San Diego. In 2009 we also reprised one of the more popular sessions from Seattle in 2008, the Electronic Visualization and Poster Night, during which 29 scientific visualizations were presented on high-resolution large-format displays. The best entries were awarded one of the coveted 'OASCR Awards.' The conference also featured a session about breakthroughs in computational science, based on the 'Breakthrough Report' that was published in 2008, led by Tony Mezzacappa (ORNL). Tony was also the chair of the SciDAC 2005 conference. For the third consecutive year, the conference was followed by a day of tutorials organized by the SciDAC Outreach Center and aimed primarily at students interested in scientific computing. This year, nearly 100 participants attended the tutorials, hosted by the San Diego Supercomputer Center and General Atomics. This outreach to the broader community is really what SciDAC is all about - Scientific Discovery through Advanced Computing. Such discoveries are not confined by organizational lines, but rather are often the result of researchers reaching out and collaborating with others, using their combined expertise to push our boundaries of knowledge. I am happy to see that this vision is shared by so many researchers in computational science, who all decided to join SciDAC 2009. While credit for the excellent presentations and posters goes to the teams of researchers, the success of this year's conference is due to the strong efforts and support from members of the 2009 SciDAC Program Committee and Organizing Committee, and I would like to extend my heartfelt thanks to them for helping to make the 2009 meeting the largest and most successful to date.
Program Committee members were: David Bader, LLNL; Pete Beckman, ANL; John Bell, LBNL; John Boisseau, University of Texas; Paul Bonoli, MIT; Hank Childs, LBNL; Bill Collins, LBNL; Jim Davenport, BNL; David Dean, ORNL; Thom Dunning, NCSA; Peg Folta, LLNL; Glenn Hammond, PNNL; Maciej Haranczyk, LBNL; Robert Harrison, ORNL; Paul Hovland, ANL; Paul Kent, ORNL; Aram Kevorkian, SPAWAR; David Keyes, Columbia University; Kwok Ko, SLAC; Felice Lightstone, LLNL; Bob Lucas, ISI/USC; Paul Mackenzie, Fermilab; Tony Mezzacappa, ORNL; John Negele, MIT; Jeff Nichols, ORNL; Mike Norman, UCSD; Joe Oefelein, SNL; Jeanie Osburn, NRL; Peter Ostroumov, ANL; Valerio Pascucci, University of Utah; Ruth Pordes, Fermilab; Rob Ross, ANL; Nagiza Samatova, ORNL; Martin Savage, University of Washington; Tim Scheibe, PNNL; Ed Seidel, NSF; Arie Shoshani, LBNL; Rick Stevens, ANL; Bob Sugar, UCSB; Bill Tang, PPPL; Bob Wilhelmson, NCSA; Kathy Yelick, NERSC/LBNL; Dave Zachmann, Vista Computational Technology LLC. Organizing Committee members were: Communications: Jon Bashor, LBNL. Contracts/Logistics: Mary Spada and Cheryl Zidel, ANL. Posters: David Bailey, LBNL. Proceedings: John Hules, LBNL. Proceedings Database Developer: Beth Cerny Patino, ANL. Program Committee Liaison/Conference Web Site: Yeen Mankin, LBNL. Tutorials: David Skinner, NERSC/LBNL. Visualization Night: Hank Childs, LBNL; Valerio Pascucci, Chems Touati, Nathan Galli, and Erik Jorgensen, University of Utah. Again, my thanks to all. Horst Simon San Diego, California June 18, 2009

  2. Preface: SciDAC 2007

    NASA Astrophysics Data System (ADS)

    Keyes, David E.

    2007-09-01

    It takes a village to perform a petascale computation—domain scientists, applied mathematicians, computer scientists, computer system vendors, program managers, and support staff—and the village was assembled during 24-28 June 2007 in Boston's Westin Copley Place for the third annual Scientific Discovery through Advanced Computing (SciDAC) 2007 Conference. Over 300 registered participants networked around 76 posters, focused on achievements and challenges in 36 plenary talks, and brainstormed in two panels. In addition, with an eye to spreading the vision for simulation at the petascale and to growing the workforce, 115 participants—mostly doctoral students and post-docs complementary to the conferees—were gathered on 29 June 2007 in classrooms of the Massachusetts Institute of Technology for a full day of tutorials on the use of SciDAC software. Eleven SciDAC-sponsored research groups presented their software at an introductory level, in both lecture and hands-on formats that included live runs on a local BlueGene/L. Computation has always been about garnering insight into the behavior of systems too complex to explore satisfactorily by theoretical means alone. Today, however, computation is about much more: scientists and decision makers expect quantitatively reliable predictions from simulations ranging in scale from that of the Earth's climate, down to quarks, and out to colliding black holes. Predictive simulation lies at the heart of policy choices in energy and environment affecting billions of lives and expenditures of trillions of dollars. It is also at the heart of scientific debates on the nature of matter and the origin of the universe. The petascale is barely adequate for such demands and we are barely established at the levels of resolution and throughput that this new scale of computation affords. However, no scientific agenda worldwide is pushing the petascale frontier on all its fronts as vigorously as SciDAC. The breadth of this conference archive reflects the philosophy of the SciDAC program, which was introduced as a collaboration of all of the program offices in the Office of Science of the U.S. Department of Energy (DOE) in Fall 2001 and was renewed for a second period of five years in Fall 2006, with additional support in certain areas from the DOE's National Nuclear Security Administration (NNSA) and the U.S. National Science Foundation (NSF). All of the projects in the SciDAC portfolio were represented at the conference and most are captured in this volume. In addition, the Organizing Committee incorporated into the technical program a number of computational science highlights from outside of SciDAC, and, indeed, from outside of the United States. As implied by the title, scientific discovery is the driving deliverable of the SciDAC program, spanning the full range of the DOE Office of Science: accelerator design, astrophysics, chemistry and materials science, climate science, combustion, life science, nuclear physics, plasma physics, and subsurface physics. As articulated in the eponymous report that launched SciDAC, the computational challenges of these diverse areas are remarkably common. Each is profoundly multiscale in space and time and therefore continues to benefit at any margin from access to the largest and fastest computers available. Optimality of representation and execution requires adaptive, scalable mathematical algorithms in both continuous (geometrically complex domain) and discrete (mesh and graph) aspects. 
    Programmability and performance optimality require software environments that both manage the intricate details of the underlying hardware and abstract them for scientific users. Running effectively on remote specialized hardware requires transparent workflow systems. Comprehending the petascale data sets generated in such simulations requires automated tools for data exploration and visualization. Archiving and sharing access to this data within the inevitably distributed community of leading scientists requires networked collaborative environments. Each of these elements is a research and development project in its own right. SciDAC does not replace theoretical programs oriented towards long-term basic research, but harvests them for contemporary, complementary state-of-the-art computational campaigns. By clustering researchers from applications and enabling technologies into coordinated, mission-driven projects, SciDAC accomplishes two ends with remarkable effectiveness: (1) it enriches the scientific perspective of both applications and enabling communities through mutual interaction and (2) it leverages, between applications, solutions and effort encapsulated in software. Though SciDAC is unique, its objective of multiscale science at extreme computational scale is shared and approached through different programmatic mechanisms, notably NNSA's ASC program, NSF's Cyberinfrastructure program, and DoD's CREATE program in the U.S., and RIKEN's computational simulation programs in Japan. Representatives of each of these programs were given the podium at SciDAC 2007 and communication occurred that will be valuable towards the ends of complementarity, leverage, and promulgation of best practices. The 2007 conference was graced with additional welcome program announcements. Michael Strayer announced a new program of postdoctoral research fellowships in the enabling technologies. (The computer science post-docs will be named after the late Professor Ken Kennedy, who briefly led the SciDAC project Center for Scalable Application Development Software (CScADS) until his untimely death in February 2007.) IBM announced its petascale BlueGene/P system on June 26. Meanwhile, at ISC07 in Dresden, the semi-annual posting of a revised Top 500 list on June 27 showed several new Top 10 systems accessible to various SciDAC participants. While SciDAC is dominated in 2007 by the classical scientific pursuit of understanding through reduction to components and isolation of causes and effects, simulation at scale is beginning to offer something even more tantalizing: synthesis and integration of multiple interacting phenomena in complex systems. Indeed, the design-oriented elements of SciDAC, such as accelerator and tokamak modeling, are already emphasizing multiphysics coupling, and climate science has been doing so for years in the coupling of models of the ocean, atmosphere, ice, and land. In one of the panels at SciDAC 2007, leaders of a three-stage `progressive workshop' on exascale simulation for energy and environment (E3) considered prospects for whole-system modeling in a variety of scientific areas within the domain of DOE related to energy, environmental, and global security. Computer vendors were invited to comment on the prospects for delivering exascale computing systems in another panel. The daunting nature of this challenge is summarized with the observation that the peak processing power of the entire Top 500 list of June 2007 is only 0.0052 exaflop/s.
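    For scale, the quoted figures relate by simple powers of ten:

      1 exaflop/s = 10^18 flop/s = 10^3 petaflop/s = 10^6 teraflop/s,
      so 0.0052 exaflop/s = 5.2 petaflop/s, or 5200 teraflop/s.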
    It takes the combined power of most of the computers on the internet worldwide today to reach 1 exaflop/s, or 10^18 floating point operations per second. The program of SciDAC 2007 followed a template honed by its predecessor meetings in San Francisco in 2005 and Denver in 2006. The Boston venue permitted outreach to a number of universities in the immediate region and throughout southern New England, including SciDAC campuses of Boston University, Harvard, and MIT, and a dozen others including most of the Ivy League. Altogether 55 universities, 20 laboratories, 14 private companies, 5 agencies, and 4 countries were represented among the conference and tutorial workshop participants. Approximately 47% of the conference participants were from government laboratories, 37% from universities, 9% from federal program offices, and 7% from industry. Keys to the success of SciDAC 2007 were the informal poster receptions, coffee breaks, working breakfasts and lunches, and even the `Right-brain Night' featuring artistic statements, both reverent and irreverent, by computational scientists, inspired by their work. The organizers thank the sponsors for their generosity in attracting participants to these informal occasions with sumptuous snacks and beverages: AMD, Cray, DataDirect, IBM, SGI, SiCortex, and the Institute of Physics. A conference as logistically complex as SciDAC 2007 cannot possibly, and should not, be executed primarily by the scientists themselves. It is a great pleasure to acknowledge the many talented staff who contributed to a productive time for all participants and near-perfect adherence to schedule. Chief among them is Betsy Riley, currently detailed from ORNL to the program office in Germantown, with degrees in mathematics and computer science, but a passion for organizing interdisciplinary scientific programs. Betsy staffed the organizing committee during the year of telecon meetings leading up to the conference and masterminded sponsorship, invitations, and the compilation of the proceedings. Assisting her from ORNL in managing the program were Daniel Pack, Angela Beach, and Angela Fincher. Cynthia Latham of ORNL performed admirably in website and graphic design for all aspects of the online and printed materials of the meeting. John Bui, John Smith, and Missy Smith of ORNL ran their customary tight ship with respect to audio-visual execution and capture, assisted by Eric Ecklund and Keith Quinn of the Westin. Pamelia Nixon-Hartje of Ambassador Services was personally invaluable in getting the most out of the hotel and its staff. We thank Jeff Nichols of ORNL for managing the primary subcontract for the meeting. The SciDAC tutorial program was a joint effort of Professor John Negele of MIT, David Skinner, PI of the SciDAC Outreach Center, and the SciDAC 2007 Chair. Sponsorship from the Outreach Center in the form of travel scholarships for students, and of the local area SciDAC university delegation of BU, Harvard, and MIT for food and facilities is gratefully acknowledged. Of course, the archival success of a scientific meeting rests with the willingness of the presenters to make the extra effort to package their field-leading science in a form suitable for interaction with colleagues from other disciplines rather than fellow specialists. This goal, oft-stated in the run-up to the meeting, was achieved to an admirable degree, both in the live presentations and in these proceedings.
    This effort is its own reward, since it leads to enhanced communication and accelerated scientific progress. Our greatest thanks are reserved for Michael Strayer, Associate Director for OASCR and the Director of SciDAC, for envisioning this celebratory meeting three years ago, and sustaining it with his own enthusiasm, in order to provide a highly visible manifestation of the fruits of SciDAC. He and the other Office of Science program managers, in attendance here and at work in Washington, DC, communicating the opportunities afforded by SciDAC, deserve the gratitude of a new virtual scientific village created and cemented under the vision of scientific discovery through advanced computing. David E Keyes, Fu Foundation Professor of Applied Mathematics

  3. Preface: SciDAC 2006

    NASA Astrophysics Data System (ADS)

    Tang, William M., Dr.

    2006-01-01

    The second annual Scientific Discovery through Advanced Computing (SciDAC) Conference was held from June 25-29, 2006 at the new Hyatt Regency Hotel in Denver, Colorado. This conference showcased outstanding SciDAC-sponsored computational science results achieved during the past year across many scientific domains, with an emphasis on science at scale. Exciting computational science that has been accomplished outside of the SciDAC program both nationally and internationally was also featured to help foster communication between SciDAC computational scientists and those funded by other agencies. This was illustrated by many compelling examples of how domain scientists collaborated productively with applied mathematicians and computer scientists to effectively take advantage of terascale computers (capable of performing trillions of calculations per second) not only to accelerate progress in scientific discovery in a variety of fields but also to show great promise for being able to utilize the exciting petascale capabilities in the near future. The SciDAC program was originally conceived as an interdisciplinary computational science program based on the guiding principle that strong collaborative alliances between domain scientists, applied mathematicians, and computer scientists are vital to accelerated progress and associated discovery on the world's most challenging scientific problems. Associated verification and validation are essential in this successful program, which was funded by the US Department of Energy Office of Science (DOE OS) five years ago. As is made clear in many of the papers in these proceedings, SciDAC has fundamentally changed the way that computational science is now carried out in response to the exciting challenge of making the best use of the rapid emergence of ever more powerful computational platforms. In this regard, Dr. Raymond Orbach, Energy Undersecretary for Science at the DOE and Director of the OS, has stated: `SciDAC has strengthened the role of high-end computing in furthering science. It is defining whole new fields for discovery.' (SciDAC Review, Spring 2006, p8). Application domains within the SciDAC 2006 conference agenda encompassed a broad range of science including: (i) the DOE core mission of energy research involving combustion studies relevant to fuel efficiency and pollution issues faced today and magnetic fusion investigations impacting prospects for future energy sources; (ii) fundamental explorations into the building blocks of matter, ranging from quantum chromodynamics - the basic theory that describes how quarks make up the protons and neutrons of all matter - to the design of modern high-energy accelerators; (iii) the formidable challenges of predicting and controlling the behavior of molecules in quantum chemistry and the complex biomolecules determining the evolution of biological systems; (iv) studies of exploding stars for insights into the nature of the universe; and (v) integrated climate modeling to enable realistic analysis of Earth's changing climate. Associated research has made it quite clear that advanced computation is often the only means by which timely progress is feasible when dealing with these complex, multi-component physical, chemical, and biological systems operating over huge ranges of temporal and spatial scales.
    Working with the domain scientists, applied mathematicians and computer scientists have continued to develop the discretizations of the underlying equations and the complementary algorithms to enable improvements in solutions on modern parallel computing platforms as they evolve from the terascale toward the petascale regime. Moreover, the associated tremendous growth of data generated from the terabyte to the petabyte range demands not only advanced data analysis and visualization methods to harvest the scientific information but also the development of efficient workflow strategies which can deal with the data input/output, management, movement, and storage challenges. If scientific discovery is expected to keep pace with the continuing progression from tera- to petascale platforms, the vital alliance between domain scientists, applied mathematicians, and computer scientists will be even more crucial. During the SciDAC 2006 Conference, some of the future challenges and opportunities in interdisciplinary computational science were emphasized in the Advanced Architectures Panel and by Dr. Victor Reis, Senior Advisor to the Secretary of Energy, who gave a featured presentation on `Simulation, Computation, and the Global Nuclear Energy Partnership.' Overall, the conference provided an excellent opportunity to highlight the rising importance of computational science in the scientific enterprise and to motivate future investment in this area. As Michael Strayer, SciDAC Program Director, has noted: `While SciDAC may have started out as a specific program, Scientific Discovery through Advanced Computing has become a powerful concept for addressing some of the biggest challenges facing our nation and our world.' Looking forward to next year, the SciDAC 2007 Conference will be held from June 24-28 at the Westin Copley Place in Boston, Massachusetts. Chairman: David Keyes, Columbia University. The Organizing Committee for the SciDAC 2006 Conference would like to acknowledge the individuals whose talents and efforts were essential to the success of the meeting. Special thanks go to Betsy Riley for her leadership in building the infrastructure support for the conference, for identifying and then obtaining contributions from our corporate sponsors, for coordinating all media communications, and for her efforts in organizing and preparing the conference proceedings for publication; to Tim Jones for handling the hotel scouting, subcontracts, and exhibits and stage production; to Angela Harris for handling supplies, shipping, and tracking, poster session set-up, and for her efforts in coordinating and scheduling the promotional activities that took place during the conference; to John Bui and John Smith for their superb wireless networking and A/V set-up and support; to Cindy Latham for Web site design, graphic design, and quality control of proceedings submissions; and to Pamelia Nixon-Hartje of Ambassador for budget and quality control of catering. We are grateful for the highly professional dedicated efforts of all of these individuals, who were the cornerstones of the SciDAC 2006 Conference. Thanks also go to Angela Beach of the ORNL Conference Center for her efforts in executing the contracts with the hotel, Carolyn James of Colorado State for on-site registration supervision, Lora Wolfe and Brittany Hagen for administrative support at ORNL, and Dami Rich and Andrew Sproles for graphic design and production.
We are also most grateful to the Oak Ridge National Laboratory, especially Jeff Nichols, and to our corporate sponsors, Data Direct Networks, Cray, IBM, SGI, and Institute of Physics Publishing for their support. We especially express our gratitude to the featured speakers, invited oral speakers, invited poster presenters, session chairs, and advanced architecture panelists and chair for their excellent contributions on behalf of SciDAC 2006. We would like to express our deep appreciation to Lali Chatterjee, Graham Douglas, Margaret Smith, and the production team of Institute of Physics Publishing, who worked tirelessly to publish the final conference proceedings in a timely manner. Finally, heartfelt thanks are extended to Michael Strayer, Associate Director for OASCR and SciDAC Director, and to the DOE program managers associated with SciDAC for their continuing enthusiasm and strong support for the annual SciDAC Conferences as a special venue to showcase the exciting scientific discovery achievements enabled by the interdisciplinary collaborations championed by the SciDAC program.

  4. National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report Indiana University Component

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gottlieb, Steven Arthur; DeTar, Carleton; Toussaint, Doug

    This is the closeout report for the Indiana University portion of the National Computational Infrastructure for Lattice Gauge Theory project supported by the United States Department of Energy under the SciDAC program. It includes information about activities at Indiana University, the University of Arizona, and the University of Utah, as those three universities coordinated their activities.

  5. Final Report: Institute for Ultrascale Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu; Galli, Giulia; Gygi, Francois

    The SciDAC Institute for Ultrascale Visualization brought together leading experts from visualization, high-performance computing, and science application areas to develop advanced visualization solutions for SciDAC scientists and the broader community. Over the five-year project, the Institute introduced many new enabling visualization techniques, which have significantly enhanced scientists’ ability to validate their simulations, interpret their data, and communicate with others about their work and findings. This Institute project involved a large number of junior and student researchers, who received opportunities to work on some of the most challenging science applications and gained access to the most powerful high-performance computing facilities in the world. They were trained and prepared to face the greater challenges presented by extreme-scale computing. The Institute’s outreach efforts, through publications, workshops, and tutorials, successfully disseminated the new knowledge and technologies to the SciDAC and broader scientific communities. The scientific findings and experience of the Institute team helped plan the SciDAC-3 program.

  6. Preface: SciDAC 2008

    NASA Astrophysics Data System (ADS)

    Stevens, Rick

    2008-07-01

    The fourth annual Scientific Discovery through Advanced Computing (SciDAC) Conference was held July 13-18, 2008, in Seattle, Washington. The SciDAC conference series is the premier community-wide venue for presentation of results from the DOE Office of Science's interdisciplinary computational science program. Started in 2001 and renewed in 2006, the DOE SciDAC program is the country's - and arguably the world's - most significant interdisciplinary research program supporting the development of advanced scientific computing methods and their application to fundamental and applied areas of science. SciDAC supports computational science across many disciplines, including astrophysics, biology, chemistry, fusion sciences, and nuclear physics. Moreover, the program actively encourages the creation of long-term partnerships among scientists focused on challenging problems and computer scientists and applied mathematicians developing the technology and tools needed to address those problems. The SciDAC program has played an increasingly important role in scientific research by allowing scientists to create more accurate models of complex processes, simulate problems once thought to be impossible, and analyze the growing amount of data generated by experiments. To help further the research community's ability to tap into the capabilities of current and future supercomputers, Under Secretary for Science Raymond Orbach launched the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program in 2003. The INCITE program was conceived specifically to seek out computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. The program encourages proposals from universities, other research institutions, and industry. During the first two years of the INCITE program, 10 percent of the resources at NERSC were allocated to INCITE awardees. However, demand for supercomputing resources far exceeded available systems; and in 2003, the Office of Science identified increasing computing capability by a factor of 100 as the second priority on its Facilities of the Future list. The goal was to establish leadership-class computing resources to support open science. As a result of a peer-reviewed competition, the first leadership computing facility was established at Oak Ridge National Laboratory in 2004. A second leadership computing facility was established at Argonne National Laboratory in 2006. This expansion of computational resources led to a corresponding expansion of the INCITE program. In 2008, Argonne, Lawrence Berkeley, Oak Ridge, and Pacific Northwest national laboratories all provided resources for INCITE. By awarding large blocks of computer time on the DOE leadership computing facilities, the INCITE program enables the largest-scale computations to be pursued. In 2009, INCITE will award over half a billion node-hours of time. The SciDAC conference celebrates progress in advancing science through large-scale modeling and simulation. Over 350 participants attended this year's talks, poster sessions, and tutorials, spanning the disciplines supported by DOE. While the principal focus was on SciDAC accomplishments, this year's conference also included invited presentations and posters from DOE INCITE awardees.
    Another new feature in the SciDAC conference series was an electronic theater and video poster session, which provided an opportunity for the community to see over 50 scientific visualizations in a venue equipped with many high-resolution large-format displays. To highlight the growing international interest in petascale computing, this year's SciDAC conference included a keynote presentation by Hermann Lederer from the Max Planck Institute, one of the leaders of the DEISA (Distributed European Infrastructure for Supercomputing Applications) project and a member of the PRACE consortium, Europe's main petascale project. We also heard excellent talks from several European groups, including Laurent Gicquel of CERFACS, who spoke on `Large-Eddy Simulations of Turbulent Reacting Flows of Real Burners: Status and Challenges', and Jean-Francois Hamelin from EDF, who presented a talk on `Getting Ready for Petaflop Capacities and Beyond: A Utility Perspective'. Two other compelling addresses gave attendees a glimpse into the future. Tomas Diaz de la Rubia of Lawrence Livermore National Laboratory spoke on a vision for a fusion/fission hybrid reactor known as the `LIFE Engine' and discussed some of the materials and modeling challenges that need to be overcome to realize the vision for a 1000-year greenhouse-gas-free power source. Dan Reed from Microsoft gave a capstone talk on the convergence of technology, architecture, and infrastructure for cloud computing, data-intensive computing, and exascale computing (10^18 flops/sec). High-performance computing is making rapid strides. The SciDAC community's computational resources are expanding dramatically. In the summer of 2008 the first general-purpose petascale system (the IBM Cell-based Roadrunner at Los Alamos National Laboratory) was recognized in the Top 500 list of fastest machines, heralding the dawn of the petascale era. The DOE's leadership computing facility at Argonne reached number three on the Top 500 and is at the moment the most capable open science machine, based on an IBM BG/P system with a peak performance of over 550 teraflops/sec. Later this year Oak Ridge is expected to deploy a 1 petaflops/sec Cray XT system. And even before the scientific community has had an opportunity to make significant use of petascale systems, the computer science research community is forging ahead with ideas and strategies for development of systems that may by the end of the next decade sustain exascale performance. Several talks addressed barriers to, and strategies for, achieving exascale capabilities. The last day of the conference was devoted to tutorials hosted by Microsoft Research at a new conference facility in Redmond, Washington. Over 90 people attended the tutorials, which covered topics ranging from an introduction to BG/P programming to advanced numerical libraries. The SciDAC and INCITE programs and the DOE Office of Advanced Scientific Computing Research core program investments in applied mathematics, computer science, and computational and networking facilities provide a nearly optimum framework for advancing computational science for DOE's Office of Science. At a broader level this framework also is benefiting the entire American scientific enterprise. As we look forward, it is clear that computational approaches will play an increasingly significant role in addressing challenging problems in basic science, energy, and environmental research.
    It takes many people to organize and support the SciDAC conference, and I would like to thank as many of them as possible. The backbone of the conference is the technical program; and the task of selecting, vetting, and recruiting speakers is the job of the organizing committee. I thank the members of this committee for all the hard work and the many tens of conference calls that enabled a wonderful program to be assembled. This year the following people served on the organizing committee: Jim Ahrens, LANL; David Bader, LLNL; Bryan Barnett, Microsoft; Peter Beckman, ANL; Vincent Chan, GA; Jackie Chen, SNL; Lori Diachin, LLNL; Dan Fay, Microsoft; Ian Foster, ANL; Mark Gordon, Ames; Mohammad Khaleel, PNNL; David Keyes, Columbia University; Bob Lucas, University of Southern California; Tony Mezzacappa, ORNL; Jeff Nichols, ORNL; David Nowak, ANL; Michael Papka, ANL; Thomas Schulthess, ORNL; Horst Simon, LBNL; David Skinner, LBNL; Panagiotis Spentzouris, Fermilab; Bob Sugar, UCSB; and Kathy Yelick, LBNL. I owe a special thanks to Mike Papka and Jim Ahrens for handling the electronic theater. I also thank all those who submitted videos. It was a highly successful experiment. Behind the scenes an enormous amount of work is required to make a large conference go smoothly. First I thank Cheryl Zidel for her tireless efforts as organizing committee liaison and posters chair and, in general, handling all of my end of the program and keeping me calm. I also thank Gail Pieper for her work in editing the proceedings, Beth Cerny Patino for her work on the Organizing Committee website and electronic theater, and Ken Raffenetti for his work in keeping that website working. Jon Bashor and John Hules did an excellent job in handling conference communications. I thank Caitlin Youngquist for the striking graphic design; Dan Fay for tutorials arrangements; and Lynn Dory, Suzanne Stevenson, Sarah Pebelske and Sarah Zidel for on-site registration and conference support. We all owe Yeen Mankin an extra-special thanks for choosing the hotel, handling contracts, arranging menus, securing venues, and reassuring the chair that everything was under control. We are pleased to have obtained corporate sponsorship from Cray, IBM, Intel, HP, and SiCortex. I thank all the speakers and panel presenters. I also thank the former conference chairs Tony Mezzacappa, Bill Tang, and David Keyes, who were never far away for advice and encouragement. Finally, I offer my thanks to Michael Strayer, without whose leadership, vision, and persistence the SciDAC program would not have come into being and flourished. I am honored to be part of his program and to be his friend. Rick Stevens Seattle, Washington July 18, 2008

  7. Preface: SciDAC 2005

    NASA Astrophysics Data System (ADS)

    Mezzacappa, Anthony

    2005-01-01

    On 26-30 June 2005 at the Grand Hyatt on Union Square in San Francisco several hundred computational scientists from around the world came together for what can certainly be described as a celebration of computational science. Scientists from the SciDAC Program and scientists from other agencies and nations were joined by applied mathematicians and computer scientists to highlight the many successes in the past year where computation has led to scientific discovery in a variety of fields: lattice quantum chromodynamics, accelerator modeling, chemistry, biology, materials science, Earth and climate science, astrophysics, and combustion and fusion energy science. Also highlighted were the advances in numerical methods and computer science, and the multidisciplinary collaboration cutting across science, mathematics, and computer science that enabled these discoveries. The SciDAC Program was conceived and funded by the US Department of Energy Office of Science. It is the Office of Science's premier computational science program founded on what is arguably the perfect formula: the priority and focus is science and scientific discovery, with the understanding that the full arsenal of `enabling technologies' in applied mathematics and computer science must be brought to bear if we are to have any hope of attacking and ultimately solving today's computational Grand Challenge problems. The SciDAC Program has been in existence for four years, and many of the computational scientists funded by this program will tell you that the program has given them the hope of addressing their scientific problems in full realism for the very first time. Many of these scientists will also tell you that SciDAC has also fundamentally changed the way they do computational science. We begin this volume with one of DOE's great traditions, and core missions: energy research. As we will see, computation has been seminal to the critical advances that have been made in this arena. Of course, to understand our world, whether it is to understand its very nature or to understand it so as to control it for practical application, will require explorations on all of its scales. Computational science has been no less an important tool in this arena than it has been in the arena of energy research. From explorations of quantum chromodynamics, the fundamental theory that describes how quarks make up the protons and neutrons of which we are composed, to explorations of the complex biomolecules that are the building blocks of life, to explorations of some of the most violent phenomena in our universe and of the Universe itself, computation has provided not only significant insight, but often the only means by which we have been able to explore these complex, multicomponent systems and by which we have been able to achieve scientific discovery and understanding. While our ultimate target remains scientific discovery, it certainly can be said that at a fundamental level the world is mathematical. Equations ultimately govern the evolution of the systems of interest to us, be they physical, chemical, or biological systems. The development and choice of discretizations of these underlying equations is often a critical deciding factor in whether or not one is able to model such systems stably, faithfully, and practically, and in turn, the algorithms to solve the resultant discrete equations are the complementary, critical ingredient in the recipe to model the natural world. 
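    To make the stakes of that choice concrete with a standard textbook example (an illustration, not taken from the SciDAC proceedings): for the one-dimensional heat equation u_t = alpha u_xx, the explicit centered-difference scheme

      u_j^{n+1} = u_j^n + (alpha Delta t / Delta x^2) (u_{j+1}^n - 2 u_j^n + u_{j-1}^n)

    is stable only when alpha Delta t / Delta x^2 <= 1/2, so halving the grid spacing forces a fourfold reduction in the time step. Constraints of this kind are one reason the choice of discretization and algorithm matters as much as raw machine speed.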
    The use of parallel computing platforms, especially at the TeraScale, and the trend toward even larger numbers of processors, continue to present significant challenges in the development and implementation of these algorithms. Computational scientists often speak of their `workflows'. A workflow, as the name suggests, is the sum total of all complex and interlocking tasks, from simulation set-up, execution, and I/O, to visualization and scientific discovery, through which the advancement in our understanding of the natural world is realized. For the computational scientist, enabling such workflows presents myriad significant challenges, and it is computer scientists who are called upon at such times to address these challenges. Simulations are currently generating data at the staggering rate of tens of TeraBytes per simulation, over the course of days. In the next few years, these data generation rates are expected to climb exponentially to hundreds of TeraBytes per simulation, performed over the course of months. The output, management, movement, analysis, and visualization of these data will be our key to unlocking the scientific discoveries buried within the data. And there is no hope of generating such data to begin with, or of scientific discovery, without stable computing platforms and a sufficiently high and sustained performance of scientific applications codes on them. Thus, scientific discovery in the realm of computational science at the TeraScale and beyond will occur at the intersection of science, applied mathematics, and computer science. The SciDAC Program was constructed to mirror this reality, and the pages that follow are a testament to the efficacy of such an approach. We would like to acknowledge the individuals on whose talents and efforts the success of SciDAC 2005 was based. Special thanks go to Betsy Riley for her work on the SciDAC 2005 Web site and meeting agenda, for lining up our corporate sponsors, for coordinating all media communications, and for her efforts in processing the proceedings contributions, to Sherry Hempfling for coordinating the overall SciDAC 2005 meeting planning, for handling a significant share of its associated communications, and for coordinating with the ORNL Conference Center and Grand Hyatt, to Angela Harris for producing many of the documents and records on which our meeting planning was based and for her efforts in coordinating with ORNL Graphics Services, to Angie Beach of the ORNL Conference Center for her efforts in procurement and setting up and executing the contracts with the hotel, and to John Bui and John Smith for their superb wireless networking and A/V set-up and support. We are grateful for the relentless efforts of all of these individuals, their remarkable talents, and for the joy of working with them during this past year. They were the cornerstones of SciDAC 2005. Thanks also go to Kymba A'Hearn and Patty Boyd for on-site registration, Brittany Hagen for administrative support, Bruce Johnston for netcast support, Tim Jones for help with the proceedings and Web site, Sherry Lamb for housing and registration, Cindy Latham for Web site design, Carolyn Peters for on-site registration, and Dami Rich for graphic design. And we would like to express our appreciation to the Oak Ridge National Laboratory, especially Jeff Nichols, the Argonne National Laboratory, the Lawrence Berkeley National Laboratory, and to our corporate sponsors, Cray, IBM, Intel, and SGI, for their support.
We would like to extend special thanks also to our plenary speakers, technical speakers, poster presenters, and panelists for all of their efforts on behalf of SciDAC 2005 and for their remarkable achievements and contributions. We would like to express our deep appreciation to Lali Chatterjee, Graham Douglas and Margaret Smith of Institute of Physics Publishing, who worked tirelessly in order to provide us with this finished volume within two months, which is nothing short of miraculous. Finally, we wish to express our heartfelt thanks to Michael Strayer, SciDAC Director, whose vision it was to focus SciDAC 2005 on scientific discovery, around which all of the excitement we experienced revolved, and to our DOE SciDAC program managers, especially Fred Johnson, for their support, input, and help throughout.

  8. Opening Remarks: SciDAC 2007

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2007-09-01

    Good morning. Welcome to Boston, the home of the Red Sox, Celtics and Bruins, baked beans, tea parties, Robert Parker, and SciDAC 2007. A year ago I stood before you to share the legacy of the first SciDAC program and identify the challenges that we must address on the road to petascale computing—a road E E Cummings described as `...never travelled, gladly beyond any experience.' Today, I want to explore the preparations for the rapidly approaching extreme scale (X-scale) generation. These preparations are the first step propelling us along the road of burgeoning scientific discovery enabled by the application of X-scale computing. We look to petascale computing and beyond to open up a world of discovery that cuts across scientific fields and leads us to a greater understanding of not only our world, but our universe. As part of the President's American Competitiveness Initiative, the ASCR Office has been preparing a ten-year vision for computing. As part of this planning, LBNL together with ORNL and ANL hosted three town hall meetings on Simulation and Modeling at the Exascale for Energy, Ecological Sustainability and Global Security (E3). The proposed E3 initiative is organized around four programmatic themes: engaging our top scientists, engineers, computer scientists and applied mathematicians; investing in pioneering large-scale science; developing scalable analysis algorithms and storage architectures to accelerate discovery; and accelerating the build-out and future development of the DOE open computing facilities. It is clear that we have only just started down the path to extreme scale computing. Plan to attend Thursday's session on the out-briefing and discussion of these meetings. The road to the petascale has been at best rocky. In FY07, the continuing resolution provided 12% less money for Advanced Scientific Computing than either the President, the Senate, or the House had proposed. As a consequence, many of you had to absorb a no-cost extension for your SciDAC work. I am pleased that the President's FY08 budget restores the funding for SciDAC. Quoting from the Advanced Scientific Computing Research description in the House Energy and Water Development Appropriations Bill for FY08, "Perhaps no other area of research at the Department is so critical to sustaining U.S. leadership in science and technology, revolutionizing the way science is done and improving research productivity." As a society we need to revolutionize our approaches to energy, environmental and global security challenges. As we go forward along the road to the X-scale generation, the use of computation will continue to be a critical tool along with theory and experiment in understanding the behavior of the fundamental components of nature as well as for fundamental discovery and exploration of the behavior of complex systems. The foundation to overcome these societal challenges will build from the experiences and knowledge gained as you, members of our SciDAC research teams, work together to attack problems at the tera- and peta-scale. If SciDAC is viewed as an experiment for revolutionizing scientific methodology, then a strategic goal of the ASCR program must be to broaden the intellectual base prepared to address the challenges of the new X-scale generation of computing. We must focus our computational science experiences gained over the past five years on the opportunities introduced with extreme scale computing. Our facilities are on a path to provide the resources needed to undertake the first part of our journey.
    Using the newly upgraded 119 teraflop Cray XT system at the Leadership Computing Facility, SciDAC research teams have in three days performed a 100-year study of the time evolution of the atmospheric CO2 concentration originating from the land surface. The simulation of the El Nino/Southern Oscillation which was part of this study has been characterized as `the most impressive new result in ten years'. Researchers also gained new insight into the behavior of superheated ionic gas in the ITER reactor as a result of an AORSA run on 22,500 processors that achieved over 87 trillion calculations per second (87 teraflops), which is 74% of the system's theoretical peak. Tomorrow, Argonne and IBM will announce that the first IBM Blue Gene/P, a 100 teraflop system, will be shipped to the Argonne Leadership Computing Facility later this fiscal year. By the end of FY2007 ASCR high performance and leadership computing resources will include the 114 teraflop IBM Blue Gene/P; a 102 teraflop Cray XT4 at NERSC; and a 119 teraflop Cray XT system at Oak Ridge. Before ringing in the New Year, Oak Ridge will upgrade to 250 teraflops with the replacement of the dual-core processors with quad-core processors, Argonne will upgrade to between 250 and 500 teraflops, and next year a petascale Cray Baker system is scheduled for delivery at Oak Ridge. The multidisciplinary teams in our SciDAC Centers for Enabling Technologies and our SciDAC Institutes must continue to work with our Scientific Application teams to overcome the barriers that prevent effective use of these new systems. These challenges include: the need for new algorithms as well as operating system and runtime software and tools which scale to parallel systems composed of hundreds of thousands of processors; program development environments and tools which scale effectively and provide ease of use for developers and scientific end users; and visualization and data management systems that support moving, storing, analyzing, manipulating and visualizing multi-petabytes of scientific data and objects. The SciDAC Centers, located primarily at our DOE national laboratories, will take the lead in ensuring that critical computer science and applied mathematics issues are addressed in a timely and comprehensive fashion and to address issues associated with the research software lifecycle. In contrast, the SciDAC Institutes, which are university-led centers of excellence, will have more flexibility to pursue new research topics through a range of research collaborations. The Institutes will also work to broaden the intellectual and researcher base—conducting short courses and summer schools to take advantage of new high performance computing capabilities. The SciDAC Outreach Center at Lawrence Berkeley National Laboratory complements the outreach efforts of the SciDAC Institutes. The Outreach Center is our clearinghouse for SciDAC activities and resources and will communicate with the high performance computing community in part to understand their needs for workshops, summer schools and institutes. SciDAC is not ASCR's only effort to broaden the computational science community needed to meet the challenges of the new X-scale generation. I hope that you were able to attend the Computational Science Graduate Fellowship poster session last night. ASCR developed the fellowship in 1991 to meet the nation's growing need for scientists and technology professionals with advanced computer skills. CSGF, now jointly funded between ASCR and NNSA, is more than a traditional academic fellowship.
    It has provided more than 200 of the best and brightest graduate students with guidance, support and community in preparing them as computational scientists. Today CSGF alumni are bringing their diverse top-level skills and knowledge to research teams at DOE laboratories and in industries such as Procter & Gamble, Lockheed Martin and Intel. At universities they are working to train the next generation of computational scientists. To build on this success, we intend to develop a wholly new Early Career Principal Investigator (ECPI) program. Our objective is to stimulate academic research in scientific areas within ASCR's purview, especially among faculty in the early stages of their academic careers. Last February, we lost Ken Kennedy, one of the leading lights of our community. As we move forward into the extreme computing generation, his vision and insight will be greatly missed. In memory of Ken Kennedy, we shall designate the ECPI grants to beginning faculty in Computer Science as the Ken Kennedy Fellowship. Watch the ASCR website for more information about ECPI and other early career programs in the computational sciences. We look to you, our scientists, researchers, and visionaries to take X-scale computing and use it to explode scientific discovery in your fields. We at SciDAC will work to ensure that this tool is the sharpest and most precise and efficient instrument to carve away the unknown and reveal the most exciting secrets and stimulating scientific discoveries of our time. The partnership between research and computing is the marriage that will spur greater discovery, and as Spenser said to Susan in Robert Parker's novel, `Sudden Mischief', `We stick together long enough, and we may get as smart as hell'. Michael Strayer

  9. Opening Comments: SciDAC 2008

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2008-07-01

    Welcome to Seattle and the 2008 SciDAC Conference. This conference, the fourth in the series, is a continuation of the PI meetings we first began under SciDAC-1. I would like to start by thanking the organizing committee, and Rick Stevens in particular, for organizing this year's meeting. This morning I would like to look briefly at SciDAC, to give you a brief history of SciDAC and also look ahead to see where we plan to go over the next few years. I think the best description of SciDAC, at least the simulation part, comes from a quote from Dr Ray Orbach, DOE's Under Secretary for Science and Director of the Office of Science. In an interview that appeared in the SciDAC Review magazine, Dr Orbach said, `SciDAC is unique in the world. There isn't any other program like it anywhere else, and it has the remarkable ability to do science by bringing together physical scientists, mathematicians, applied mathematicians, and computer scientists who recognize that computation is not something you do at the end, but rather it needs to be built into the solution of the very problem that one is addressing'. Of course, that is extended not just to physical scientists, but also to biological scientists. This is a theme of computational science, this partnership among disciplines, which goes all the way back to the early 1980s and Ken Wilson. It's a unique thread within the Department of Energy. SciDAC-1, launched around the turn of the millennium, created a new generation of scientific simulation codes. It advocated building out mathematical and computing system software in support of science and a new collaboratory software environment for data. The original concept for SciDAC-1 had topical centers for the execution of the various science codes, but several corrections and adjustments were needed. The ASCR scientific computing infrastructure was also upgraded, providing the hardware facilities for the program. The computing facility that we had at that time was the big 3 teraflop/s center at NERSC, and that had to be shared with the programmatic side supporting research across DOE. At the time, ESnet provided just slightly over half a gigabit per second of bandwidth; and the science being addressed was accelerator science, climate, chemistry, fusion, astrophysics, materials science, and QCD. We built out the national collaboratories from the ASCR office, and in addition we built Integrated Software Infrastructure Centers (ISICs). Of these, three were in applied mathematics, four in computer science (including a performance evaluation research center), and four were collaboratories or Grid projects having to do with data management. For science, there were remarkable breakthroughs in simulation, such as full 3D laboratory-scale flame simulation. There were also significant improvements in application codes - speedups by factors of almost 3 to more than 100 - as people began to realize they had to integrate mathematics tools and computer science tools into their codes to take advantage of the parallelism of the day. The SciDAC data-mining tool, Sapphire, received a 2006 R&D 100 award. And the community as a whole worked well together and began building a publication record that was substantial. In 2006, we recompeted the program with similar goals - SciDAC-1 was very successful, and we wanted to continue that success and extend what was happening under SciDAC to the broader science community. We opened up the partnership to all of the Offices of Science and the NSF and the NNSA.
The goal was to create comprehensive scientific computing software, and the infrastructure for that software, to enable scientific discovery in the physical, biological, and environmental sciences and to take simulations to an extreme scale, in this case the petascale. We would also build out a new generation of data management tools. What we observed during SciDAC-1 was that data and the data communities - experimental data from large experimental facilities, observational data, and simulation data - were expanding at a rate significantly faster than Moore's law. In the past few weeks, the FastBit indexing technology for data analysis and data mining, developed under SciDAC's Scientific Data Management project, was recognized with an R&D 100 Award, selected by an independent judging panel and the editors of R&D Magazine as one of the 100 most technologically significant products introduced into the marketplace over the past year. For SciDAC-2 we had nearly 250 proposals requesting a total of slightly over 1 billion in funding. Of course, we had nowhere near 1 billion. The facilities and the science we ended up with were not significantly different from what we had in SciDAC-1, but we had put in place substantially increased facilities for science. When SciDAC-1 was executed with the facilities at NERSC, there was a significant impact on resources: not only did we have an expanding portfolio of programmatic science, but the SciDAC projects also needed to run at NERSC. Suddenly, NERSC was incredibly oversubscribed. With SciDAC-2, we had in place leadership-class computing facilities at Argonne with slightly more than half a petaflop and at Oak Ridge with slightly more than a quarter petaflop, with an upgrade planned at the end of this year for a petaflop. And we increased the production computing capacity at NERSC to 104 teraflop/s, both so that we would not impact the programmatic research and so that we would have a startup facility for SciDAC. At the end of the summer, NERSC will be at 360 teraflop/s. Both the Oak Ridge system and the principal resource at NERSC are Cray systems; Argonne has a different architecture, an IBM Blue Gene/P. At the same time, ESnet has been built out, and we are on a path toward dual rings around the country at 10 to 40 gigabits per second - a factor of 20 to 80 over what was available during SciDAC-1. The science areas include accelerator science and simulation, astrophysics, climate modeling and simulation, computational biology, fusion science, high-energy physics, petabyte high-energy/nuclear physics, materials science and chemistry, nuclear physics, QCD, radiation transport, turbulence, and groundwater reactive transport modeling and simulation. They are supported by new enabling technology centers and university-based institutes that develop an educational thread for the SciDAC program. There were four mathematics projects and four computer science projects; and under data management, we see a significant difference in that we are bringing up new visualization projects to support and sustain data-intensive science. When we look at the budgets, we see growth from just under 60 million for SciDAC-1 to just over 80 million for SciDAC-2. Part of the growth is due to bringing in NSF and NNSA as new partners, and some is due to some program offices increasing their investment in SciDAC, while other program offices have held constant or decreased their investment. 
This is not a reflection of their priorities per se but, rather, a reflection of the budget process and the difficult times in Washington during the past two years. New activities are under way in SciDAC - the annual PI meeting has turned into what I would describe as the premier interdisciplinary computational science meeting, one of the best in the world. Interdisciplinary meetings are difficult to do because people tend to focus on their particular subject area. But this is the fourth in the series; and since the first meeting in San Francisco, these conferences have been remarkably successful. For SciDAC-2 we also created an outreach magazine, SciDAC Review, which highlights scientific discovery as well as high-performance computing. It has been very successful in telling non-practitioners what SciDAC and computational science are all about. The other new instrument in SciDAC-2 is an outreach center. As we go from computing at the terascale to computing at the petascale, we face the problem of a narrowing research community: the number of people who are 'literate' enough to compute at the terascale far exceeds the number who can compute at the petascale. To address this problem, we established the SciDAC Outreach Center to bring people into the fold and educate them as to how we do SciDAC, how the teams are composed, and what it really means to compute at scale. The resources I have mentioned don't come for free. As part of the High-End Computing Revitalization Act, Congress mandated that the Secretary ensure that leadership-class facilities would be open to everyone across all agencies. So we took Congress at its word, and INCITE is our instrument for making allocations at the leadership-class facilities at Argonne and Oak Ridge, as well as smaller allocations at NERSC. The selected proposals are therefore very large projects that are computationally intensive, that compute at scale, and that have a high science impact. An important feature is that INCITE is completely open to anyone - there is no requirement of DOE Office of Science funding, and proposals are rigorously reviewed for both the science and the computational readiness. In 2008, more than 100 proposals were received, requesting about 600 million processor-hours. We allocated just over a quarter of a billion processor-hours. Astrophysics, materials science, lattice gauge theory, and high energy and nuclear physics were the major areas. These were the teams that were computationally ready for the big machines and that had significant science they could identify. In 2009, there will be a significant increase in the amount of time to be allocated - over half a billion processor-hours. The deadline is August 11 for new proposals and September 12 for renewals. We anticipate a significant increase in the number of requests this year. We expect you - as successful SciDAC centers, institutes, or partnerships - to compete for and win INCITE program allocation awards. If you have a successful SciDAC proposal, we believe it will make you successful in the INCITE review. We expect that you will be among those most prepared and most ready to use the machines and to compute at scale. Over the past 18 months, we have assembled a team to look across our computational science portfolio and judge what are the 10 most significant science accomplishments. The ASCR office, as it goes forward with OMB, the new administration, and Congress, will be judged by the science we have accomplished. 
All of our proposals - such as for increasing SciDAC, increasing applied mathematics, and so on - are tied to what we have accomplished in science. And so these 10 big accomplishments are key to establishing credibility for new budget requests. Tony Mezzacappa, who chaired the committee, will also give a presentation on the ranking of these top 10, how they got there, and what the science is all about. Here is the list - numbers 2, 5, 6, 7, 9, and 10 are all SciDAC projects.
    1. Modeling the Molecular Basis of Parkinson's Disease (Tsigelny)
    2. Discovery of the Standing Accretion Shock Instability and Pulsar Birth Mechanism in a Core-Collapse Supernova Evolution and Explosion (Blondin)
    3. Prediction and Design of Macromolecular Structures and Functions (Baker)
    4. Understanding How a Lifted Flame Is Stabilized in a Hot Coflow (Yoo)
    5. New Insights from LCF-enabled Advanced Kinetic Simulations of Global Turbulence in Fusion Systems (Tang)
    6. High Transition Temperature Superconductivity: A High-Temperature Superconductive State and a Pairing Mechanism in the 2-D Hubbard Model (Scalapino)
    7. PETSc: Providing the Solvers for DOE High-Performance Simulations (Smith)
    8. Via Lactea II, a Billion-Particle Simulation of the Dark Matter Halo of the Milky Way (Madau)
    9. Probing the Properties of Water through Advanced Computing (Galli)
    10. First Provably Scalable Maxwell Solver Enables Scalable Electromagnetic Simulations (Kolev)
So, what's the future going to look like for us? The office is putting together an initiative with the community, which we call the E3 Initiative. We're looking at a 10-year horizon for what's going to happen. Through the series of town hall meetings, which many of you participated in, we have produced a document on 'Transforming Energy, the Environment and Science through Simulations at the eXtreme Scale'; it can be found at http://www.science.doe.gov/ascr/ProgramDocuments/TownHall.pdf . We sometimes call it the Exascale Initiative. Exascale computing is the gold ring of computing that seems just out of reach; but if we work hard and stretch, we just might be able to grab it. We envision that there will be a SciDAC-X, working at the extreme scale, with SciDAC teams that will carry out science in areas of great societal impact, such as alternative fuels and transportation, combustion, climate, fusion science, high-energy physics, advanced fuel cycles, carbon management, and groundwater. We envision institutes for applied mathematics and computer science that will probably segue into algorithm institutes because, at the extreme scale, we see the applied mathematics, the algorithm per se, and its implementation in computer science as inseparable. We envision an INCITE-X with multi-petaflop platforms, perhaps even exaflop computing resources. ESnet will be best in class - our 10-year plan calls for 400 terabits per second of capacity in dual rings around the country, an enormously fast data communications network for moving large amounts of data. In looking at where we've been and where we are going, we can see that the gigaflops and teraflops era was a regime in which we followed Moore's law through advances in clock speed. In the current regime, we're introducing massive parallelism, exemplified by Intel's announcement of their teraflop chip, where they envision more than a thousand cores on a chip. 
But to reach exascale, extrapolations from current architectures point to machines requiring 100 megawatts of power. It is clearly going to require novel architectures, things we have perhaps not yet envisioned. It is of course an era of challenge. There will be an unpredictable evolution of hardware if we are to reach the exascale; and there will clearly be multilevel heterogeneous parallelism, including multilevel memory hierarchies. We have no idea right now what programming models will be needed to execute at such an extreme scale. We have been incredibly successful at the petascale - we know that already. Managing data and just getting communications to scale is an enormous challenge. And it's not just the extreme scaling; it's the rapid increase in complexity that represents the challenge. Let me end with a metaphor. In previous meetings we have talked about the road to petascale. Indeed, we have seen in hindsight that it was a road well traveled. But perhaps the road to exascale is not a road at all. Perhaps the metaphor will be akin to scaling the south face of K2. That's clearly not something all of us will be able to do, and probably computing at the exascale is not something all of us will do. But if we achieve that goal, perhaps the words of Emily Dickinson will best summarize where we will be. Perhaps in her words, looking backward and down, you will say:
    I climb the 'Hill of Science'
    I view the landscape o'er;
    Such transcendental prospect
    I ne'er beheld before!

  10. OPENING REMARKS: SciDAC: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2005-01-01

    Good morning. Welcome to SciDAC 2005 and San Francisco. SciDAC is all about computational science and scientific discovery. In a large sense, computational science characterizes SciDAC, and its intent is change. It transforms both our approach to and our understanding of science. It opens new doors and crosses traditional boundaries in seeking discovery. In terms of twentieth-century methodologies, computational science may be said to be transformational. A number of examples illustrate this point. First are the sciences that encompass climate modeling. The application of computational science has in essence created the field of climate modeling. This community is now international in scope and has provided precision results that are challenging our understanding of our environment. A second example is that of lattice quantum chromodynamics. Lattice QCD, while adding precision and insight to our fundamental understanding of strong interaction dynamics, has transformed our approach to particle and nuclear science. The individual-investigator approach has evolved into teams of scientists from different disciplines working side by side towards a common goal. SciDAC is also undergoing a transformation. This meeting is a prime example. Last year it was a small programmatic meeting tracking progress in SciDAC. This year, we have a major computational science meeting with a variety of disciplines and enabling technologies represented. SciDAC 2005 should position itself as a new cornerstone for computational science and its impact on science. As we look to the immediate future, FY2006 will bring a new cycle to SciDAC. Most of the program elements of SciDAC will be re-competed in FY2006. The re-competition will involve new instruments for computational science, new approaches for collaboration, and new disciplines. There will be new opportunities for virtual experiments in carbon sequestration, fusion, and nuclear power and nuclear waste, as well as collaborations with industry and virtual prototyping. New instruments of collaboration will include institutes and centers, while summer schools, workshops, and outreach will invite new talent and expertise. Computational science adds new dimensions to science and its practice. The disciplines of fusion, accelerator science, and combustion are poised to blur the boundaries between pure and applied science. As we open the door into FY2006 we shall see a landscape of new scientific challenges: in biology, chemistry, materials, and astrophysics, to name a few. The enabling technologies of SciDAC have been transformational as drivers of change. Planning for major new software systems assumes a baseline employing Common Component Architectures, which has become a household term for new software projects. While grid algorithms and mesh refinement software have transformed applications software, data management and visualization have transformed our understanding of science from data. The Gordon Bell prize now seems to be dominated by computational science and the solvers developed by the TOPS ISIC. The priorities of the Office of Science in the Department of Energy are clear. The 20-year facilities plan is driven by new science. High-performance computing is placed among the two highest priorities. Moore's law says that by the end of the next cycle of SciDAC we shall have petaflop computers. The challenges of petascale computing are enormous. 
These and the associated computational science are the highest priorities for computing within the Office of Science. Our effort in Leadership Class computing is just a first step towards this goal. Clearly, computational science at this scale will face enormous challenges and possibilities. Performance evaluation and prediction will be critical to unraveling the needed software technologies. We must not lose sight of our overarching goal, that of scientific discovery. Science does not stand still, and the landscape of science discovery and computing holds immense promise. In this environment, I believe it is necessary to institute a system of science-based performance metrics to help quantify our progress towards science goals and scientific computing. As a final comment I would like to reaffirm that the shifting landscapes of science will force changes to our computational sciences, and leave you with the quote from Richard Hamming, 'The purpose of computing is insight, not numbers'.

  11. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, Panagiotis; Cary, John

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics-process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  12. Performance Engineering Research Institute SciDAC-2 Enabling Technologies Institute Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, Robert

    2013-04-20

    Enhancing the performance of SciDAC applications on petascale systems had high priority within DOE SC at the start of the second phase of the SciDAC program, SciDAC-2, as it continues to do today. Achieving expected levels of performance on high-end computing (HEC) systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges, the University of Southern California's Information Sciences Institute organized the Performance Engineering Research Institute (PERI). PERI implemented a unified, tripartite research plan encompassing: (1) performance modeling and prediction; (2) automatic performance tuning; and (3) performance engineering of high-profile applications. Within PERI, USC's primary research activity was automatic tuning (autotuning) of scientific software. This activity was spurred by the strong user preference for automatic tools and was based on previous successful activities such as ATLAS, which automatically tuned components of the LAPACK linear algebra library, and other recent work on autotuning domain-specific libraries. Our other major component was application engagement, to which we devoted approximately 30% of our effort, working directly with SciDAC-2 applications. This report is a summary of the overall results of the USC PERI effort.
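    To convey the idea behind empirical autotuning in the ATLAS spirit (a generic illustration, not taken from the PERI or ATLAS code bases), the following Python sketch times a cache-blocked kernel at several candidate tile sizes on the machine at hand and keeps the fastest; the kernel, tile sizes, and problem size are assumptions for illustration.

        # Empirical autotuning sketch: measure, then pick the fastest variant.
        # Kernel and candidate tile sizes are illustrative assumptions.
        import time
        import numpy as np

        def blocked_transpose(A, tile):
            # Candidate kernel whose best tile size depends on the cache hierarchy.
            n = A.shape[0]
            out = np.empty_like(A)
            for ii in range(0, n, tile):
                for jj in range(0, n, tile):
                    out[jj:jj+tile, ii:ii+tile] = A[ii:ii+tile, jj:jj+tile].T
            return out

        def autotune(kernel, A, candidates=(16, 32, 64, 128, 256)):
            # Run each variant on the target machine, time it, keep the fastest.
            timings = {}
            for tile in candidates:
                t0 = time.perf_counter()
                kernel(A, tile)
                timings[tile] = time.perf_counter() - t0
            return min(timings, key=timings.get)

        A = np.random.default_rng(0).random((1024, 1024))
        print('best tile size:', autotune(blocked_transpose, A))

    Production autotuners search far larger spaces (unrolling, vectorization, scheduling), but the measure-and-select loop above is the core of the approach.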

  13. Opening Comments: SciDAC 2009

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2009-07-01

    Welcome to San Diego and the 2009 SciDAC conference. Over the next four days, I would like to present an assessment of the SciDAC program. We will look at where we've been, how we got to where we are, and where we are going in the future. Our vision is to be first in computational science, to be best in class in modeling and simulation. When Ray Orbach asked me, in my job interview for the SciDAC Director position, what I would do, I said we would achieve that vision. And with our collective dedicated efforts, we have managed to achieve it. In the last year, we have gained the most powerful supercomputer for open science, Jaguar, the Cray XT system at the Oak Ridge Leadership Computing Facility (OLCF). We also have NERSC, probably the best program in the world for the productive science on which the Office of Science so depends. And the Argonne Leadership Computing Facility offers architectural diversity with its IBM Blue Gene/P system as a counterbalance to Oak Ridge. There is also ESnet, whose role is often understated - the 40 gigabit per second dual backbone ring that connects all the labs and many DOE sites. Under the President's Recovery Act funding, ESnet is going to build out to a 100 gigabit per second network using new optical technologies - very exciting news for simulations and large-scale scientific facilities. But as one noted SciDAC luminary said, it's not all about the computers - it's also about the science - and we are achieving our vision in this area as well. Together with having the fastest supercomputer for science, at the SC08 conference SciDAC researchers won two ACM Gordon Bell Prizes for the outstanding performance of their applications. The DCA++ code, which solves some very interesting problems in materials, achieved a sustained performance of 1.3 petaflops, an astounding result and a mark I suspect will last for some time. The LS3DF application for studying nanomaterials required the development of a novel algorithm to produce results up to 400 times faster than a similar application, and was recognized with a prize for algorithm innovation - a remarkable achievement. Day one of our conference will include examples of petascale science enabled at the OLCF. Although Jaguar has not been officially commissioned, it has passed its acceptance tests, and during the shakedown phase the pioneer applications used for those tests have been running at scale. These include applications in the areas of astrophysics, biology, chemistry, combustion, fusion, geosciences, materials science, nuclear energy, and nuclear physics. We also have a whole compendium of science done at our facilities; these results have been documented and reviewed at our last SciDAC conference, and many were highlighted in our Breakthroughs Report. One session at this week's conference will feature a cross-section of these breakthroughs. In the area of scalable electromagnetic simulations, the Auxiliary-space Maxwell Solver (AMS) uses specialized finite element discretizations and multigrid-based techniques, which decompose the original problem into easier-to-solve subproblems. Congratulations to the mathematicians on this. Another entry on the list of breakthroughs was PETSc, which provides scalable solvers used in many DOE applications and has solved problems with over 3 billion unknowns, scaling to over 16,000 processors on DOE leadership-class computers. 
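    As a concrete, if toy, illustration of what a scalable solver library offers, here is a minimal petsc4py sketch that assembles a 1-D Laplacian and solves it with conjugate gradients; the problem, sizes, and solver options are assumptions for illustration, not drawn from any SciDAC application. The same code runs unchanged when distributed over MPI ranks at much larger sizes.

        # Minimal petsc4py sketch: assemble a 1-D Laplacian, solve with CG.
        # Illustrative assumptions only; not from any SciDAC code.
        from petsc4py import PETSc

        n = 1000
        A = PETSc.Mat().createAIJ([n, n], nnz=3)   # tridiagonal stencil
        rs, re = A.getOwnershipRange()             # rows owned by this rank
        for i in range(rs, re):
            A.setValue(i, i, 2.0)
            if i > 0:
                A.setValue(i, i - 1, -1.0)
            if i < n - 1:
                A.setValue(i, i + 1, -1.0)
        A.assemble()

        b = A.createVecLeft()
        b.set(1.0)                                 # uniform right-hand side
        x = A.createVecRight()

        ksp = PETSc.KSP().create()
        ksp.setOperators(A)
        ksp.setType('cg')                          # conjugate gradient solver
        ksp.getPC().setType('jacobi')              # diagonal preconditioner
        ksp.solve(b, x)
        print('converged in', ksp.getIterationNumber(), 'iterations')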
PETSc is becoming a very versatile and useful toolkit for achieving performance at scale. With the announcement of SIAM's first class of Fellows, we are remarkably well represented: of the group of 191, more than 40 of these Fellows are in the 'DOE space.' We are delighted that SIAM has recognized them for their many achievements. In the coming months, we will illustrate our leadership in applied math and computer science by looking at our contributions in the areas of programming models, development and performance tools, math libraries, system software, collaboration, and visualization and data analytics - a large and diverse portfolio. We have asked for two panels: one, chaired by David Keyes and composed of many of the nation's leading mathematicians, will produce a report on the most significant accomplishments in applied mathematics over the last eight years, taking us back to the start of the SciDAC program. In addition, a similar panel in computer science, to be chaired by Kathy Yelick, will identify the computer science accomplishments of the past eight years. These accomplishments are difficult to get a handle on, and I'm looking forward to this report. We will also have a follow-on to our report on breakthroughs in computational science; this too will go back eight years, looking at the many accomplishments under the SciDAC and INCITE programs. It will be chaired by Tony Mezzacappa. So, where are we going in the SciDAC program? It might help to take a look at computational science and how it got started. I go back to Ken Wilson, who made the model and wrote on computational science and computational science education. His model was this: the computational scientist plays the role of the experimentalist, the math and CS researchers play the role of theorists, and the computers themselves are the experimental apparatus. In simulation science, we are carrying out numerical experiments on the nature of the physical and biological sciences. In the same time frame, Peter Lax led a report on large-scale computing in science and engineering. Peter remarked, 'Perhaps the most important applications of scientific computing come not in the solution of old problems, but in the discovery of new phenomena through numerical experimentation.' And in the early years, I think the person who provided the most guidance, the most innovation, and the most vision for where the future might lie was Ed Oliver. Ed Oliver died last year. Ed did a number of things in science. He knew exactly what to do, but he preferred to stay out of the limelight so that others could enjoy the fruits of his vision. We in the SciDAC program and ASCR Facilities are still enjoying the benefits of that vision. We will miss him. Twenty years after Ken Wilson, Ray Orbach laid out the fundamental premise for SciDAC in an interview that appeared in SciDAC Review: 'SciDAC is unique in the world. There isn't any other program like it anywhere else, and it has the remarkable ability to do science by bringing together physical scientists, mathematicians, applied mathematicians, and computer scientists who recognize that computation is not something you do at the end, but rather it needs to be built into the solution of the very problem that one is addressing.' 
As you look at the Lax report from 1982, it talks about how 'Future significant improvements may have to come from architectures embodying parallel processing elements - perhaps several thousands of processors.' And it continues, 'Research in languages, algorithms and numerical analysis will be crucial in learning to exploit these new architectures fully.' In the early '90s, Sterling, Messina and Smith developed a workshop report on petascale computing and concluded, 'A petaflops computer system will be feasible in two decades, or less, and rely in part on the continual advancement of the semiconductor industry both in speed enhancement and cost reduction through improved fabrication processes.' So they were not wrong, and today we are embarking on a forward look at a different scale, the exascale, going to 10^18 flops. In 2007, Stevens, Simon and Zacharia chaired a series of town hall meetings looking at exascale computing, and in their report wrote, 'Exascale computer systems are expected to be technologically feasible within the next 15 years, or perhaps sooner. These systems will push the envelope in a number of important technologies: processor architecture, scale of multicore integration, power management and packaging.' Computing on the Jaguar machine already involves hundreds of thousands of cores, as do the IBM systems that are currently out there. So the scale of computing with systems with billions of processors is staggering to me, and I don't know how the software and math folks feel about it. We have now embarked on a road toward extreme scale computing. We have held a series of town hall meetings and are now in the process of holding workshops that address what, in DOE speak, I call 'the mission need' - the scientific justification for computing at that scale. We are going to have a total of 13 workshops. The workshops on climate, high energy physics, nuclear physics, fusion, and nuclear energy have been held. The report from the workshop on climate is out and available, and the other reports are being completed. The upcoming workshops are on biology, materials, and chemistry; the workshops that engage science for nuclear security are a partnership between NNSA and ASCR. There are additional workshops on the applied math, computer science, and architecture needed for computing at the exascale. These extreme scale workshops will provide the foundation for our office, the Office of Science, the NNSA, and DOE, and we will engage the National Science Foundation and the Department of Defense as partners. We envision a 10-year program for an exascale initiative. It will be an integrated R&D program initially - think of five years for research and development - covering hardware, operating systems, file systems, networking, and so on, as well as software for applications. Application software, the operating system, and the hardware all need to be bundled in this period so that at the end the system will execute the science applications at scale. We also believe that this process will require considerable investment from the manufacturers and vendors to be successful. We have formed laboratory, university, and industry working groups to start this process, formed a panel to look at where SciDAC needs to go to compute at the extreme scale, and formed an executive committee within the Office of Science and the NNSA to focus on these activities. We will have outreach to DoD in the next few months. 
We are anticipating a solicitation within the next two years in which we will compete this bundled R&D process. We don't know how we will incorporate SciDAC into extreme scale computing, but we do know there will be many challenges. And as we have shown over the years, we have the expertise and determination to surmount these challenges.

  14. OPENING REMARKS: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Under Secretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims, these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made, and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about 70 million for FY 2007, and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive transport modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems (such as the national and regional electricity grid), carbon sequestration, virtual engineering, and the nuclear fuel cycle. The successes of the first five years of SciDAC have demonstrated the power of using advanced computing to enable scientific discovery. One measure of this success could be found in the President’s State of the Union address, in which President Bush identified ‘supercomputing’ as a major focus area of the American Competitiveness Initiative. Funds were provided in the FY 2007 President’s Budget request to increase the size of the NERSC-5 procurement to between 100 and 150 teraflops, to upgrade the LCF Cray XT3 at Oak Ridge to 250 teraflops, and to acquire a 100 teraflop IBM BlueGene/P to establish the Leadership Computing Facility at Argonne. We believe that we are on a path to establish a petascale computing resource for open science by 2009. We must develop software tools, packages, and libraries, as well as the scientific application software, that will scale to hundreds of thousands of processors. Computer scientists from universities and the DOE’s national laboratories will be asked to collaborate on the development of critical system software components such as compilers, light-weight operating systems, and file systems. Standing up these large machines will not be business as usual for ASCR. 
We intend to develop a series of interconnected projects that identify cost, schedule, risks, and scope for the upgrades at the LCF at Oak Ridge, the establishment of the LCF at Argonne, and the development of the software to support these high-end computers. The critical first step in defining the scope of the project is to identify a set of early application codes for each leadership-class computing facility. These codes will have access to the resources during the commissioning phase of the facility projects and will be part of the acceptance tests for the machines. Applications will be selected, in part, by breakthrough science, scalability, and ability to exercise key hardware and software components. Possible early applications might include climate models; studies of the magnetic properties of nanoparticles as they relate to ultra-high-density storage media; the rational design of chemical catalysts; the modeling of combustion processes that will lead to cleaner-burning coal; and fusion and astrophysics research. I have presented just a few of the challenges that we look forward to on the road to petascale computing. Our road to petascale science might be paraphrased by the quote from e e cummings, ‘somewhere I have never traveled, gladly beyond any experience . . .’

  15. The scaling issue: scientific opportunities

    NASA Astrophysics Data System (ADS)

    Orbach, Raymond L.

    2009-07-01

    A brief history of the Leadership Computing Facility (LCF) initiative is presented, along with the importance of SciDAC to the initiative. The initiative led to the creation of the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, open to all researchers in the US and abroad, based solely on scientific merit through peer review, and awarding sizeable allocations (typically millions of processor-hours per project). The development of the nation's LCFs has enabled available INCITE processor-hours to double roughly every eight months since the program's inception in 2004. The 'top ten' LCF accomplishments in 2009 illustrate the breadth of the scientific program, while the 75 million processor-hours allocated to American business since 2006 highlight INCITE's contributions to US competitiveness. The extrapolation of INCITE processor-hours into the future brings new possibilities for many 'classic' scaling problems; complex systems and the scaling from atomic displacements to cracks are but two examples. However, even with increasing computational speeds, the development of theory, numerical representations, algorithms, and efficient implementations is required for substantial success, underscoring the crucial role that SciDAC will play.

  16. Contributions to the NUCLEI SciDAC-3 Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogner, Scott; Nazarewicz, Witek

    This is the Final Report for Michigan State University for the NUCLEI SciDAC-3 project. The NUCLEI project, as defined by the scope of work, has developed, implemented and run codes for large-scale computations of many topics in low-energy nuclear physics. Physics studied included the properties of nuclei and nuclear decays, nuclear structure and reactions, and the properties of nuclear matter. The computational techniques used included Configuration Interaction, Coupled Cluster, and Density Functional methods. The research program emphasized areas of high interest to current and possible future DOE nuclear physics facilities, including ATLAS at ANL and FRIB at MSU (nuclear structure and reactions, and nuclear astrophysics), TJNAF (neutron distributions in nuclei, few body systems, and electroweak processes), NIF (thermonuclear reactions), MAJORANA and FNPB (neutrinoless double-beta decay and physics beyond the Standard Model), and LANSCE (fission studies).

  17. Sustaining and Extending the Open Science Grid: Science Innovation on a PetaScale Nationwide Facility (DE-FC02-06ER41436) SciDAC-2 Closeout Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron; Shank, James; Ernst, Michael

    Under this SciDAC-2 grant the project’s goal was to stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. We operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid) and the Enabling Grids for E-sciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and computing programs of application communities for the benefit of consortium members and the US national CI.

  18. Harnessing the power of emerging petascale platforms

    NASA Astrophysics Data System (ADS)

    Mellor-Crummey, John

    2007-07-01

    As part of the US Department of Energy's Scientific Discovery through Advanced Computing (SciDAC-2) program, science teams are tackling problems that require computational simulation and modeling at the petascale. A grand challenge for computer science is to develop software technology that makes it easier to harness the power of these systems to aid scientific discovery. As part of its activities, the SciDAC-2 Center for Scalable Application Development Software (CScADS) is building open source software tools to support efficient scientific computing on the emerging leadership-class platforms. In this paper, we describe two tools for performance analysis and tuning that are being developed as part of CScADS: a tool for analyzing scalability and performance, and a tool for optimizing loop nests for better node performance. We motivate these tools by showing how they apply to S3D, a turbulent combustion code under development at Sandia National Laboratories. For S3D, our node performance analysis tool helped uncover several performance bottlenecks. Using our loop nest optimization tool, we transformed S3D's most costly loop nest to reduce execution time by a factor of 2.94 for a processor working on a 50^3 domain.
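    To give a flavor of the kind of loop-nest transformation such a tool automates (a generic illustration, not the CScADS tool's actual output and not the S3D loop), here is a Python sketch of loop blocking, the classic restructuring that improves cache reuse; the kernel and tile size are assumptions.

        # Generic illustration of loop blocking (tiling); not from CScADS or S3D.
        import numpy as np

        def blocked_matmul(A, B, tile=64):
            # Same arithmetic as A @ B, but iterated in cache-sized tiles so
            # each block of A and B is reused while it is still in cache.
            n, k = A.shape
            m = B.shape[1]
            C = np.zeros((n, m))
            for ii in range(0, n, tile):
                for jj in range(0, m, tile):
                    for kk in range(0, k, tile):
                        C[ii:ii+tile, jj:jj+tile] += (
                            A[ii:ii+tile, kk:kk+tile] @ B[kk:kk+tile, jj:jj+tile])
            return C

        rng = np.random.default_rng(0)
        A, B = rng.random((512, 512)), rng.random((512, 512))
        assert np.allclose(blocked_matmul(A, B), A @ B)

    In a compiled language the same restructuring is applied to the raw loop nest, and tools like the one described search for the tile sizes that best fit the target node's memory hierarchy.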

  19. The Secret Life of Quarks, Final Report for the University of North Carolina at Chapel Hill

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowler, Robert J.

    This final report summarizes activities and results at the University of North Carolina as part of the SciDAC-2 project The Secret Life of Quarks: National Computational Infrastructure for Lattice Quantum Chromodynamics. The overall objective of the project is to construct the software needed to study quantum chromodynamics (QCD), the theory of the strong interactions of subatomic physics, and similar strongly coupled gauge theories anticipated to be of importance in the LHC era. It built upon the successful efforts of the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory, in which a QCD Applications Programming Interface (QCD API) was developed that enables lattice gauge theorists to make effective use of a wide variety of massively parallel computers. In the SciDAC-2 project, optimized versions of the QCD API were created for the IBM BlueGene/L (BG/L) and BlueGene/P (BG/P), the Cray XT3/XT4 and its successors, and clusters based on multi-core processors and Infiniband communications networks. The QCD API is being used to enhance the performance of the major QCD community codes and to create new applications. Software libraries of physics tools have been expanded to contain sharable building blocks for inclusion in application codes, performance analysis and visualization tools, and software for automation of physics workflow. New software tools were designed for managing the large data sets generated in lattice QCD simulations, and for sharing them through the International Lattice Data Grid consortium. As part of the overall project, researchers at UNC were funded through ASCR to work in three general areas. The main thrust was performance instrumentation and analysis in support of the SciDAC QCD code base as it evolved and as it moved to new computation platforms. In support of the performance activities, performance data was collected in a database for the purpose of broader analysis. Third, the UNC work was done at RENCI (Renaissance Computing Institute), which has extensive expertise and facilities for scientific data visualization, so we acted in an ongoing consulting and support role in that area.

  20. SciDAC-3: Searching for Physics Beyond the Standard Model, University of Arizona component, Year 2 progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toussaint, Doug

    2014-03-21

    The Arizona component of the SciDAC-3 Lattice Gauge Theory program consisted of partial support for a postdoctoral position. In the original budget this covered three fourths of a postdoc, but the University of Arizona changed its ERE rate for postdoctoral positions from 4.3% to 21%, so the support level was closer to two thirds of a postdoc. The grant covered the work of postdoc Thomas Primer. Dr. Primer's first task was an urgent one, although it was not foreseen in our proposed work. It turned out that on the large lattices used in some of our current computations the gauge fixing code was not working as expected, and this revealed itself in inconsistent results in the correlators needed to compute the semileptonic form factors for K and D decays. Dr. Primer participated in the effort to understand this problem and to modify our codes to deal with the large lattices we are now generating (as large as 144^3 x 288). Corrected code was incorporated into our standard codes, and workarounds that allow us to use the correlators already computed with the unexpected gauge fixing have been implemented.

  1. The grand challenge of managing the petascale facility.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aiken, R. J.; Mathematics and Computer Science

    2007-02-28

    This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point.

  2. 77 FR 45345 - DOE/Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-31

    ... Recompetition results for Scientific Discovery through Advanced Computing (SciDAC) applications Co-design Public... DEPARTMENT OF ENERGY DOE/Advanced Scientific Computing Advisory Committee AGENCY: Office of... the Advanced Scientific Computing Advisory Committee (ASCAC). The Federal Advisory Committee Act (Pub...

  3. Extraordinary tools for extraordinary science: the impact of SciDAC on accelerator science and technology

    NASA Astrophysics Data System (ADS)

    Ryne, Robert D.

    2006-09-01

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, 'Facilities for the Future of Science: A Twenty-Year Outlook'. Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represents a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  4. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, P.; Cary, J.

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics-process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization for software development and applications accounts for the natural domain areas (beam dynamics, electromagnetics, and advanced acceleration), and all areas depend on the enabling technologies activities, such as solvers and component technology, to deliver the desired performance and integrated simulation environment. The ComPASS applications focus on computationally challenging problems important for design or performance optimization at all major HEP, NP, and BES accelerator facilities. With the cost and complexity of particle accelerators rising, the use of computation to optimize their designs and find improved operating regimes becomes essential, potentially leading to significant cost savings with modest investment.

  5. Computing Properties of Hadrons, Nuclei and Nuclear Matter from Quantum Chromodynamics (LQCD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negele, John W.

    Building on the success of two preceding generations of Scientific Discovery through Advanced Computing (SciDAC) projects, this grant supported the MIT component (P.I. John Negele) of a multi-institutional SciDAC-3 project that also included Brookhaven National Laboratory, the lead laboratory, with P.I. Frithjof Karsch serving as Project Director; Thomas Jefferson National Accelerator Facility, with P.I. David Richards serving as Co-director; University of Washington, with P.I. Martin Savage; University of North Carolina, with P.I. Rob Fowler; and College of William and Mary, with P.I. Andreas Stathopoulos. Nationally, this multi-institutional project coordinated the software development effort that the nuclear physics lattice QCD community needs to ensure that lattice calculations can make optimal use of forthcoming leadership-class and dedicated hardware, including that at the national laboratories, and to exploit future computational resources in the Exascale era.

  6. Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostadin, Damevski

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  7. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef; Conrad, Patrick; Bigoni, Daniele

    QUEST (www.quest-scidac.org) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT Uncertainty Quantification library, called MUQ (muq.mit.edu).
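    For readers unfamiliar with the term, forward uncertainty propagation in its simplest form is just Monte Carlo sampling: draw inputs from their assumed distributions, run the model on each draw, and summarize the distribution of outputs. The Python sketch below illustrates the idea with a deliberately toy model; the model, input distributions, and sample count are assumptions for illustration and are unrelated to MUQ's actual API.

        # Toy forward uncertainty propagation by Monte Carlo sampling.
        # The model and input distributions are illustrative assumptions.
        import numpy as np

        def model(x):
            # Stand-in for an expensive simulation with two uncertain inputs.
            return np.sin(x[0]) + 0.5 * x[1] ** 2

        rng = np.random.default_rng(42)
        # Characterize the input space: two independent Gaussian parameters.
        samples = rng.normal(loc=[0.0, 1.0], scale=[0.1, 0.2], size=(10000, 2))
        outputs = np.array([model(x) for x in samples])
        # Summarize the uncertainty induced in the model output.
        print('mean = %.4f  std = %.4f' % (outputs.mean(), outputs.std()))

    Production UQ replaces this brute-force loop with the surrogate models, adaptive sampling, and sensitivity analyses listed in the abstract, precisely because each model evaluation may cost hours on a leadership-class machine.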

  8. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  9. The impact of SciDAC on US climate change research and the IPCCAR4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wehner, Michael

    2005-07-08

    SciDAC has invested heavily in climate change research. We offer a candid opinion as to the impact of the DOE laboratories' SciDAC projects on the upcoming Fourth Assessment Report of the Intergovernmental Panel on Climate Change. As a result of the direct importance of climate change to society, climate change research is highly coordinated at the international level. The Intergovernmental Panel on Climate Change (IPCC) is charged with providing regular reports on the state of climate change research to government policymakers. These reports are the product of thousands of scientists' efforts. A series of reviews involving both scientists and policymakers make them among the most reviewed documents produced in any scientific field. The high profile of these reports acts as a driver to many researchers in the climate sciences. The Fourth Assessment Report (AR4) is scheduled to be released in 2007. SciDAC-sponsored research has enabled the United States climate modeling community to make significant contributions to this report. Two large multi-laboratory SciDAC projects are directly relevant to the activities of the IPCC. The first, entitled "Collaborative Design and Development of the Community Climate System Model for Terascale Computers", has made important software contributions to the recently released third version of the Community Climate System Model (CCSM3.0) developed at the National Center for Atmospheric Research. This is a multi-institutional project involving Los Alamos National Laboratory, Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, Pacific Northwest National Laboratory, Argonne National Laboratory, Lawrence Livermore National Laboratory and the National Center for Atmospheric Research. The original principal investigators were Robert Malone and John B. Drake. The current principal investigators are Phil Jones and John B. Drake. The second project, entitled "Earth System Grid II: Turning Climate Datasets into Community Resources", aims to facilitate the distribution of the copious amounts of data produced by coupled climate model integrations to the general scientific community. This is also a multi-institutional project involving Argonne National Laboratory, Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory and the National Center for Atmospheric Research. The principal investigators are Ian Foster, Don Middleton and Dean Williams. Perhaps most significant among the activities of the "Collaborative Design" project was the development of an efficient multi-processor coupling package. CCSM3.0 is an extraordinarily complicated physics code. The fully coupled model consists of separate submodels of the atmosphere, ocean, sea ice and land. In addition, comprehensive biogeochemistry and atmospheric chemistry submodels are under intensive current development. Each of these submodels is a large and sophisticated program in its own right. Furthermore, in the coupled model, each of the submodels, including the coupler, is a separate multiprocessor executable program. The coupler package must efficiently coordinate the communication as well as interpolate or aggregate information between these programs, as sketched below. This regridding function is necessary because each major subsystem (air, water or surface) is allowed to have its own independent grid.
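
    The regridding step described above can be illustrated with a toy one-dimensional example in Python; the grids and the field below are invented, and production couplers use conservative, area-weighted remapping in two dimensions rather than simple linear interpolation.

      import numpy as np

      # A field computed on the atmosphere submodel's grid must be handed to
      # the ocean submodel, which uses its own independent grid.
      atm_grid = np.linspace(0.0, 90.0, 64)    # e.g., latitude points
      ocn_grid = np.linspace(0.0, 90.0, 100)   # a different, finer grid

      heat_flux = np.cos(np.deg2rad(atm_grid))  # field on the atmosphere grid

      # The coupler's regridding function: interpolate onto the ocean grid.
      flux_on_ocn = np.interp(ocn_grid, atm_grid, heat_flux)
      print(flux_on_ocn[:5])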

  10. Final Technical Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knio, Omar M.

    QUEST is a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, University of Southern California, Massachusetts Institute of Technology, University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The Duke effort focused on the development of algorithms and utility software for non-intrusive sparse UQ representations, and on participation in the organization of annual workshops and tutorials to disseminate UQ tools to the community, and to gather input in order to adapt approaches to the needs of SciDAC customers. In particular, fundamental developments were made in (a) multiscale stochastic preconditioners, (b) gradient-based approaches to inverse problems, (c) adaptive pseudo-spectral approximations, (d) stochastic limit cycles, and (e) sensitivity analysis tools for noisy systems. In addition, large-scale demonstrations were performed, namely in the context of ocean general circulation models.
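
    The non-intrusive spectral constructions mentioned above (item (c) in particular) can be sketched in a few lines of Python for a single standard-normal input; the toy model, the fixed polynomial order, and the one-dimensional setting are simplifying assumptions, and the adaptive, sparse, multi-dimensional machinery of the actual work goes well beyond this illustration.

      import numpy as np
      from numpy.polynomial.hermite_e import hermegauss, hermeval
      from math import factorial, sqrt, pi

      def f(x):
          # Stand-in for an expensive model with one standard-normal input.
          return np.exp(0.3 * x)

      order = 6
      # Gauss quadrature for the weight exp(-x^2/2) (probabilists' Hermite).
      nodes, weights = hermegauss(order + 1)

      # Pseudo-spectral projection: c_k = E[f(X) He_k(X)] / k!
      coeffs = np.array([
          np.sum(weights * f(nodes) * hermeval(nodes, np.eye(order + 1)[k]))
          / (sqrt(2 * pi) * factorial(k))
          for k in range(order + 1)
      ])

      # Mean and variance of the surrogate follow from orthogonality.
      mean = coeffs[0]
      var = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))
      print(mean, var)   # compare with exact values for this f: ~1.0460, ~0.1030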

  11. Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghanem, Roger

    QUEST was a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, the Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University. The mission of QUEST was to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The USC effort centered on the development of reduced models and efficient algorithms for implementing various components of the UQ pipeline. USC personnel were responsible for the development of adaptive bases, adaptive quadrature, and reduced models to be used in estimation and inference.

  12. Performance Engineering Research Institute SciDAC-2 Enabling Technologies Institute Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Mary

    2014-09-19

    Enhancing the performance of SciDAC applications on petascale systems has high priority within DOE SC. As we look to the future, achieving expected levels of performance on high-end computing (HEC) systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges, PERI has implemented a unified, tripartite research plan encompassing: (1) performance modeling and prediction; (2) automatic performance tuning; and (3) performance engineering of high-profile applications. The PERI performance modeling and prediction activity is developing and refining performance models, significantly reducing the cost of collecting the data upon which the models are based, and increasing model fidelity, speed and generality. Our primary research activity is automatic tuning (autotuning) of scientific software. This activity is spurred by the strong user preference for automatic tools and is based on previous successful activities such as ATLAS, which has automatically tuned components of the LAPACK linear algebra library, and other recent work on autotuning domain-specific libraries. Our third major component is application engagement, to which we are devoting approximately 30% of our effort to work directly with SciDAC-2 applications. This last activity not only helps DOE scientists meet their near-term performance goals, but also helps keep PERI research focused on the real challenges facing DOE computational scientists as they enter the Petascale Era.
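
    The empirical-search idea behind autotuners such as ATLAS can be sketched generically in Python (this is an illustration of the concept, not PERI's tooling): implement a kernel with a tunable parameter, time each candidate on the target machine, and keep the fastest.

      import time
      import numpy as np

      def blocked_transpose(a, out, bs):
          # Cache-blocked transpose; the block size bs is the tuning knob.
          n = a.shape[0]
          for i in range(0, n, bs):
              for j in range(0, n, bs):
                  out[j:j + bs, i:i + bs] = a[i:i + bs, j:j + bs].T
          return out

      n = 2048
      a = np.random.rand(n, n)
      out = np.empty_like(a)

      # Empirical search over the parameter space; the best choice depends on
      # the cache hierarchy of the machine the search runs on.
      timings = {}
      for bs in (16, 32, 64, 128, 256, 512):
          t0 = time.perf_counter()
          blocked_transpose(a, out, bs)
          timings[bs] = time.perf_counter() - t0

      best = min(timings, key=timings.get)
      print("selected block size:", best)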

  13. Nuclear Computational Low Energy Initiative (NUCLEI)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reddy, Sanjay K.

    This is the final report for the University of Washington component of the NUCLEI SciDAC-3 project. The NUCLEI project, as defined by the scope of work, will develop, implement and run codes for large-scale computations of many topics in low-energy nuclear physics. Physics to be studied includes the properties of nuclei and nuclear decays, nuclear structure and reactions, and the properties of nuclear matter. The computational techniques to be used include Quantum Monte Carlo, Configuration Interaction, Coupled Cluster, and Density Functional methods. The research program will emphasize areas of high interest to current and possible future DOE nuclear physics facilities, including ATLAS and FRIB (nuclear structure and reactions, and nuclear astrophysics), TJNAF (neutron distributions in nuclei, few-body systems, and electroweak processes), NIF (thermonuclear reactions), MAJORANA and FNPB (neutrino-less double-beta decay and physics beyond the Standard Model), and LANSCE (fission studies).

  14. SciDAC Center for Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Zhihong

    2013-12-18

    During the first year of the SciDAC gyrokinetic particle simulation (GPS) project, the GPS team (Zhihong Lin, Liu Chen, Yasutaro Nishimura, and Igor Holod) at the University of California, Irvine (UCI) studied tokamak electron transport driven by electron temperature gradient (ETG) turbulence, and by trapped electron mode (TEM) turbulence and ion temperature gradient (ITG) turbulence with kinetic electron effects, and extended our studies of ITG turbulence spreading to core-edge coupling. We have developed and optimized an elliptic solver using the finite element method (FEM), which enables the implementation of advanced kinetic electron models (split-weight scheme and hybrid model) in the SciDAC GPS production code GTC. The GTC code has been ported and optimized on both scalar and vector parallel computer architectures, and is being transformed into object-oriented style to facilitate collaborative code development. During this period, the UCI team members presented 11 invited talks at major national and international conferences, and published 22 papers in peer-reviewed journals and 10 papers in conference proceedings. UCI hosted the annual SciDAC Workshop on Plasma Turbulence sponsored by the GPS Center, 2005-2007. The workshop was attended by about fifty US and foreign researchers and financially sponsored several graduate students from MIT, Princeton University, Germany, Switzerland, and Finland. A new SciDAC postdoc, Igor Holod, has arrived at UCI to initiate global particle simulation of magnetohydrodynamic turbulence driven by energetic particle modes. The PI, Z. Lin, has been promoted to Associate Professor with tenure at UCI.

  15. SciDAC's Earth System Grid Center for Enabling Technologies Semiannual Progress Report October 1, 2010 through March 31, 2011

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Dean N.

    2011-04-02

    This report summarizes work carried out by the Earth System Grid Center for Enabling Technologies (ESG-CET) from October 1, 2010 through March 31, 2011. It discusses ESG-CET highlights for the reporting period, overall progress, period goals, and collaborations, and lists papers and presentations. To learn more about our project and to find previous reports, please visit the ESG-CET Web sites: http://esg-pcmdi.llnl.gov/ and/or https://wiki.ucar.edu/display/esgcet/Home. This report will be forwarded to managers in the Department of Energy (DOE) Scientific Discovery through Advanced Computing (SciDAC) program and the Office of Biological and Environmental Research (OBER), as well as national and international collaborators and stakeholders (e.g., those involved in the Coupled Model Intercomparison Project, phase 5 (CMIP5) for the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5); the Community Earth System Model (CESM); the Climate Science Computational End Station (CCES); SciDAC II: A Scalable and Extensible Earth System Model for Climate Change Science; the North American Regional Climate Change Assessment Program (NARCCAP); the Atmospheric Radiation Measurement (ARM) program; the National Aeronautics and Space Administration (NASA); and the National Oceanic and Atmospheric Administration (NOAA)), and also to researchers working on a variety of other climate model and observation evaluation activities. The ESG-CET executive committee consists of Dean N. Williams, Lawrence Livermore National Laboratory (LLNL); Ian Foster, Argonne National Laboratory (ANL); and Don Middleton, National Center for Atmospheric Research (NCAR). The ESG-CET team is a group of researchers and scientists with diverse domain knowledge, whose home institutions include eight laboratories and two universities: ANL, Los Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL), LLNL, NASA/Jet Propulsion Laboratory (JPL), NCAR, Oak Ridge National Laboratory (ORNL), Pacific Marine Environmental Laboratory (PMEL)/NOAA, Rensselaer Polytechnic Institute (RPI), and University of Southern California, Information Sciences Institute (USC/ISI). All ESG-CET work is accomplished under DOE open-source guidelines and in close collaboration with the project's stakeholders, domain researchers, and scientists. Through the ESG project, the ESG-CET team has developed and delivered a production environment for climate data from multiple climate model sources (e.g., CMIP (IPCC), CESM, ocean model data (e.g., Parallel Ocean Program), observation data (e.g., Atmospheric Infrared Sounder, Microwave Limb Sounder), and analysis and visualization tools) that serves a worldwide climate research community. Data holdings are distributed across multiple sites including LANL, LBNL, LLNL, NCAR, and ORNL, as well as unfunded partner sites such as the Australian National University (ANU) National Computational Infrastructure (NCI), the British Atmospheric Data Center (BADC), the Geophysical Fluid Dynamics Laboratory/NOAA, the Max Planck Institute for Meteorology (MPI-M), the German Climate Computing Centre (DKRZ), and NASA/JPL. As we transition from development activities to production and operations, the ESG-CET team is tasked with making data available to all users who want to understand it, process it, extract value from it, visualize it, and/or communicate it to others. This ongoing effort is extremely large and complex, but it will be incredibly valuable for building 'science gateways' to critical climate resources (such as CESM, CMIP5, ARM, NARCCAP, Atmospheric Infrared Sounder (AIRS), etc.) for processing the next IPCC assessment report. Continued ESG progress will result in a production-scale system that will empower scientists to attempt new and exciting data exchanges, which could ultimately lead to breakthrough climate science discoveries.

  16. GW Calculations of Materials on the Intel Xeon-Phi Architecture

    NASA Astrophysics Data System (ADS)

    Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek; Biller, Ariel; Chelikowsky, James R.; Louie, Steven G.

    Intel Xeon-Phi processors are expected to power a large number of High-Performance Computing (HPC) systems around the United States and the world in the near future. We evaluate the ability of GW and prerequisite Density Functional Theory (DFT) calculations for materials to utilize the Xeon-Phi architecture. We describe the optimization process and the performance improvements achieved. We find that the GW method, like other higher-level many-body methods beyond standard local/semilocal approximations to Kohn-Sham DFT, is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism over plane-waves, band-pairs and frequencies. Support provided by the SciDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-AC02-05CH11231 (LBNL).

  17. Extraordinary Tools for Extraordinary Science: The Impact of SciDAC on Accelerator Science & Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryne, Robert D.

    2006-08-10

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document "Facilities for the Future of Science: A Twenty-Year Outlook". Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represents a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  18. ESnet authentication services and trust federations

    NASA Astrophysics Data System (ADS)

    Muruganantham, Dhivakaran; Helm, Mike; Genovese, Tony

    2005-01-01

    ESnet provides authentication services and trust federation support for SciDAC projects, collaboratories, and other distributed computing applications. The ESnet ATF team operates the DOEGrids Certificate Authority, available to all DOE Office of Science programs, plus several custom CAs, including one for the National Fusion Collaboratory and one for NERSC. The secure hardware and software environment developed to support CAs is suitable for supporting additional custom authentication and authorization applications that your program might require. Seamless, secure interoperation across organizational and international boundaries is vital to collaborative science. We are fostering the development of international PKI federations by founding the TAGPMA, the American regional PMA, and the worldwide IGTF Policy Management Authority (PMA), as well as participating in European and Asian regional PMAs. We are investigating and prototyping distributed authentication technology that will allow us to support the "roaming scientist" (distributed wireless via eduroam), as well as more secure authentication methods (one-time password tokens).

  19. Final Report for "Tech-X Corporation work for the SciDAC Center for Simulation of RF Wave Interactions with Magnetohydrodynamics (SWIM)"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkins, Thomas G.; Kruger, Scott E.

    Work carried out by Tech-X Corporation for the DoE SciDAC Center for Simulation of RF Wave Interactions with Magnetohydrodynamics (SWIM; U.S. DoE Office of Science Award Number DE-FC02-06ER54899) is summarized and is shown to fulfill the project objectives. The Tech-X portion of the SWIM work focused on the development of analytic and computational approaches to study neoclassical tearing modes and their interaction with injected electron cyclotron current drive. Using formalism developed by Hegna, Callen, and Ramos [Phys. Plasmas 16, 112501 (2009); Phys. Plasmas 17, 082502 (2010); Phys. Plasmas 18, 102506 (2011)], analytic approximations for the RF interaction were derived and the numerical methods needed to implement these interactions in the NIMROD extended MHD code were developed. Using the SWIM IPS framework, NIMROD has successfully coupled to GENRAY, an RF ray-tracing code; additionally, a numerical control system to trigger the RF injection, adjustment, and shutdown in response to tearing mode activity has been developed. We discuss these accomplishments, as well as prospects for ongoing future research that this work has enabled (which continues in a limited fashion under the SciDAC Center for Extended Magnetohydrodynamic Modeling (CEMM) project and under a baseline theory grant). Associated conference presentations, published articles, and publications in progress are also listed.

  20. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Forest M.; Bochev, Pavel B.; Cameron-Smith, Philip J.

    The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) Project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.

  1. Developing Models for Predictive Climate Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drake, John B; Jones, Philip W

    2007-01-01

    The Community Climate System Model results from a multi-agency collaboration designed to construct cutting-edge climate science simulation models for a broad research community. Predictive climate simulations are currently being prepared for the petascale computers of the near future. Modeling capabilities are continuously being improved in order to provide better answers to critical questions about Earth's climate. Climate change and its implications are front page news in today's world. Could global warming be responsible for the July 2006 heat waves in Europe and the United States? Should more resources be devoted to preparing for an increase in the frequency of strong tropical storms and hurricanes like Katrina? Will coastal cities be flooded due to a rise in sea level? The National Climatic Data Center (NCDC), which archives all weather data for the nation, reports that global surface temperatures have increased over the last century, and that the rate of increase is three times greater since 1976. Will temperatures continue to climb at this rate, will they decline again, or will the rate of increase become even steeper? To address such a flurry of questions, scientists must adopt a systematic approach and develop a predictive framework. With responsibility for advising on energy and technology strategies, the DOE is dedicated to advancing climate research in order to elucidate the causes of climate change, including the role of carbon loading from fossil fuel use. Thus, climate science--which by nature involves advanced computing technology and methods--has been the focus of a number of DOE's SciDAC research projects. Dr. John Drake (ORNL) and Dr. Philip Jones (LANL) served as principal investigators on the SciDAC project, 'Collaborative Design and Development of the Community Climate System Model for Terascale Computers.' The Community Climate System Model (CCSM) is a fully-coupled global system that provides state-of-the-art computer simulations of the Earth's past, present, and future climate states. The collaborative SciDAC team--including over a dozen researchers at institutions around the country--developed, validated, documented, and optimized the performance of CCSM using the latest software engineering approaches, computational technology, and scientific knowledge. Many of the factors that must be accounted for in a comprehensive model of the climate system are illustrated in figure 1.

  2. Final Report for DOE Project: Portal Web Services: Support of DOE SciDAC Collaboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mary Thomas, PI; Geoffrey Fox, Co-PI; Gannon, D

    2007-10-01

    Grid portals provide the scientific community with familiar and simplified interfaces to the Grid and Grid services, and it is important to deploy grid portals onto the SciDAC grids and collaboratories. The goal of this project is the research, development and deployment of interoperable portal and web services that can be used on SciDAC National Collaboratory grids. This project has four primary task areas: development of portal systems; management of data collections; DOE science application integration; and development of web and grid services in support of the above activities.

  3. Final Report for "Implementation and Evaluation of Multigrid Linear Solvers into Extended Magnetohydrodynamic Codes for Petascale Computing"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srinath Vadlamani; Scott Kruger; Travis Austin

    Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel-processor solvers to date are multigrid solvers. Applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising solution to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL, via PETSc from the DOE SciDAC TOPS project, for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC. We implemented the multigrid solvers on the fusion test problem that allows for real matrix systems with success, and in the process learned about the details of NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow for efficient usage of future petascale computers at the National Leadership Facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.
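
    As a rough illustration of the solver configuration this describes, the sketch below sets up a HYPRE-preconditioned Krylov solve through PETSc's Python bindings (petsc4py). It assumes a PETSc build configured with HYPRE support, and the simple 1D Laplacian is only a stand-in for the ill-conditioned extended-MHD matrices discussed above.

      from petsc4py import PETSc

      n = 1000
      # Assemble a 1D Laplacian as a stand-in test matrix.
      A = PETSc.Mat().createAIJ([n, n], nnz=3)
      for i in range(n):
          A.setValue(i, i, 2.0)
          if i > 0:
              A.setValue(i, i - 1, -1.0)
          if i < n - 1:
              A.setValue(i, i + 1, -1.0)
      A.assemble()

      b = A.createVecLeft()
      b.set(1.0)
      x = A.createVecRight()

      ksp = PETSc.KSP().create()
      ksp.setOperators(A)
      ksp.setType('cg')                # Krylov method
      ksp.getPC().setType('hypre')     # multigrid preconditioner from HYPRE
      ksp.setTolerances(rtol=1e-8)
      ksp.setFromOptions()             # e.g., -pc_hypre_type boomeramg
      ksp.solve(b, x)
      print('iterations:', ksp.getIterationNumber())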

  4. Scalable real space pseudopotential density functional codes for materials in the exascale regime

    NASA Astrophysics Data System (ADS)

    Lena, Charles; Chelikowsky, James; Schofield, Grady; Biller, Ariel; Kronik, Leeor; Saad, Yousef; Deslippe, Jack

    Real-space pseudopotential density functional theory has proven to be an efficient method for computing the properties of matter in many different states and geometries, including liquids, wires, slabs, and clusters with and without spin polarization. Fully self-consistent solutions using this approach have been routinely obtained for systems with thousands of atoms. Yet, there are many systems of notably larger size where quantum mechanical accuracy is desired, but scalability proves to be a hindrance. Such systems include large biological molecules, complex nanostructures, or mismatched interfaces. We will present an overview of our new massively parallel algorithms, which offer improved scalability in preparation for exascale supercomputing. We will illustrate these algorithms by considering the electronic structure of a Si nanocrystal exceeding 10^4 atoms. Support provided by the SciDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-FG02-12ER4 (Berkeley).

  5. Implementation of polyatomic MCTDHF capability

    NASA Astrophysics Data System (ADS)

    Haxton, Daniel; Jones, Jeremiah; Rescigno, Thomas; McCurdy, C. William; Ibrahim, Khaled; Williams, Sam; Vecharynski, Eugene; Rouet, Francois-Henry; Li, Xiaoye; Yang, Chao

    2015-05-01

    The implementation of the Multiconfiguration Time-Dependent Hartree-Fock method for polyatomic molecules using a cartesian product grid of sinc basis functions will be discussed. The focus will be on two key components of the method: first, the use of a resolution-of-the-identity approximation; second, the use of established techniques for triple Toeplitz matrix algebra using fast Fourier transform over distributed memory architectures (MPI 3D FFT). The scaling of two-electron matrix element transformations is converted from O(N^4) to O(N log N) by including these components. Here N = n^3, with n the number of points on a side. We test the preliminary implementation by calculating absorption spectra of small hydrocarbons, using approximately 16-512 points on a side. This work is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under the Early Career program, and by the offices of BES and Advanced Scientific Computing Research, under the SciDAC program.
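
    The payoff of the Toeplitz-plus-FFT structure mentioned above is that a translation-invariant operator can be applied in O(N log N) instead of O(N^2). Below is a one-dimensional Python sketch of the idea; the kernel and sizes are invented, and the actual code works with distributed 3D FFTs.

      import numpy as np

      n = 256
      # A periodic (circulant) kernel; the matrix is C[i, j] = kernel[(i - j) % n].
      kernel = np.exp(-0.1 * np.minimum(np.arange(n), n - np.arange(n)))
      v = np.random.rand(n)

      # Direct dense application: O(n^2).
      C = np.array([[kernel[(i - j) % n] for j in range(n)] for i in range(n)])
      direct = C @ v

      # FFT diagonalizes circulant operators: C v = IFFT(FFT(kernel) * FFT(v)),
      # at O(n log n) cost.
      fast = np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(v)).real

      print(np.max(np.abs(direct - fast)))   # agreement to round-off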

  6. SciDAC GSEP: Gyrokinetic Simulation of Energetic Particle Turbulence and Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Zhihong

    Energetic particle (EP) confinement is a key physics issue for the burning plasma experiment ITER, the crucial next step in the quest for clean and abundant energy, since ignition relies on self-heating by energetic fusion products (α-particles). Due to the strong coupling of EPs with burning thermal plasmas, the plasma confinement property in the ignition regime is one of the most uncertain factors when extrapolating from existing fusion devices to the ITER tokamak. EP populations in current tokamaks are mostly produced by auxiliary heating such as neutral beam injection (NBI) and radio frequency (RF) heating. Remarkable progress in developing comprehensive EP simulation codes and understanding basic EP physics has been made by two concurrent SciDAC EP projects (GSEP) funded by the Department of Energy (DOE) Office of Fusion Energy Science (OFES), which have successfully established gyrokinetic turbulence simulation as a necessary paradigm shift for studying EP confinement in burning plasmas. Verification and validation have rapidly advanced through close collaborations between simulation, theory, and experiment. Furthermore, productive collaborations with computational scientists have enabled EP simulation codes to effectively utilize current petascale computers and emerging exascale computers. We review here key physics progress in the GSEP projects regarding verification and validation of gyrokinetic simulations, nonlinear EP physics, EP coupling with thermal plasmas, and reduced EP transport models. Advances in high performance computing through collaborations with computational scientists that enable these large scale electromagnetic simulations are also highlighted. These results have been widely disseminated in numerous peer-reviewed publications, including many Phys. Rev. Lett. papers, and many invited presentations at prominent fusion conferences such as the biennial International Atomic Energy Agency (IAEA) Fusion Energy Conference and the annual meeting of the American Physical Society, Division of Plasma Physics (APS-DPP).

  7. Simulations of Turbulent Flows with Strong Shocks and Density Variations: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanjiva Lele

    2012-10-01

    The target of this SciDAC Science Application was to develop a new capability based on high-order and high-resolution schemes to simulate shock-turbulence interactions and multi-material mixing in planar and spherical geometries, and to study Rayleigh-Taylor and Richtmyer-Meshkov turbulent mixing. These fundamental problems have direct application in high-speed engineering flows, such as inertial confinement fusion (ICF) capsule implosions and scramjet combustion, and also in the natural occurrence of supernova explosions. Another component of this project was the development of subgrid-scale (SGS) models for large-eddy simulations of flows involving shock-turbulence interaction and multi-material mixing, which were to be validated against the DNS databases generated during the program. The numerical codes developed are designed for massively parallel computer architectures, ensuring good scaling performance. Their algorithms were validated by means of a sequence of benchmark problems. The original multi-stage plan for this five-year project included the following milestones: 1) refinement of numerical algorithms for application to the shock-turbulence interaction problem and multi-material mixing (years 1-2); 2) direct numerical simulations (DNS) of canonical shock-turbulence interaction (years 2-3), targeted at improving our understanding of the physics behind the combined two phenomena and also at guiding the development of SGS models; 3) large-eddy simulations (LES) of shock-turbulence interaction (years 3-5), improving SGS models based on the DNS obtained in the previous phase; 4) DNS of planar/spherical RM multi-material mixing (years 3-5), also with the two-fold objective of gaining insight into the relevant physics of this instability and aiding in devising new modeling strategies for multi-material mixing; 5) LES of planar/spherical RM mixing (years 4-5), integrating the improved SGS and multi-material models developed in the earlier stages. This final report is outlined as follows. Section 2 shows an assessment of numerical algorithms that are best suited for the numerical simulation of compressible flows involving turbulence and shock phenomena. Sections 3 and 4 deal with the canonical shock-turbulence interaction problem, from the DNS and LES perspectives, respectively. Section 5 considers the shock-turbulence interaction in spherical geometry, in particular, the interaction of a converging shock with isotropic turbulence as well as the problem of the blast wave. Section 6 describes the study of shock-accelerated mixing through planar and spherical Richtmyer-Meshkov mixing as well as the shock-curtain interaction problem. In Section 7 we acknowledge the different interactions between Stanford and other institutions participating in this SciDAC project, as well as several external collaborations made possible through it. Section 8 presents a list of publications and presentations that have been generated during the course of this SciDAC project. Finally, Section 9 concludes this report with the list of personnel at Stanford University funded by this SciDAC project.

  8. Deterministic alternatives to the full configuration interaction quantum Monte Carlo method for strongly correlated systems

    NASA Astrophysics Data System (ADS)

    Tubman, Norm; Whaley, Birgitta

    The development of exponential-scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
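
    The deterministic selection idea can be caricatured on a toy matrix in Python. This is a schematic of the grow-and-diagonalize loop shared by ASCI-style methods, not the actual production algorithm; the random "Hamiltonian", the batch size, and the iteration count are all invented.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 400
      # Sparse random symmetric stand-in for a Hamiltonian in a determinant basis.
      H = rng.normal(size=(n, n)) * (rng.random((n, n)) < 0.05)
      H = (H + H.T) / 2
      H[np.diag_indices(n)] = np.sort(rng.normal(size=n)) * 5

      space = [0]                    # start from the lowest-diagonal determinant
      for _ in range(6):             # grow-and-diagonalize iterations
          sub = H[np.ix_(space, space)]
          evals, evecs = np.linalg.eigh(sub)
          e0, c = evals[0], evecs[:, 0]
          # Perturbative importance of each determinant outside the space.
          outside = [a for a in range(n) if a not in space]
          score = {a: abs(H[a, space] @ c / (e0 - H[a, a])) for a in outside}
          # Deterministically add the most important determinants.
          space += sorted(score, key=score.get, reverse=True)[:20]

      print("space size:", len(space), " ground-state estimate:", e0)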

  9. Domain Wall Fermion Inverter on Pentium 4

    NASA Astrophysics Data System (ADS)

    Pochinsky, Andrew

    2005-03-01

    A highly optimized domain wall fermion inverter has been developed as part of the SciDAC lattice initiative. By designing the code to minimize memory bus traffic, it achieves high cache reuse and performance in excess of 2 GFlops for out-of-L2-cache problem sizes on a GigE cluster with 2.66 GHz Xeon processors. The code uses the SciDAC QMP communication library.

  10. Progress towards an effective model for FeSe from high-accuracy first-principles quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Busemeyer, Brian; Wagner, Lucas K.

    While the origin of superconductivity in the iron-based materials is still controversial, the proximity of the superconductivity to magnetic order suggests that magnetism may be important. Our previous work has suggested that first-principles Diffusion Monte Carlo (FN-DMC) can capture magnetic properties of iron-based superconductors that density functional theory (DFT) misses, but which are consistent with experiment. We report on the progress of efforts to find simple effective models consistent with the FN-DMC description of the low-lying Hilbert space of the iron-based superconductor FeSe. We utilize a procedure outlined by Changlani et al. [1], which both produces parameter values and indicates whether the model is a good description of the first-principles Hamiltonian. Using this procedure, we evaluate several models of the magnetic part of the Hilbert space found in the literature, as well as the Hubbard model and a spin-fermion model. We discuss which interaction parameters are important for this material, and how the material-specific properties give rise to these interactions. Supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award No. FG02-12ER46875, as well as the NSF Graduate Research Fellowship Program.

  11. Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sussman, Alan

    2014-10-21

    This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.

  12. The Impact of the Nuclear Equation of State in Core Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Baird, M. L.; Lentz, E. J.; Hix, W. R.; Mezzacappa, A.; Messer, O. E. B.; Liebendoerfer, M.; TeraScale Supernova Initiative Collaboration

    2005-12-01

    One of the key ingredients of the core collapse supernova mechanism is the physics of matter at or near nuclear density. Included in simulations as part of the Equation of State (EOS), the nuclear repulsion experienced at high densities is responsible for the bounce shock, which initially causes the outer envelope of the supernova to expand, as well as for determining the structure of the newly formed proto-neutron star. Recent years have seen renewed interest in this fundamental piece of supernova physics, resulting in several promising candidate EOS parameterizations. We will present the impact of these variations in the nuclear EOS using spherically symmetric, Newtonian and general relativistic neutrino transport simulations of stellar core collapse and bounce. This work is supported in part by SciDAC grants to the TeraScale Supernova Initiative from the DOE Office of Science High Energy, Nuclear, and Advanced Scientific Computing Research Programs. Oak Ridge National Laboratory is managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.

  13. Final Report for DOE Grant DE-FG02-03ER25579; Development of High-Order Accurate Interface Tracking Algorithms and Improved Constitutive Models for Problems in Continuum Mechanics with Applications to Jetting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puckett, Elbridge Gerry; Miller, Gregory Hale

    Much of the work conducted under the auspices of DE-FG02-03ER25579 was characterized by an exceptionally close collaboration with researchers at the Lawrence Berkeley National Laboratory (LBNL). For example, Andy Nonaka, one of Professor Miller's graduate students in the Department of Applied Science at U.C. Davis (UCD), wrote his PhD thesis in an area of interest to researchers in the Applied Numerical Algorithms Group (ANAG), which is part of the National Energy Research Supercomputer Center (NERSC) at LBNL. Dr. Nonaka collaborated closely with these researchers and subsequently published the results of this collaboration jointly with them, one article in a peer-reviewed journal and one paper in the proceedings of a conference. Dr. Nonaka is now a research scientist in the Center for Computational Sciences and Engineering (CCSE), which is also part of NERSC at LBNL. This collaboration with researchers at LBNL also included having one of Professor Puckett's graduate students in the Graduate Group in Applied Mathematics (GGAM) at UCD, Sarah Williams, spend the summer working with Dr. Ann Almgren, who is a staff scientist in CCSE. As a result of this visit Sarah decided to work on a problem suggested by the head of CCSE, Dr. John Bell, for her PhD thesis. Having finished all of the coursework and examinations required for a PhD, Sarah stayed at LBNL to work on her thesis under the guidance of Dr. Bell. Sarah finished her PhD thesis in June of 2007. Writing a PhD thesis while working at one of the University of California (UC) managed DOE laboratories is a long-established tradition at UC, and Professor Puckett has always encouraged his students to consider doing this. Another one of Professor Puckett's graduate students in the GGAM at UCD, Christopher Algieri, was partially supported with funds from DE-FG02-03ER25579 while he wrote his MS thesis, in which he analyzed and extended work originally published by Dr. Phillip Colella, the head of ANAG, and some of his colleagues. Chris Algieri is now employed as a staff member in Dr. Bill Collins' Climate Science Department in the Earth Sciences Division at LBNL, working with computational models of climate change. Finally, it should be noted that the work conducted by Professor Puckett and his students Sarah Williams and Chris Algieri described in this final report for DOE grant DE-FG02-03ER25579 is closely related to work performed by Professor Puckett and his students under the auspices of Professor Puckett's DOE SciDAC grant DE-FC02-01ER25473, "An Algorithmic and Software Framework for Applied Partial Differential Equations: A DOE SciDAC Integrated Software Infrastructure Center (ISIC)". Dr. Colella was the lead PI for this SciDAC grant, which comprised several research groups from DOE national laboratories and five university PIs from five different universities. In theory, Professor Puckett tried to use funds from the SciDAC grant to support work directly involved in implementing algorithms developed by members of his research group at UCD as software that might be of use to Puckett's SciDAC Co-PIs. (For example, see the work reported in Section 2.2.2 of this final report.) However, since there is considerable lead time spent developing such algorithms before they are ready to become 'software', and research plans and goals change as the research progresses, Professor Puckett supported each member of his research group partially with funds from the SciDAC APDEC ISIC grant DE-FC02-01ER25473 and partially with funds from this DOE MICS grant DE-FG02-03ER25579. This has necessarily resulted in a significant overlap of project areas that were funded by both grants. In particular, both Sarah Williams and Chris Algieri were supported partially with funds from grant DE-FG02-03ER25579, for which this is the final report, and in part with funds from Professor Puckett's DOE SciDAC grant DE-FC02-01ER25473. For example, Sarah Williams received support from DE-FC02-01ER25473 and DE-FG02-03ER25579, both while at UCD taking classes and writing her MS thesis and during the first year she was living in Berkeley and working at LBNL on her PhD thesis. In Chris Algieri's case, he was at UCD during the entire time he received support from both grants. More specific details of their work are included in the report.

  14. Iowa State University – Final Report for SciDAC3/NUCLEI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vary, James P

    The Iowa State University (ISU) contributions to the NUCLEI project are focused on developing, implementing and running an efficient and scalable configuration interaction code (Many-Fermion Dynamics – nuclear, or MFDn) for leadership-class supercomputers, addressing forefront research problems in low-energy nuclear physics. We investigate nuclear structure and reactions with realistic nucleon-nucleon (NN) and three-nucleon (3N) interactions. We select a few highlights from our work, which has produced a total of more than 82 refereed publications and more than 109 invited talks under SciDAC3/NUCLEI.

  15. Electron Correlation in Oxygen Vacancy in SrTiO3

    NASA Astrophysics Data System (ADS)

    Lin, Chungwei; Demkov, Alexander A.

    2014-03-01

    Oxygen vacancies are an important type of defect in transition metal oxides. In SrTiO3 they are believed to be the main donors in an otherwise intrinsic crystal. At the same time, a relatively deep gap state associated with the vacancy is widely reported. To explain this inconsistency we investigate the effect of electron correlation in an oxygen vacancy (OV) in SrTiO3. When taking correlation into account, we find that the OV-induced localized level can at most trap one electron, while the second electron occupies the conduction band. Our results offer a natural explanation of how the OV in SrTiO3 can produce a deep in-gap level (about 1 eV below the conduction band bottom) in photoemission and at the same time be an electron donor. Our analysis implies that an OV in SrTiO3 should fundamentally be regarded as a magnetic impurity, whose deep level is always partially occupied due to the strong Coulomb repulsion. An OV-based Anderson impurity model is derived, and its implications are discussed. This work was supported by the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences under award number DE-SC0008877.

  16. Efficient GW calculations using eigenvalue-eigenvector decomposition of the dielectric matrix

    NASA Astrophysics Data System (ADS)

    Nguyen, Huy-Viet; Pham, T. Anh; Rocca, Dario; Galli, Giulia

    2011-03-01

    During the past 25 years, the GW method has been successfully used to compute electronic quasi-particle excitation spectra of a variety of materials. It is however a computationally intensive technique, as it involves summations over occupied and empty electronic states, to evaluate both the Green function (G) and the dielectric matrix (DM) entering the expression of the screened Coulomb interaction (W). Recent developments have shown that eigenpotentials of DMs can be efficiently calculated without any explicit evaluation of empty states. In this work, we will present a computationally efficient approach to the calculation of GW spectra by combining a representation of DMs in terms of their eigenpotentials and a recently developed iterative algorithm. As a demonstration of the efficiency of the method, we will present calculations of the vertical ionization potentials of several systems. Work was funded by SciDAC-e grant DE-FC02-06ER25777.

  17. Optimization and Control of Burning Plasmas Through High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pankin, Alexei

    This project has revived the FACETS code, which was developed under SciDAC funding in 2008-2012. The code had been dormant for a number of years after the SciDAC funding stopped. FACETS depends on external packages. The external packages and libraries such as PETSc, FFTW, HDF5 and NETCDF that are included in FACETS have evolved during these years. Some packages in FACETS are also parts of other codes such as PlasmaState, NUBEAM, GACODES, and UEDGE. These packages have also evolved together with their host codes, which include TRANSP, TGYRO and XPTOR. Finally, there is also a set of packages in FACETS that are being developed and maintained by Tech-X. These packages include BILDER, SciMake, and FcioWrappers. Many of these packages evolved significantly during the last several years, and FACETS had to be updated to synchronize with the recent progress in the external packages. The PI has introduced new changes to the BILDER package to support the updated interfaces to the external modules. During the last year of the project, the FACETS version of the UEDGE code was extracted from FACETS as a standalone package. The PI collaborates with scientists from LLNL on the updated UEDGE model in FACETS. Drs. T. Rognlien, M. Umansky and A. Dimits from LLNL are contributing to this task.

  18. Accelerating large scale Kohn-Sham density functional theory calculations with semi-local functionals and hybrid functionals

    NASA Astrophysics Data System (ADS)

    Lin, Lin

    The computational cost of standard Kohn-Sham density functional theory (KSDFT) calculations scales cubically with respect to the system size, which limits its use in large-scale applications. In recent years, we have developed an alternative procedure called the pole expansion and selected inversion (PEXSI) method. The PEXSI method solves KSDFT without computing any eigenvalues or eigenvectors, and directly evaluates physical quantities including electron density, energy, atomic force, density of states, and local density of states. The overall algorithm scales at most quadratically for all materials, including insulators, semiconductors and the difficult metallic systems. The PEXSI method can be efficiently parallelized over 10,000 - 100,000 processors on high performance machines. The PEXSI method has been integrated into a number of community electronic structure software packages such as ATK, BigDFT, CP2K, DGDFT, FHI-aims and SIESTA, and has been used in a number of applications with 2D materials beyond 10,000 atoms. The PEXSI method works for LDA, GGA and meta-GGA functionals. The mathematical structure of hybrid functional KSDFT calculations is significantly different. I will also discuss recent progress on using the adaptive compressed exchange method for accelerating hybrid functional calculations. Supported by the DOE SciDAC Program, DOE CAMERA Program, LBNL LDRD, and a Sloan Fellowship.
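
    The pole-expansion idea at the heart of PEXSI can be sketched on a dense toy problem: build the finite-temperature density matrix f(H) from resolvents (H - z_l I)^{-1} evaluated at complex poles, without forming eigenvectors. The Python sketch below uses the plain (slowly converging) Matsubara poles of the Fermi-Dirac function; PEXSI itself uses far fewer, optimally placed poles together with selected inversion of sparse matrices, and the Hamiltonian, temperature, and chemical potential here are invented.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 50
      H = rng.normal(size=(n, n)); H = (H + H.T) / 2   # toy dense Hamiltonian
      mu, beta, n_poles = 0.0, 4.0, 400

      # f(H) = I/2 - (2/beta) * sum_k Re[(H - (mu + i*omega_k) I)^{-1}],
      # with Matsubara frequencies omega_k = (2k + 1) pi / beta.
      P = 0.5 * np.eye(n)
      for k in range(n_poles):
          omega = (2 * k + 1) * np.pi / beta
          P -= (2.0 / beta) * np.linalg.inv(H - (mu + 1j * omega) * np.eye(n)).real

      # Reference: exact Fermi-Dirac matrix function via diagonalization.
      evals, V = np.linalg.eigh(H)
      P_exact = (V * (1.0 / (np.exp(beta * (evals - mu)) + 1.0))) @ V.T
      # Deviation shrinks as n_poles grows.
      print("max deviation:", np.max(np.abs(P - P_exact)))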

  19. High-Performance I/O: HDF5 for Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of the state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance-computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.

  20. The open science grid

    NASA Astrophysics Data System (ADS)

    Pordes, Ruth; OSG Consortium; Petravick, Don; Kramer, Bill; Olson, Doug; Livny, Miron; Roy, Alain; Avery, Paul; Blackburn, Kent; Wenaus, Torre; Würthwein, Frank; Foster, Ian; Gardner, Rob; Wilde, Mike; Blatecky, Alan; McGee, John; Quick, Rob

    2007-07-01

    The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org.

  1. Gyrokinetic micro-turbulence simulations on the NERSC 16-way SMP IBM SP computer: experiences and performance results

    NASA Astrophysics Data System (ADS)

    Ethier, Stephane; Lin, Zhihong

    2001-10-01

    Earlier this year, the National Energy Research Scientific Computing Center (NERSC) took delivery of the second most powerful computer in the world. With its 2,528 processors running at a peak performance of 1.5 GFlops each, this IBM SP machine has a theoretical peak performance of almost 3.8 TFlops. To efficiently harness such computing power in one single code is not an easy task and requires a good knowledge of the computer's architecture. Here we present the steps that we followed to improve our gyrokinetic micro-turbulence code GTC in order to take advantage of the new 16-way shared memory nodes of the NERSC IBM SP. Performance results are shown, as well as details about the improved mixed-mode MPI-OpenMP model that we use. The enhancements to the code allowed us to tackle much bigger problem sizes, getting closer to our goal of simulating an ITER-size tokamak with both kinetic ions and electrons. (This work is supported by DOE Contract No. DE-AC02-76CH03073 (PPPL), and in part by the DOE Fusion SciDAC Project.)

  2. Final Technical Report - SciDAC Cooperative Agreement: Center for Wave Interactions with Magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnack, Dalton D.

    Final technical report for research performed by Dr. Thomas G. Jenkins in collaboration with Professor Dalton D. Schnack on SciDAC Cooperative Agreement: Center for Wave Interactions with Magnetohydrodynamics, DE-FC02-06ER54899, for the period 8/15/06 - 8/14/11. This report centers on the Slow MHD physics campaign work performed by Dr. Jenkins while at UW-Madison and then at Tech-X Corporation. To make progress on the problem of how RF-induced currents affect magnetic island evolution in toroidal plasmas, a set of research approaches is outlined. Three approaches can be pursued in parallel: (1) analytically prescribe an additional term in Ohm's law to model the effect of localized ECCD current drive; (2) introduce an additional evolution equation for the Ohm's law source term, establishing an RF source 'box' through which information from the RF code couples to the fluid evolution; and (3) carry out a more rigorous analytic calculation treating the additional RF terms in a closure problem. These approaches all require reinvigorating the computational modeling of resistive and neoclassical tearing modes with present-day versions of the numerical tools. For the RF community, the relevant action item is that RF ray-tracing codes need to be modified so that general three-dimensional spatial information can be obtained. Further work is also needed on the interface between the two codes, as well as an assessment of the numerical stability properties of the procedures to be used.

  3. Final Report: SciDAC Computational Astrophysics Consortium (at Princeton University)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrows, Adam

    Supernova explosions are the central events in astrophysics. They are the major agencies of change in the interstellar medium, driving star formation and the evolution of galaxies. Their gas remnants are the birthplaces of the cosmic rays. Such is their brightness that they can be used as standard candles to measure the size and geometry of the universe and their investigation draws on particle and nuclear physics, radiative transfer, kinetic theory, gravitational physics, thermodynamics, and the numerical arts. Hence, supernovae are unrivaled astrophysical laboratories. We will develop new state-of-the-art multi-dimensional radiation hydrodynamic codes to address this and other related astrophysical phenomena.

  4. High-Performance I/O: HDF5 for Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav

    2015-01-01

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC-supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.

  5. High-Performance I/O: HDF5 for Lattice QCD

    DOE PAGES

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav; ...

    2017-05-09

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC-supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.
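
    The USQCD framework described in the records above is C/C++ on top of the HDF5 library; as a hedged analogue of the parallel-write-plus-chunking pattern it describes, here is a minimal h5py sketch. It assumes h5py built against an MPI-enabled HDF5; the dataset name and toy lattice shape are hypothetical.

```python
# Minimal parallel HDF5 write with explicit chunking (illustrative only).
import numpy as np
from mpi4py import MPI
import h5py

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

NT, NX = 16, 32                           # toy lattice extents
with h5py.File("gauge.h5", "w", driver="mpio", comm=comm) as f:
    # Chunk layout (here: one time slice per chunk) is the tuning knob the
    # abstract cites for further performance gains.
    dset = f.create_dataset("links", shape=(NT, NX), dtype="f8",
                            chunks=(1, NX))
    # Each rank writes a disjoint hyperslab of time slices.
    lo = rank * NT // size
    hi = (rank + 1) * NT // size
    dset[lo:hi, :] = np.full((hi - lo, NX), float(rank))
```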

  6. Data management and analysis for the Earth System Grid

    NASA Astrophysics Data System (ADS)

    Williams, D. N.; Ananthakrishnan, R.; Bernholdt, D. E.; Bharathi, S.; Brown, D.; Chen, M.; Chervenak, A. L.; Cinquini, L.; Drach, R.; Foster, I. T.; Fox, P.; Hankin, S.; Henson, V. E.; Jones, P.; Middleton, D. E.; Schwidder, J.; Schweitzer, R.; Schuler, R.; Shoshani, A.; Siebenlist, F.; Sim, A.; Strand, W. G.; Wilhelmi, N.; Su, M.

    2008-07-01

    The international climate community is expected to generate hundreds of petabytes of simulation data within the next five to seven years. This data must be accessed and analyzed by thousands of analysts worldwide in order to provide accurate and timely estimates of the likely impact of climate change on physical, biological, and human systems. Climate change is thus not only a scientific challenge of the first order but also a major technological challenge. In order to address this technological challenge, the Earth System Grid Center for Enabling Technologies (ESG-CET) has been established within the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC)-2 program, with support from the offices of Advanced Scientific Computing Research and Biological and Environmental Research. ESG-CET's mission is to provide climate researchers worldwide with access to the data, information, models, analysis tools, and computational capabilities required to make sense of enormous climate simulation datasets. Its specific goals are to (1) make data more useful to climate researchers by developing Grid technology that enhances data usability; (2) meet specific distributed database, data access, and data movement needs of national and international climate projects; (3) provide a universal and secure web-based data access portal for broad multi-model data collections; and (4) provide a wide range of Grid-enabled climate data analysis tools and diagnostic methods to international climate centers and U.S. government agencies. Building on the successes of the previous Earth System Grid (ESG) project, which has enabled thousands of researchers to access tens of terabytes of data from a small number of ESG sites, ESG-CET is working to integrate a far larger number of distributed data providers, high-bandwidth wide-area networks, and remote computers in a highly collaborative problem-solving environment.

  7. Towards prediction of correlated material properties using quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Wagner, Lucas

    Correlated electron systems offer a richness of physics far beyond noninteracting systems. If we would like to pursue the dream of designer correlated materials, or, to set a more modest goal, to explain in detail the properties and effective physics of known materials, then accurate simulation methods are required. Using modern computational resources, quantum Monte Carlo (QMC) techniques offer a way to directly simulate electron correlations. I will show some recent results on a few extremely challenging materials including the metal-insulator transition of VO2, the ground state of the doped cuprates, and the pressure dependence of magnetic properties in FeSe. By using a relatively simple implementation of QMC, at least some properties of these materials can be described truly from first principles, without any adjustable parameters. Using the QMC platform, we have developed a way of systematically deriving effective lattice models from the simulation. This procedure is particularly attractive for correlated electron systems because the QMC methods treat the one-body and many-body components of the wave function and Hamiltonian on completely equal footing. I will show some examples of using this downfolding technique and the high accuracy of QMC to connect our intuitive ideas about interacting electron systems with high fidelity simulations. The work in this presentation was supported in part by NSF DMR 1206242, the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award Number FG02-12ER46875, and the Center for Emergent Superconductivity, Department of Energy Frontier Research Center under Grant No. DEAC0298CH1088. Computing resources were provided by a Blue Waters Illinois grant and INCITE PhotSuper and SuperMatSim allocations.

  8. The QCDOC Project

    NASA Astrophysics Data System (ADS)

    Boyle, P.; Chen, D.; Christ, N.; Clark, M.; Cohen, S.; Cristian, C.; Dong, Z.; Gara, A.; Joo, B.; Jung, C.; Kim, C.; Levkova, L.; Liao, X.; Liu, G.; Li, S.; Lin, H.; Mawhinney, R.; Ohta, S.; Petrov, K.; Wettig, T.; Yamaguchi, A.

    2005-03-01

    The QCDOC project has developed a supercomputer optimised for the needs of Lattice QCD simulations. It provides a very competitive price-to-performance ratio of around $1 USD per sustained Megaflop/s, in combination with outstanding scalability. Thus very large systems delivering over 5 TFlop/s of sustained performance on the evolution of a single lattice are possible. Large prototypes have been built and are functioning correctly. The software environment raises the state of the art in such custom supercomputers. It is based on a lean custom node operating system that eliminates many unnecessary overheads that plague other systems. Despite the custom nature, the operating system implements a standards-compliant UNIX-like programming environment, easing the porting of software from other systems. The SciDAC QMP interface adds internode communication in a fashion that provides a uniform cross-platform programming environment.

  9. ISCR Annual Report: Fiscal Year 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGraw, J R

    2005-03-03

    Large-scale scientific computation and all of the disciplines that support and help to validate it have been placed at the focus of Lawrence Livermore National Laboratory (LLNL) by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA) and the Scientific Discovery through Advanced Computing (SciDAC) initiative of the Office of Science of the Department of Energy (DOE). The maturation of computational simulation as a tool of scientific and engineering research is underscored in the November 2004 statement of the Secretary of Energy that ''high performance computing is the backbone of the nation's science and technology enterprise''. LLNL operates several of the world's most powerful computers--including today's single most powerful--and has undertaken some of the largest and most compute-intensive simulations ever performed. Ultrascale simulation has been identified as one of the highest priorities in DOE's facilities planning for the next two decades. However, computers at architectural extremes are notoriously difficult to use efficiently. Furthermore, each successful terascale simulation only points out the need for much better ways of interacting with the resulting avalanche of data. Advances in scientific computing research have, therefore, never been more vital to LLNL's core missions than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, LLNL must engage researchers at many academic centers of excellence. In Fiscal Year 2004, the Institute for Scientific Computing Research (ISCR) served as one of LLNL's main bridges to the academic community with a program of collaborative subcontracts, visiting faculty, student internships, workshops, and an active seminar series. The ISCR identifies researchers from the academic community for computer science and computational science collaborations with LLNL and hosts them for short- and long-term visits with the aim of encouraging long-term academic research agendas that address LLNL's research priorities. Through such collaborations, ideas and software flow in both directions, and LLNL cultivates its future workforce. The Institute strives to be LLNL's ''eyes and ears'' in the computer and information sciences, keeping the Laboratory aware of and connected to important external advances. It also attempts to be the ''feet and hands'' that carry those advances into the Laboratory and incorporate them into practice. ISCR research participants are integrated into LLNL's Computing and Applied Research (CAR) Department, especially into its Center for Applied Scientific Computing (CASC). In turn, these organizations address computational challenges arising throughout the rest of the Laboratory. Administratively, the ISCR flourishes under LLNL's University Relations Program (URP). Together with the other five institutes of the URP, it navigates a course that allows LLNL to benefit from academic exchanges while preserving national security. While it is difficult to operate an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and worth the continued effort.

  10. Toward a first-principles integrated simulation of tokamak edge plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C S; Klasky, Scott A; Cummings, Julian

    2008-01-01

    Performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP and NIMROD for the study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.

  11. GITR Simulation of Helium Exposed Tungsten Erosion and Redistribution in PISCES-A

    NASA Astrophysics Data System (ADS)

    Younkin, T. R.; Green, D. L.; Doerner, R. P.; Nishijima, D.; Drobny, J.; Canik, J. M.; Wirth, B. D.

    2017-10-01

    The extreme heat, charged particle, and neutron flux/fluence to plasma-facing materials in magnetically confined fusion devices has motivated research to understand, predict, and mitigate the associated detrimental effects. Of relevance to the ITER divertor is the helium interaction with the tungsten divertor and the resulting erosion and migration of impurities. The linear plasma device PISCES-A has performed dedicated experiments on 250 eV He-exposed tungsten targets at high (4×10^22 m^-2 s^-1) and low (4×10^21 m^-2 s^-1) flux to assess the net and gross erosion of tungsten and the volumetric transport. The temperature of the target was held between 400 and 600 degrees C. We present results on the erosion/migration/re-deposition of W during the experiment from the GITR (Global Impurity Transport) code coupled to materials response models. In particular, the modeled and experimental W I emission spectroscopy data for the 429.4 nm wavelength and the net erosion obtained through target and collector mass difference measurements are compared. Overall, the predictions are in good agreement with experiments. This work is supported by the US DOE, Office of Science, Office of Fusion Energy Sciences and Office of Advanced Scientific Computing Research through the SciDAC program on Plasma-Surface Interactions.

  12. Continuum Gyrokinetic Simulations of Turbulence in a Helical Model SOL with NSTX-type parameters

    NASA Astrophysics Data System (ADS)

    Hammett, G. W.; Shi, E. L.; Hakim, A.; Stoltzfus-Dueck, T.

    2017-10-01

    We have developed the Gkeyll code to carry out 3D2V full-F gyrokinetic simulations of electrostatic plasma turbulence in open-field-line geometries, using special versions of discontinuous-Galerkin algorithms to help with the computational challenges of the edge region. (Higher-order algorithms can also be helpful for exascale computing as they reduce the ratio of communications to computations.) Our first simulations with straight field lines were done for LAPD-type cases. Here we extend this to a helical model of an SOL plasma and show results for NSTX-type parameters. These simulations include the basic elements of a scrape-off layer: bad-curvature/interchange drive of instabilities, narrow sources to model plasma leaking from the core, and parallel losses with model sheath boundary conditions (our model allows currents to flow in and out of the walls). The formation of blobs is observed. By reducing the strength of the poloidal magnetic field, the heat flux at the divertor plate is observed to broaden. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.

  13. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushizima, Daniela; Bethel, Wes

    Quant-CT is currently a plugin to ImageJ, designed as a Java class that provides mechanisms for the user to choose volumes of interest within porous material, select image subsamples for automated tuning of filter and classifier parameters, and finally measure material geometry and porosity and visualize the results. Denoising is mandatory before any image interpretation, and we implemented new 3D Java code that performs bilateral filtering of the data. Segmentation of the dense material is essential before any quantification of the geological sample structure, and we developed new schemes to deal with over-segmentation when using the statistical region merging algorithm to pull out the grains that compose the imaged material. The plugin makes use of the ImageJ API and other standard and third-party APIs. Quant-CT development started in 2011 under SciDAC-e sponsorship, and details of the first prototype were documented in the publications below. While it is currently used for microtomography images, it can potentially be used by anybody with 3D image data obtained by experiment or produced by simulation.
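
    Quant-CT's bilateral filter is Java/ImageJ code; as a rough illustration of what a 3D bilateral filter computes, here is a minimal numpy sketch. It uses a naive neighborhood loop with periodic boundaries via np.roll, and all parameter values are illustrative.

```python
import numpy as np

def bilateral3d(vol, radius=1, sigma_s=1.0, sigma_r=0.1):
    """Naive 3D bilateral filter: each voxel becomes a weighted mean of its
    neighborhood, with weights decaying in both spatial distance (sigma_s)
    and intensity difference (sigma_r), so edges between grain and pore are
    preserved while noise is smoothed. Boundaries wrap (np.roll)."""
    out = np.zeros_like(vol)
    norm = np.zeros_like(vol)
    offsets = range(-radius, radius + 1)
    for dz in offsets:
        for dy in offsets:
            for dx in offsets:
                shifted = np.roll(vol, (dz, dy, dx), axis=(0, 1, 2))
                w_s = np.exp(-(dz*dz + dy*dy + dx*dx) / (2 * sigma_s**2))
                w_r = np.exp(-(shifted - vol)**2 / (2 * sigma_r**2))
                out += w_s * w_r * shifted
                norm += w_s * w_r
    return out / norm

noisy = np.random.default_rng(0).normal(0.5, 0.05, (32, 32, 32))
smooth = bilateral3d(noisy, radius=1, sigma_s=1.0, sigma_r=0.1)
```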

  15. GYROKINETIC PARTICLE SIMULATION OF TURBULENT TRANSPORT IN BURNING PLASMAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horton, Claude Wendell

    2014-06-10

    The SciDAC project at the IFS advanced the state of high-performance computing for turbulent structures and turbulent transport. The team project with Prof. Zhihong Lin [PI] at the University of California, Irvine produced new understanding of turbulent electron transport. The simulations were performed at the Texas Advanced Computing Center (TACC) and the NERSC facility by Wendell Horton, Lee Leonard and the IFS graduate students working in that group. The research included a validation of the electron turbulent transport code using data from a steady-state experiment at Columbia University, in which detailed probe measurements of the turbulence in steady state were made over a wide range of temperature gradients for comparison with the simulation data. These results were published in a joint paper with Texas graduate student Dr. Xiangrong Fu using the work in his PhD dissertation: X.R. Fu, W. Horton, Y. Xiao, Z. Lin, A.K. Sen and V. Sokolov, "Validation of electron temperature gradient turbulence in the Columbia Linear Machine," Phys. Plasmas 19, 032303 (2012).

  16. Modeling Complex Biological Flows in Multi-Scale Systems using the APDEC Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trebotich, D

    We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA-laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.

  17. Modeling complex biological flows in multi-scale systems using the APDEC framework

    NASA Astrophysics Data System (ADS)

    Trebotich, David

    2006-09-01

    We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA-laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.

  18. ISCR FY2005 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyes, D E; McGraw, J R

    2006-02-02

    Large-scale scientific computation and all of the disciplines that support and help validate it have been placed at the focus of Lawrence Livermore National Laboratory (LLNL) by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA) and the Scientific Discovery through Advanced Computing (SciDAC) initiative of the Office of Science of the Department of Energy (DOE). The maturation of simulation as a fundamental tool of scientific and engineering research is underscored in the President's Information Technology Advisory Committee (PITAC) June 2005 finding that ''computational science has become critical to scientific leadership, economic competitiveness, and national security''. LLNL operates several of the world's most powerful computers--including today's single most powerful--and has undertaken some of the largest and most compute-intensive simulations ever performed, most notably the molecular dynamics simulation that sustained more than 100 Teraflop/s and won the 2005 Gordon Bell Prize. Ultrascale simulation has been identified as one of the highest priorities in DOE's facilities planning for the next two decades. However, computers at architectural extremes are notoriously difficult to use in an efficient manner. Furthermore, each successful terascale simulation only points out the need for much better ways of interacting with the resulting avalanche of data. Advances in scientific computing research have, therefore, never been more vital to the core missions of LLNL than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, LLNL must engage researchers at many academic centers of excellence. In FY 2005, the Institute for Scientific Computing Research (ISCR) served as one of LLNL's main bridges to the academic community with a program of collaborative subcontracts, visiting faculty, student internships, workshops, and an active seminar series. The ISCR identifies researchers from the academic community for computer science and computational science collaborations with LLNL and hosts them for both brief and extended visits with the aim of encouraging long-term academic research agendas that address LLNL research priorities. Through these collaborations, ideas and software flow in both directions, and LLNL cultivates its future workforce. The Institute strives to be LLNL's ''eyes and ears'' in the computer and information sciences, keeping the Laboratory aware of and connected to important external advances. It also attempts to be the ''hands and feet'' that carry those advances into the Laboratory and incorporate them into practice. ISCR research participants are integrated into LLNL's Computing Applications and Research (CAR) Department, especially into its Center for Applied Scientific Computing (CASC). In turn, these organizations address computational challenges arising throughout the rest of the Laboratory. Administratively, the ISCR flourishes under LLNL's University Relations Program (URP). Together with the other four institutes of the URP, the ISCR navigates a course that allows LLNL to benefit from academic exchanges while preserving national security. While it is difficult to operate an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and worth the continued effort.
The pages of this annual report summarize the activities of the faculty members, postdoctoral researchers, students, and guests from industry and other laboratories who participated in LLNL's computational mission under the auspices of the ISCR during FY 2005.

  19. Finding Effective Models in Transition Metals using Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Williams, Kiel; Wagner, Lucas K.

    There is a gap between high-accuracy ab-initio calculations, like those produced from Quantum Monte Carlo (QMC), and effective lattice models such as the Hubbard model. We have developed a method that combines data produced from QMC with fitting techniques taken from data science, allowing us to determine which degrees of freedom are required to connect ab-initio and model calculations. We test this approach for transition metal atoms, where spectroscopic reference data exists. We report on the accuracy of several derived effective models that include different degrees of freedom, and comment on the quality of the parameter values we obtain from our fitting procedure. We gratefully acknowledge funding from the National Science Foundation Graduate Research Fellowship Program under Grant Number DGE-1144245 (K.T.W.) and from SciDAC Grant DE-FG02-12ER46875 (L.K.W.).
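
    As a hedged sketch of the fitting idea described above (not the authors' actual procedure), one can regress ab-initio total energies against model descriptors by least squares and use the residual to judge whether the chosen degrees of freedom suffice. All numbers below are synthetic stand-ins for QMC outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for QMC data on M sampled wave functions: total
# energy E and model descriptors (hopping estimator T, double occupancy D).
M = 50
T = rng.normal(-1.0, 0.2, M)
D = rng.uniform(0.0, 0.3, M)
E = 0.7 * T + 4.0 * D + 2.5 + rng.normal(0.0, 0.01, M)  # "hidden" model + noise

# Fit E ~= t*T + U*D + E0 by least squares; a small residual indicates these
# degrees of freedom are enough to describe the ab-initio data.
A = np.column_stack([T, D, np.ones(M)])
(t, U, E0), res, _, _ = np.linalg.lstsq(A, E, rcond=None)
print(f"t={t:.3f}  U={U:.3f}  E0={E0:.3f}  "
      f"rms residual={np.sqrt(res[0] / M):.4f}")
```

    Dropping a descriptor (say, refitting with T alone) and watching the residual grow is the kind of model-selection test the abstract alludes to.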

  20. Scientific Discovery through Advanced Computing in Plasma Science

    NASA Astrophysics Data System (ADS)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in supercomputing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPPs). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to the computational science area.

  1. Object Kinetic Monte Carlo Simulations of Radiation Damage In Bulk Tungsten

    NASA Astrophysics Data System (ADS)

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard; Roche, Kenneth; Kurtz, Richard; Wirth, Brian

    2015-11-01

    Results are presented for the evolution of radiation damage in bulk tungsten, investigated using the object KMC simulation tool KSOME as a function of dose, dose rate and primary knock-on atom (PKA) energies in the range of 10 to 100 keV, at temperatures of 300, 1025 and 2050 K. At 300 K, the number density of vacancies changes minimally with dose rate, while the number density of vacancy clusters slightly decreases with dose rate, indicating that larger clusters are formed at higher dose rates. Although the average vacancy cluster size increases slightly, the vast majority exist as mono-vacancies. At 1025 K, void lattice formation was observed at all dose rates for cascades below 60 keV, and at lower dose rates for higher PKA energies. After the appearance of the initial features of the void lattice, the vacancy cluster density increased minimally while the average vacancy cluster size increased rapidly with dose. At 2050 K, no accumulation of defects was observed over a broad range of dose rates for all PKA energies studied in this work. Further comparisons of results of irradiation simulations at various dose rates and PKA spectra, representative of the High Flux Isotope Reactor and future fusion-relevant irradiation facilities, will be discussed. The U.S. Department of Energy, Office of Fusion Energy Sciences (FES) and Office of Advanced Scientific Computing Research (ASCR) has supported this study through the SciDAC-3 program.
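
    KSOME's internals are not spelled out in the abstract; the sketch below shows the generic residence-time (BKL) kinetic Monte Carlo step that object-KMC codes of this kind build on: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. The rates are purely illustrative.

```python
import numpy as np

def kmc_step(rates, t, rng):
    """One residence-time (BKL) KMC step: select event i with probability
    rates[i] / sum(rates), advance time by -ln(u) / sum(rates)."""
    total = rates.sum()
    i = np.searchsorted(np.cumsum(rates), rng.random() * total)
    dt = -np.log(rng.random()) / total
    return i, t + dt

# Toy example: three thermally activated hop rates (arbitrary units),
# standing in for e.g. mono-vacancy and cluster migration events.
rng = np.random.default_rng(42)
rates = np.array([1.0, 0.1, 5.0])
t = 0.0
for _ in range(5):
    event, t = kmc_step(rates, t, rng)
    print(f"event {event} fired, t = {t:.3f}")
```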

  2. Recent advances in the modeling of plasmas with the Particle-In-Cell methods

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Lehe, Remi; Vincenti, Henri; Godfrey, Brendan; Lee, Patrick; Haber, Irv

    2015-11-01

    The Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations of plasmas from first principles. The fundamentals of the PIC method were established decades ago but improvements or variations are continuously being proposed. We report on several recent advances in PIC-related algorithms, including: (a) detailed analysis of the numerical Cherenkov instability and its remediation, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, (c) arbitrary-order finite-difference and generalized pseudo-spectral Maxwell solvers, (d) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of Perfectly Matched Layers in high-order and pseudo-spectral solvers. Work supported by US-DOE Contract DE-AC02-05CH11231 and the US-DOE SciDAC program ComPASS. Used resources of NERSC, supported by US-DOE Contract DE-AC02-05CH11231.
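
    Among the advances listed above are pseudo-spectral field solvers. As a minimal illustration of the idea (not the full pseudo-spectral Maxwell update those papers describe), the sketch below solves the 1D periodic Gauss's-law problem dE/dx = rho by dividing by ik in Fourier space; the grid and charge density are illustrative.

```python
import numpy as np

# 1D periodic grid and a test charge density with two Fourier modes.
N, L = 128, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
rho = np.cos(x) + 0.5 * np.cos(3 * x)

# Spectral solve of dE/dx = rho (eps0 = 1): E_k = rho_k / (i k), k != 0.
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
rho_k = np.fft.fft(rho)
E_k = np.zeros_like(rho_k)
nz = k != 0
E_k[nz] = rho_k[nz] / (1j * k[nz])
E = np.fft.ifft(E_k).real

# Check against the analytic field sin(x) + (0.5/3) sin(3x).
assert np.allclose(E, np.sin(x) + 0.5 / 3 * np.sin(3 * x))
```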

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ucilia

    This report has the following articles: (1) Deconstructing Microbes--metagenomic research on bugs in termites relies on new data analysis tools; (2) Popular Science--a nanomaterial research paper in Nano Letters drew strong interest from the scientific community; (3) Direct Approach--researchers employ an algorithm to solve an energy-reduction issue essential in describing complex physical system; and (4) SciDAC Special--A science journal features research on petascale enabling technologies.

  4. Knowledge-Based Parallel Performance Technology for Scientific Application Competitiveness Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malony, Allen D; Shende, Sameer

    The primary goal of the University of Oregon's DOE "competitiveness" project was to create performance technology that embodies and supports knowledge of performance data, analysis, and diagnosis in parallel performance problem solving. The target of our development activities was the TAU Performance System, and the technology accomplishments reported in this and prior reports have all been incorporated in the TAU open software distribution. In addition, the project has been committed to maintaining strong interactions with the DOE SciDAC Performance Engineering Research Institute (PERI) and Center for Technology for Advanced Scientific Component Software (TASCS). This collaboration has proved valuable for translation of our knowledge-based performance techniques to parallel application development and performance engineering practice. Our outreach has also extended to the DOE Advanced CompuTational Software (ACTS) collection and project. Throughout the project we have participated in the PERI and TASCS meetings, as well as the ACTS annual workshops.

  5. Advances in continuum kinetic and gyrokinetic simulations of turbulence on open-field line geometries

    NASA Astrophysics Data System (ADS)

    Hakim, Ammar; Shi, Eric; Juno, James; Bernard, Tess; Hammett, Greg

    2017-10-01

    For weakly collisional (or collisionless) plasmas, kinetic effects are required to capture the physics of micro-turbulence. We have implemented solvers for kinetic and gyrokinetic equations in the computational plasma physics framework, Gkeyll. We use a version of the discontinuous Galerkin scheme that conserves energy exactly. Plasma sheaths are modeled with novel boundary conditions. Positivity of distribution functions is maintained via a reconstruction method, allowing robust simulations that continue to conserve energy even with positivity limiters. We have performed a large number of benchmarks, verifying the accuracy and robustness of our code. We demonstrate the application of our algorithm to two classes of problems (a) Vlasov-Maxwell simulations of turbulence in a magnetized plasma, applicable to space plasmas; (b) Gyrokinetic simulations of turbulence in open-field-line geometries, applicable to laboratory plasmas. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.

  6. Grid Computing and Collaboration Technology in Support of Fusion Energy Sciences

    NASA Astrophysics Data System (ADS)

    Schissel, D. P.

    2004-11-01

    The SciDAC Initiative is creating a computational grid designed to advance scientific understanding in fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as easy-to-use, network-available services. Access to services is stressed rather than portability. Services share the same basic security infrastructure so that stakeholders can control their own resources, which helps ensure fair use of resources. The collaborative control room is being developed using the open-source Access Grid software that enables secure group-to-group collaboration with capabilities beyond teleconferencing, including application sharing and control. The ability to effectively integrate off-site scientists into a dynamic control room will be critical to the success of future international projects like ITER. Grid computing, the secure integration of computer systems over high-speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. The first grid computational service deployed was the transport code TRANSP and included tools for run preparation, submission, monitoring and management. This approach saves user sites from the laborious effort of maintaining a complex code while at the same time reducing the burden on developers by avoiding the support of a large number of heterogeneous installations. This tutorial will present the philosophy behind an advanced collaborative environment, give specific examples, and discuss its usage beyond FES.

  7. Center for Extended Magnetohydrodynamic Modeling Cooperative Agreement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carl R. Sovinec

    The Center for Extended Magnetohydrodynamic Modeling (CEMM) is developing computer simulation models for predicting the behavior of magnetically confined plasmas. Over the first phase of support from the Department of Energy’s Scientific Discovery through Advanced Computing (SciDAC) initiative, the focus has been on macroscopic dynamics that alter the confinement properties of magnetic field configurations. The ultimate objective is to provide computational capabilities to predict plasma behavior—not unlike computational weather prediction—to optimize performance and to increase the reliability of magnetic confinement for fusion energy. Numerical modeling aids theoretical research by solving complicated mathematical models of plasma behavior including strong nonlinear effects and the influences of geometrical shaping of actual experiments. The numerical modeling itself remains an area of active research, due to challenges associated with simulating multiple temporal and spatial scales. The research summarized in this report spans computational and physical topics associated with state of the art simulation of magnetized plasmas. The tasks performed for this grant are categorized according to whether they are primarily computational, algorithmic, or application-oriented in nature. All involve the development and use of the Non-Ideal Magnetohydrodynamics with Rotation, Open Discussion (NIMROD) code, which is described at http://nimrodteam.org. With respect to computation, we have tested and refined methods for solving the large algebraic systems of equations that result from our numerical approximations of the physical model. Collaboration with the Terascale Optimal PDE Solvers (TOPS) SciDAC center led us to the SuperLU_DIST software library [http://crd.lbl.gov/~xiaoye/SuperLU/] for solving large sparse matrices using direct methods on parallel computers. Switching to this solver library boosted NIMROD’s performance by a factor of five in typical large nonlinear simulations, which has been publicized as a success story of SciDAC-fostered collaboration. Furthermore, the SuperLU software does not assume any mathematical symmetry, and its generality provides an important capability for extending the physical model beyond magnetohydrodynamics (MHD). With respect to algorithmic and model development, our most significant accomplishment is the development of a new method for solving plasma models that treat electrons as an independent plasma component. These ‘two-fluid’ models encompass MHD and add temporal and spatial scales that are beyond the response of the ion species. Implementation and testing of a previously published algorithm did not prove successful for NIMROD, and the new algorithm has since been devised, analyzed, and implemented. Two-fluid modeling, an important objective of the original NIMROD project, is now routine in 2D applications. Algorithmic components for 3D modeling are in place and tested; though, further computational work is still needed for efficiency. Other algorithmic work extends the ion-fluid stress tensor to include models for parallel and gyroviscous stresses. In addition, our hot-particle simulation capability received important refinements that permitted completion of a benchmark with the M3D code. A highlight of our applications work is the edge-localized mode (ELM) modeling, which was part of the first-ever computational Performance Target for the DOE Office of Fusion Energy Science, see http://www.science.doe.gov/ofes/performancetargets.shtml. 
Our efforts allowed MHD simulations to progress late into the nonlinear stage, where energy is conducted to the wall location. They also produced a two-fluid ELM simulation starting from experimental information and demonstrating critical drift effects that are characteristic of two-fluid physics. Another important application is the internal kink mode in a tokamak. Here, the primary purpose of the study has been to benchmark the two main code development lines of CEMM, NIMROD and M3D, on a relevant nonlinear problem. Results from the two codes show repeating nonlinear relaxation events driven by the kink mode over quantitatively comparable timescales. The work has launched a more comprehensive nonlinear benchmarking exercise, where realistic transport effects have an important role.
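
    The SuperLU collaboration above can be illustrated with scipy, whose sparse splu factorization wraps the serial SuperLU library; the production runs used the distributed SuperLU_DIST, but the factor-once, solve-many pattern is the same. The matrix here is a toy nonsymmetric stand-in for a linearized MHD operator.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu   # scipy's splu wraps serial SuperLU

# Toy nonsymmetric tridiagonal system; note that, like the report says of
# SuperLU, no mathematical symmetry is assumed.
n = 1000
A = sp.diags(
    [2.0 + 0.1 * np.arange(n), -np.ones(n - 1), -0.8 * np.ones(n - 1)],
    [0, -1, 1], format="csc")

lu = splu(A)                       # expensive: factor the matrix once
rng = np.random.default_rng(0)
for step in range(3):              # cheap: reuse the factors every "time step"
    b = rng.random(n)
    x = lu.solve(b)
    print(step, np.linalg.norm(A @ x - b))   # residual near machine precision
```

    Reusing one factorization across many right-hand sides, distributed over processors, is the pattern behind the factor-of-five speedup the report credits to SuperLU_DIST.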

  8. Verification of nonlinear particle simulation of radio frequency waves in fusion plasmas

    NASA Astrophysics Data System (ADS)

    Kuley, Animesh; Bao, Jian; Lin, Zhihong

    2015-11-01

    A nonlinear global particle simulation model has been developed in GTC to study the nonlinear interactions of radio frequency (RF) waves with plasmas in tokamaks. In this model, ions are treated as fully kinetic particles using the Vlasov equation, and electrons are treated as guiding centers using the drift kinetic equation. A Boris push scheme for the ion motion has been implemented in toroidal geometry using magnetic coordinates and successfully verified for the ion cyclotron, ion Bernstein and lower hybrid waves. The nonlinear GTC simulation of the lower hybrid wave shows that the amplitude of the electrostatic potential is oscillatory due to the trapping of resonant electrons by the electric field of the lower hybrid wave. Nonresonant parametric decay into an ion Bernstein wave (IBW) sideband and an ion cyclotron quasimode (ICQM) is observed. The ICQM induces perpendicular ion heating, with a heating rate proportional to the pump wave intensity. This work is supported by PPPL subcontract number S013849-F and US Department of Energy (DOE) SciDAC GSEP Program.
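
    As a minimal illustration of the Boris push named above: the standard scheme splits the electric impulse into two half kicks around an exact-magnitude magnetic rotation, which is why it conserves energy in a pure magnetic field. GTC applies this in toroidal geometry with magnetic coordinates; the sketch below is only the textbook Cartesian version with illustrative parameters.

```python
import numpy as np

def boris_push(v, E, B, qm, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick.
    v, E, B are 3-vectors; qm is charge-to-mass ratio."""
    v_minus = v + 0.5 * qm * dt * E
    t = 0.5 * qm * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + 0.5 * qm * dt * E

# Gyration test: E = 0, uniform B along z; the speed must be conserved.
v = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    v = boris_push(v, np.zeros(3), B, qm=1.0, dt=0.1)
print("speed after 1000 steps:", np.linalg.norm(v))  # stays ~1.0
```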

  9. VisIt: An End-User Tool for Visualizing and Analyzing Very Large Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, Hank; Brugger, Eric; Whitlock, Brad

    2012-11-01

    VisIt is a popular open source tool for visualizing and analyzing big data. It owes its success to its foci of increasing data understanding, large data support, and providing a robust and usable product, as well as its underlying design, which fits today's supercomputing landscape. This report, which draws heavily from an earlier publication at the SciDAC Conference in 2011, describes the VisIt project and its accomplishments.

  10. Features of Discontinuous Galerkin Algorithms in Gkeyll, and Exponentially-Weighted Basis Functions

    NASA Astrophysics Data System (ADS)

    Hammett, G. W.; Hakim, A.; Shi, E. L.

    2016-10-01

    There are various versions of Discontinuous Galerkin (DG) algorithms that have interesting features that could help with challenging problems of higher-dimensional kinetic problems (such as edge turbulence in tokamaks and stellarators). We are developing the gyrokinetic code Gkeyll based on DG methods. Higher-order methods do more FLOPS to extract more information per byte, thus reducing memory and communication costs (which are a bottleneck for exascale computing). The inner product norm can be chosen to preserve energy conservation with non-polynomial basis functions (such as Maxwellian-weighted bases), which alternatively can be viewed as a Petrov-Galerkin method. This allows a full-F code to benefit from similar Gaussian quadrature employed in popular δf continuum gyrokinetic codes. We show some tests for a 1D Spitzer-Härm heat flux problem, which requires good resolution for the tail. For two velocity dimensions, this approach could lead to a factor of 10 or more speedup. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
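
    A small numerical check of the quadrature property underlying the Maxwellian-weighted basis idea: Gauss-Hermite nodes and weights integrate polynomials against the weight exp(-x^2) exactly up to degree 2n-1, so velocity moments of a Maxwellian-weighted expansion need only a few points. This is a generic illustration, not Gkeyll code.

```python
import numpy as np

# n = 4 Gauss-Hermite points are exact for polynomial degree <= 7.
x, w = np.polynomial.hermite.hermgauss(4)

for p, exact in [(0, np.sqrt(np.pi)),         # density-like moment
                 (2, np.sqrt(np.pi) / 2),     # temperature-like moment
                 (4, 3 * np.sqrt(np.pi) / 4)]:
    approx = np.sum(w * x**p)
    print(f"moment x^{p}: quadrature={approx:.10f}  exact={exact:.10f}")
```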

  11. ECCD-induced tearing mode stabilization in coupled IPS/NIMROD/GENRAY HPC simulations

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas; Kruger, S. E.; Held, E. D.; Harvey, R. W.; Elwasif, W. R.; Schnack, D. D.; SWIM Project Team

    2011-10-01

    We present developments toward an integrated, predictive model for determining optimal ECCD-based NTM stabilization strategies in ITER. We demonstrate the capability of the SWIM Project's Integrated Plasma Simulator (IPS) framework to choreograph multiple executions of, and data exchanges between, physics codes modeling various spatiotemporal scales of this coupled RF/MHD problem on several thousand HPC processors. As NIMROD evolves fluid equations to model bulk plasma behavior, self-consistent propagation/deposition of RF power in the ensuing plasma profiles is calculated by GENRAY. A third code (QLCALC) then interfaces with computational geometry packages to construct the RF-induced quasilinear diffusion tensor from NIMROD/GENRAY data, and the moments of this tensor (entering as additional terms in NIMROD's fluid equations due to the disparity in RF/MHD spatiotemporal scales) influence the dynamics of current, momentum, and energy evolution. Initial results are shown to correctly capture the physics of magnetic island stabilization [Jenkins et al., PoP 17, 012502 (2010)]; we also discuss the development of a numerical plasma control system for active feedback stabilization of tearing modes. Funded by USDoE SciDAC.

  12. UCLA Final Technical Report for the "Community Petascale Project for Accelerator Science and Simulation".

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mori, Warren

    The UCLA Plasma Simulation Group is a major partner of the “Community Petascale Project for Accelerator Science and Simulation”. This is the final technical report. We include an overall summary, a list of publications, progress for the most recent year, and individual progress reports for each year. We have made tremendous progress during the three years. SciDAC funds have contributed to the development of a large number of skeleton codes that illustrate how to write PIC codes with a hierarchy of parallelism. These codes cover 2D and 3D as well as electrostatic solvers (which are used in beam dynamics codes and quasi-static codes) and electromagnetic solvers (which are used in plasma-based accelerator codes). We also used these ideas to develop a GPU-enabled version of OSIRIS. SciDAC funds also contributed to the development of strategies to eliminate the Numerical Cerenkov Instability (NCI), which is an issue when carrying out laser wakefield accelerator (LWFA) simulations in a boosted frame and when quantifying the emittance and energy spread of self-injected electron beams. This work included the development of a new code called UPIC-EMMA, an FFT-based electromagnetic PIC code, and of new hybrid algorithms in OSIRIS. A new hybrid (PIC in r-z and gridless in φ) algorithm was implemented in OSIRIS. In this algorithm the fields and current are expanded into azimuthal harmonics and the complex amplitude for each harmonic is calculated separately. The contributions from each harmonic are summed and then used to push the particles. This algorithm permits modeling plasma-based acceleration with some 3D effects but with the computational load of a 2D r-z PIC code. We developed a rigorously charge-conserving current deposit for this algorithm. Very recently, we made progress in combining the speedup from the quasi-3D algorithm with that from the Lorentz boosted frame. SciDAC funds also contributed to the improvement and speedup of the quasi-static PIC code QuickPIC. We have also used our suite of PIC codes to make scientific discoveries. Highlights include supporting FACET experiments that achieved the milestones of demonstrating high beam loading and energy-transfer efficiency from a drive electron beam to a witness electron beam, and the discovery of a self-loading regime for high-gradient acceleration of a positron beam. Both of these experimental milestones were published in Nature together with supporting QuickPIC simulation results. Simulation results from QuickPIC were used on the cover of Nature in one case. We are also making progress on using highly resolved QuickPIC simulations to show that ion motion may not lead to catastrophic emittance growth for tightly focused electron bunches loaded into nonlinear wakefields. This could mean that fully self-consistent beam loading scenarios are possible. This work remains in progress. OSIRIS simulations were used to discover how 200 MeV electron rings are formed in LWFA experiments, how to generate electrons that have a series of bunches on the nanometer scale, and how to transport electron beams from (into) plasma sections into (from) conventional beam optic sections.
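
    The quasi-3D idea above (expand fields in azimuthal harmonics, evolve each complex amplitude separately) can be illustrated in a few lines of numpy. The sketch below only demonstrates the decomposition and reconstruction of a low-m field on a ring, not the OSIRIS implementation; all sizes are illustrative.

```python
import numpy as np

# Sample a smooth field on a ring at fixed r. The quasi-3D premise is that a
# handful of azimuthal harmonics F_m captures it, so each m can be evolved
# as a cheap 2D (r, z) problem.
N = 64
phi = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
F = 1.0 + 0.8 * np.cos(phi) + 0.2 * np.sin(2 * phi)   # content at |m| <= 2

F_m = np.fft.fft(F) / N            # complex amplitude per harmonic m

# Keep only |m| <= 2 and reconstruct the field.
keep = np.zeros_like(F_m)
for m in (0, 1, 2, N - 1, N - 2):  # negative m live at indices N-1, N-2
    keep[m] = F_m[m]
F_rec = np.fft.ifft(keep * N).real
print("max reconstruction error:", np.abs(F_rec - F).max())  # ~1e-16
```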

  13. High-resolution coupled ice sheet-ocean modeling using the POPSICLES model

    NASA Astrophysics Data System (ADS)

    Ng, E. G.; Martin, D. F.; Asay-Davis, X.; Price, S. F.; Collins, W.

    2014-12-01

    It is expected that a primary driver of future change of the Antarctic ice sheet will be changes in submarine melting driven by incursions of warm ocean water into sub-ice shelf cavities. Correctly modeling this response on a continental scale will require high-resolution modeling of the coupled ice-ocean system. We describe the computational and modeling challenges in our simulations of the full Southern Ocean coupled to a continental-scale Antarctic ice sheet model at unprecedented spatial resolutions (0.1 degree for the ocean model and adaptive mesh refinement down to 500 m in the ice sheet model). The POPSICLES model couples the POP2x ocean model, a modified version of the Parallel Ocean Program (Smith and Gent, 2002), with the BISICLES ice-sheet model (Cornford et al., 2012) using a synchronous offline-coupling scheme. Part of the PISCEES SciDAC project and built on the Chombo framework, BISICLES makes use of adaptive mesh refinement to fully resolve dynamically-important regions like grounding lines and employs a momentum balance similar to the vertically-integrated formulation of Schoof and Hindmarsh (2009). Results of BISICLES simulations have compared favorably to comparable simulations with a Stokes momentum balance in both idealized tests like MISMIP3D (Pattyn et al., 2013) and realistic configurations (Favier et al. 2014). POP2x includes sub-ice-shelf circulation using partial top cells (Losch, 2008) and boundary layer physics following Holland and Jenkins (1999), Jenkins (2001), and Jenkins et al. (2010). Standalone POP2x output compares well with standard ice-ocean test cases (e.g., ISOMIP; Losch, 2008) and other continental-scale simulations and melt-rate observations (Kimura et al., 2013; Rignot et al., 2013). For the POPSICLES Antarctic-Southern Ocean simulations, ice sheet and ocean models communicate at one-month coupling intervals.

  14. Scidac-Data: Enabling Data Driven Modeling of Exascale Computing

    DOE PAGES

    Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; ...

    2017-11-23

    Here, the SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.

  15. Scidac-Data: Enabling Data Driven Modeling of Exascale Computing

    NASA Astrophysics Data System (ADS)

    Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; Tsaris, Aristeidis; Norman, Andrew; Lyon, Adam; Ross, Robert

    2017-10-01

    The SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.

  16. Scidac-Data: Enabling Data Driven Modeling of Exascale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo

    Here, the SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Zhaojun; Scalettar, Richard; Savrasov, Sergey

    This report summarizes the accomplishments of the University of California Davis team, which is part of a larger SciDAC collaboration that includes Mark Jarrell of Louisiana State University, Karen Tomko of the Ohio Supercomputer Center, and Eduardo F. D'Azevedo and Thomas A. Maier of Oak Ridge National Laboratory. In this report, we focus on the major UCD accomplishments. As the paper authorship list emphasizes, much of our work is the result of a tightly integrated effort; hence this compendium of UCD efforts necessarily contains some overlap with the work at our partner institutions.

  18. Scientific Data Management Center for Enabling Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vouk, Mladen A.

    Managing scientific data has been identified by the scientific community as one of the most important emerging needs because of the sheer volume and increasing complexity of data being collected. Effectively generating, managing, and analyzing this information requires a comprehensive, end-to-end approach to data management that encompasses all of the stages from the initial data acquisition to the final analysis of the data. Fortunately, the data management problems encountered by most scientific domains are common enough to be addressed through shared technology solutions. Based on community input, we have identified three significant requirements. First, more efficient access to storage systems is needed. In particular, parallel file system and I/O system improvements are needed to write and read large volumes of data without slowing a simulation, analysis, or visualization engine. These processes are complicated by the fact that scientific data are structured differently for specific application domains and are stored in specialized file formats. Second, scientists require technologies to facilitate better understanding of their data, in particular the ability to effectively perform complex data analysis and searches over extremely large data sets. Specialized feature discovery and statistical analysis techniques are needed before the data can be understood or visualized. Furthermore, interactive analysis requires techniques for efficiently selecting subsets of the data. Finally, generating the data, collecting and storing the results, keeping track of data provenance, post-processing the data, and analyzing the results is a tedious, fragmented process. Tools that automate this process in a robust, tractable, and recoverable fashion are required to enhance scientific exploration. The SDM Center was established under the SciDAC program to address these issues. The SciDAC-1 Scientific Data Management (SDM) Center succeeded in bringing an initial set of advanced data management technologies to DOE application scientists in astrophysics, climate, fusion, and biology. Equally important, it established collaborations with these scientists to better understand their science as well as their forthcoming data management and data analytics challenges. Building on our early successes, we have greatly enhanced, robustified, and deployed our technology to these communities. In some cases, we identified new needs that have been addressed in order to simplify the use of our technology by scientists. This report summarizes our work so far in SciDAC-2. Our approach is to employ an evolutionary development and deployment process: from research through prototypes to deployment and infrastructure. Accordingly, we have organized our activities in three layers that abstract the end-to-end data flow described above. We label the layers (from bottom to top): (a) Storage Efficient Access (SEA), (b) Data Mining and Analysis (DMA), and (c) Scientific Process Automation (SPA). The SEA layer sits immediately on top of hardware, operating systems, file systems, and mass storage systems, and provides parallel data access technology and transparent access to archival storage. The DMA layer, which builds on the functionality of the SEA layer, consists of indexing, feature identification, and parallel statistical analysis technology. The SPA layer, which sits on top of the DMA layer, provides the ability to compose scientific workflows from the components in the DMA layer as well as application-specific modules. NCSU work performed under this contract was primarily at the SPA layer.
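
    A schematic sketch of this three-layer organization, with class and method names invented for illustration rather than actual SDM Center components:

      class StorageEfficientAccess:                  # SEA: parallel I/O, archives
          def read(self, path):
              return [1.0, 2.0, 3.0]                 # stand-in for a parallel read

      class DataMiningAnalysis:                      # DMA: indexing, statistics
          def __init__(self, sea):
              self.sea = sea                         # DMA builds on SEA
          def mean(self, path):
              data = self.sea.read(path)
              return sum(data) / len(data)

      class ScientificProcessAutomation:             # SPA: composes workflows
          def __init__(self, dma):
              self.dma = dma
          def run_workflow(self, paths):
              return {p: self.dma.mean(p) for p in paths}

      spa = ScientificProcessAutomation(DataMiningAnalysis(StorageEfficientAccess()))
      print(spa.run_workflow(["run01.dat", "run02.dat"]))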

  19. Gyrokinetic continuum simulations of turbulence in the Texas Helimak

    NASA Astrophysics Data System (ADS)

    Bernard, T. N.; Shi, E. L.; Hammett, G. W.; Hakim, A.; Taylor, E. I.

    2017-10-01

    We have used the Gkeyll code to perform 3x-2v full-f gyrokinetic continuum simulations of electrostatic plasma turbulence in the Texas Helimak. The Helimak is an open field-line experiment with magnetic curvature and shear. It is useful for validating numerical codes due to its extensive diagnostics and simple, helical geometry, which is similar to the scrape-off layer region of tokamaks. Interchange and drift-wave modes are the main turbulence mechanisms in the device, and potential biasing is applied to study the effect of velocity shear on turbulence reduction. With Gkeyll, we varied the field-line pitch angle and simulated biased and unbiased cases to study different turbulent regimes and turbulence reduction. These are the first kinetic simulations of the Helimak, and the resulting plasma profiles agree fairly well with experimental data. This research demonstrates Gkeyll's progress towards 5D simulations of the SOL region of fusion devices. Supported by the U.S. DOE SCGSR program under contract DE-SC0014664, the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE contract DE-AC02-09CH11466.

  20. Waveguide to Core: A New Approach to RF Modelling

    NASA Astrophysics Data System (ADS)

    Wright, John; Shiraiwa, Syunichi; Rf-Scidac Team

    2017-10-01

    A new technique for the calculation of RF waves in toroidal geometry enables the simultaneous incorporation of antenna geometry, plasma facing components (PFCs), the scrape-off layer (SOL), and core propagation [Shiraiwa, NF 2017]. Calculations with this technique naturally capture wave propagation in the SOL and its interactions with non-conforming PFCs, permitting self-consistent calculation of core absorption and edge power loss. The main motivating insight is that the core plasma region, having closed flux surfaces, requires a hot-plasma dielectric, while the open-field-line region in the scrape-off layer needs only a cold-plasma dielectric. Spectral approaches work well for the former, and finite elements work well for the latter. The validity of this process follows directly from the superposition principle of Maxwell's equations, making this technique exact. The method is independent of the codes or representations used and works for any frequency regime. Applications to minority heating in Alcator C-Mod and ITER and high harmonic heating in NSTX-U will be presented in single-pass and multi-pass regimes. Support from DoE Grant Number DE-FG02-91-ER54109 (theory and computer resources) and DE-FC02-01ER54648 (RF SciDAC).

  1. SciDAC Computational Astrophysics Consortium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrows, Adam

    Supernova explosions are the central events in nuclear astrophysics. The core-collapse variety is a major source for the universe's heavy elements. The neutron stars, pulsars, and stellar-mass black holes of high-energy astrophysics are their products. Given their prodigious explosion energies, they are the major agencies of change in the interstellar medium, driving star formation and the evolution of galaxies. Their gas remnants are the birthplaces of the cosmic rays. Such is their brightness that they can be used as standard candles to measure the size and geometry of the universe. Recently, evidence has emerged that gamma-ray bursts (GRBs) originate in a small fraction of core collapses, thereby connecting two of the most energetic phenomena in the universe. However, the mechanism by which core-collapse supernovae explode has not yet been unambiguously determined. Arguably, this is one of the great unsolved problems in modern astrophysics, and its investigation draws on nuclear physics, particle physics, radiative transfer, kinetic theory, gravitational physics, thermodynamics, and the numerical arts. Hence, supernovae are unrivaled astrophysical laboratories. It is the quest for the mechanism, and new insights our team has recently had, that motivate this proposal.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold H. Kritz

    PTRANSP, which is the predictive version of the TRANSP code, was developed in a collaborative effort involving the Princeton Plasma Physics Laboratory, General Atomics Corporation, Lawrence Livermore National Laboratory, and Lehigh University. The PTRANSP/TRANSP suite of codes is the premier integrated tokamak modeling software in the United States. A production service for PTRANSP/TRANSP simulations is maintained at the Princeton Plasma Physics Laboratory; the server has a simple command line client interface and is subscribed to by about 100 researchers from tokamak projects in the US, Europe, and Asia. This service produced nearly 13,000 PTRANSP/TRANSP simulations in the four-year period FY 2005 through FY 2008. Major archives of TRANSP results are maintained at PPPL, MIT, General Atomics, and JET. Recent utilization, counting experimental analysis simulations as well as predictive simulations, more than doubled from slightly over 2000 simulations per year in FY 2005 and FY 2006 to over 4300 simulations per year in FY 2007 and FY 2008. PTRANSP predictive simulations applied to ITER increased eightfold from 30 simulations per year in FY 2005 and FY 2006 to 240 simulations per year in FY 2007 and FY 2008, accounting for more than half of combined PTRANSP/TRANSP service CPU resource utilization in FY 2008. PTRANSP studies focused on ITER have played a key role in journal articles. Examples of validation studies carried out for momentum transport in PTRANSP simulations were presented at the 2008 IAEA conference. The increase in the number of PTRANSP simulations has continued (more than 7000 TRANSP/PTRANSP simulations in 2010), and results of PTRANSP simulations appear in conference proceedings, for example the 2010 IAEA conference, and in peer-reviewed papers. PTRANSP provides a bridge to the Fusion Simulation Program (FSP) and to the future of integrated modeling. Through years of widespread usage, each of the many parts of the PTRANSP suite of codes has been thoroughly validated against experimental data and benchmarked against other codes. At the same time, architectural modernizations are improving the modularity of the PTRANSP code base. The NUBEAM neutral beam and fusion products fast ion model, the Plasma State data repository (developed originally in the SWIM SciDAC project and adapted for use in PTRANSP), and other components are already shared with the SWIM, FACETS, and CPES SciDAC FSP prototype projects. Thus, the PTRANSP code is already serving as a bridge between our present integrated modeling capability and future capability. As the Fusion Simulation Program builds toward the facility currently available in the PTRANSP suite of codes, early versions of the FSP core plasma model will need to be benchmarked against the PTRANSP simulations. This will be necessary to build user confidence in FSP, but this benchmarking can only be done if PTRANSP itself is maintained and developed.

  3. The Social Network of Tracer Variations and O(100) Uncertain Photochemical Parameters in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Labute, M.; Chowdhary, K.; Debusschere, B.; Cameron-Smith, P. J.

    2014-12-01

    Simulating the atmospheric cycles of ozone, methane, and other radiatively important trace gases in global climate models is computationally demanding and requires hundreds of photochemical parameters with uncertain values. Quantitative analysis of the effects of these uncertainties on tracer distributions, radiative forcing, and other model responses is hindered by the "curse of dimensionality." We describe efforts to overcome this curse using ensemble simulations and advanced statistical methods. Uncertainties from 95 photochemical parameters in the trop-MOZART scheme were sampled using a Monte Carlo method and propagated through 10,000 simulations of the single-column version of the Community Atmosphere Model (CAM). The variance of the ensemble was represented as a network with nodes and edges, and the topology and connections in the network were analyzed using lasso regression, Bayesian compressive sensing, and centrality measures from the field of social network theory. Despite the limited sample size for this high-dimensional problem, our methods determined the key sources of variation and co-variation in the ensemble and identified important clusters in the network topology. Our results can be used to better understand the flow of photochemical uncertainty in simulations using CAM and other climate models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and supported by the DOE Office of Science through the Scientific Discovery through Advanced Computing (SciDAC) program.
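
    A toy version of this pipeline, with synthetic data standing in for CAM output and assuming scikit-learn and networkx are available:

      import numpy as np
      import networkx as nx
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      n_samples, n_params = 10_000, 95
      X = rng.normal(size=(n_samples, n_params))       # Monte Carlo parameter samples
      y = (2.0 * X[:, 0] - 1.5 * X[:, 3]               # toy tracer response with an
           + 0.5 * X[:, 0] * X[:, 3]                   # interaction term and noise
           + 0.1 * rng.normal(size=n_samples))

      coef = Lasso(alpha=0.01).fit(X, y).coef_         # sparse sensitivity fit
      active = np.flatnonzero(np.abs(coef) > 1e-3)     # surviving parameters

      # Link the surviving parameters into a network and rank them by a
      # centrality measure; a real analysis would weight edges by co-variation.
      G = nx.Graph()
      G.add_nodes_from(range(n_params))
      G.add_edges_from((i, j) for i in active for j in active if i < j)
      top = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:5]
      print(top)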

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catterall, Simon

    This final report summarizes the work carried out by the Syracuse component of a multi-institutional SciDAC grant led by USQCD. This grant supported software development for theoretical high energy physics. The Syracuse component specifically targeted the development of code for the numerical simulation of N=4 super Yang-Mills theory. The final report describes this work and summarizes the results achieved in exploring the structure of this theory. It also describes the personnel (students and a postdoc) who were directly or indirectly involved in this project. A list of publications is also included.

  5. Searching for Physics Beyond the Standard Model: Strongly-Coupled Field Theories at the Intensity and Energy Frontiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brower, Richard C.

    This proposal is to develop the software and algorithmic infrastructure needed for the numerical study of quantum chromodynamics (QCD), and of theories that have been proposed to describe physics beyond the Standard Model (BSM) of high energy physics, on current and future computers. This infrastructure will enable users (1) to improve the accuracy of QCD calculations to the point where they no longer limit what can be learned from high-precision experiments that seek to test the Standard Model, and (2) to determine the predictions of BSM theories in order to understand which of them are consistent with the data that will soon be available from the LHC. Work will include the extension and optimizations of community codes for the next generation of leadership class computers, the IBM Blue Gene/Q and the Cray XE/XK, and for the dedicated hardware funded for our field by the Department of Energy. Members of our collaboration at Brookhaven National Laboratory and Columbia University worked on the design of the Blue Gene/Q, and have begun to develop software for it. Under this grant we will build upon their experience to produce high-efficiency production codes for this machine. Cray XE/XK computers with many thousands of GPU accelerators will soon be available, and the dedicated commodity clusters we obtain with DOE funding include growing numbers of GPUs. We will work with our partners in NVIDIA's Emerging Technology group to scale our existing software to thousands of GPUs, and to produce highly efficient production codes for these machines. Work under this grant will also include the development of new algorithms for the effective use of heterogeneous computers, and their integration into our codes. It will include improvements of Krylov solvers and the development of new multigrid methods in collaboration with members of the FASTMath SciDAC Institute, using their HYPRE framework, as well as work on improved symplectic integrators.

  6. Active control of ECCD-induced tearing mode stabilization in coupled NIMROD/GENRAY HPC simulations

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas; Kruger, Scott; Held, Eric

    2013-10-01

    Actively controlled electron cyclotron current drive (ECCD) applied in or near magnetic islands formed by neoclassical tearing modes (NTMs) has been shown to control or suppress these modes, despite uncertainties in island O-point locations (where induced current is most stabilizing) relative to the RF deposition region. Integrated numerical models of the mode stabilization process can resolve these uncertainties and augment experimental efforts to determine optimal ITER NTM stabilization strategies. The advanced SWIM model incorporates RF effects in the equations and closures of extended MHD as 3D (not toroidally or bounce-averaged) quasilinear diffusion coefficients. Equilibration of the driven current within the island geometry is modeled using the same extended MHD dynamics governing the physics of island formation, yielding a more accurate and self-consistent picture of island response to RF drive. Additionally, a numerical active feedback control system gathers data from synthetic diagnostics to dynamically trigger and spatially align the RF fields. Computations which model the RF deposition using ray tracing, assemble the 3D quasilinear operator from ray and profile data, calculate the resultant extended-MHD forces, and dynamically realign the RF to more efficiently stabilize modes are presented; the efficacy of various control strategies is also discussed. Supported by the SciDAC Center for Extended MHD Modeling (CEMM); see also https://cswim.org.
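
    The control logic can be caricatured in a few lines; every name below is a hypothetical stand-in for the coupled NIMROD/GENRAY machinery.

      def control_step(diagnostics, rf, trigger_width=0.02):
          """Trigger ECCD and align it with the island O-point when needed."""
          if diagnostics["island_width"] > trigger_width:
              rf["on"] = True
              # realign deposition with the synthetic-diagnostic O-point estimate
              rf["deposition_angle"] = diagnostics["o_point_angle"]
          else:
              rf["on"] = False               # suppress drive for small islands
          return rf

      rf_state = {"on": False, "deposition_angle": 0.0}
      print(control_step({"island_width": 0.05, "o_point_angle": 1.2}, rf_state))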

  7. Using High Performance Computing to Understand Roles of Labile and Nonlabile U(VI) on Hanford 300 Area Plume Longevity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lichtner, Peter C.; Hammond, Glenn E.

    Evolution of a hexavalent uranium [U(VI)] plume at the Hanford 300 Area bordering the Columbia River is investigated to evaluate the roles of labile and nonlabile forms of U(VI) on the longevity of the plume. A high-fidelity, three-dimensional, field-scale, reactive flow and transport model is used to represent the system. Richards' equation coupled to multicomponent reactive transport equations is solved for times up to 100 years, taking into account rapid fluctuations in the Columbia River stage resulting in pulse releases of U(VI) into the river. The petascale computer code PFLOTRAN, developed under a DOE SciDAC-2 project, is employed in the simulations and executed on ORNL's Cray XT5 supercomputer Jaguar. Labile U(VI) is represented in the model through surface complexation reactions and its nonlabile form through dissolution of metatorbernite, used as a surrogate mineral. Initial conditions are constructed corresponding to the U(VI) plume already in place, to avoid uncertainties associated with the lack of historical data for the waste stream. The cumulative U(VI) flux into the river is compared for cases of equilibrium and multirate sorption models and for no sorption. The sensitivity of the U(VI) flux into the river to the initial plume configuration is investigated. The presence of nonlabile U(VI) was found to be essential in explaining the longevity of the U(VI) plume and the prolonged high U(VI) concentrations at the site exceeding the EPA MCL for uranium.

  8. 3D toroidal physics: testing the boundaries of symmetry breaking

    NASA Astrophysics Data System (ADS)

    Spong, Don

    2014-10-01

    Toroidal symmetry is an important concept for plasma confinement; it allows the existence of nested flux surface MHD equilibria and conserved invariants for particle motion. However, perfect symmetry is unachievable in realistic toroidal plasma devices. For example, tokamaks have toroidal ripple due to discrete field coils, optimized stellarators do not achieve exact quasi-symmetry, the plasma itself continually seeks lower energy states through helical 3D deformations, and reactors will likely have non-uniform distributions of ferritic steel near the plasma. Also, some level of designed-in 3D magnetic field structure is now anticipated for most concepts in order to lead to a stable, steady-state fusion reactor. Such planned 3D field structures can take many forms, ranging from tokamaks with weak 3D ELM-suppression fields to stellarators with more dominant 3D field structures. There is considerable interest in the development of unified physics models for the full range of 3D effects. Ultimately, the questions of how much symmetry breaking can be tolerated and how to optimize its design must be addressed for all fusion concepts. Fortunately, significant progress is underway in theory, computation and plasma diagnostics on many issues such as magnetic surface quality, plasma screening vs. amplification of 3D perturbations, 3D transport, influence on edge pedestal structures, MHD stability effects, modification of fast ion-driven instabilities, prediction of energetic particle heat loads on plasma-facing materials, effects of 3D fields on turbulence, and magnetic coil design. A closely coupled program of simulation, experimental validation, and design optimization is required to determine what forms and amplitudes of 3D shaping and symmetry breaking will be compatible with future fusion reactors. The development of models to address 3D physics and progress in these areas will be described. This work is supported both by the US Department of Energy under Contract DE-AC05-00OR22725 with UT-Battelle, LLC and under the US DOE SciDAC GSEP Center.

  9. Study of Plasma Liner Driven Magnetized Target Fusion Via Advanced Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samulyak, Roman V.; Brookhaven National Lab.; Parks, Paul

    The feasibility of plasma liner driven Magnetized Target Fusion (MTF) will be assessed via terascale numerical simulations. In the MTF concept, a plasma liner, formed by merging a number (60 or more) of radial, highly supersonic plasma jets, implodes on a target in the form of two compact plasma toroids and compresses it to fusion ignition conditions. By avoiding major difficulties associated with both traditional laser-driven inertial confinement fusion and solid liner driven MTF, plasma liner driven MTF potentially provides a low-cost and fast R&D path towards the demonstration of practical fusion energy. High-fidelity numerical simulations of the full nonlinear models associated with plasma liner MTF, using state-of-the-art numerical algorithms and terascale computing, are necessary in order to resolve uncertainties and provide guidance for future experiments. At Stony Brook University, we have developed unique computational capabilities that ideally suit the MTF problem. The FronTier code, developed in collaboration with BNL and LANL under DOE funding including SciDAC for the simulation of 3D multi-material hydro and MHD flows, has been benchmarked and used for fundamental and engineering problems in energy science applications. We have performed 3D simulations of converging supersonic plasma jets, their merger and the formation of the plasma liner, and a study of the corresponding oblique shock problem. We have studied the implosion of the plasma liner on the magnetized plasma target by resolving Rayleigh-Taylor instabilities in 2D and 3D and other relevant physics, and estimated the thermodynamic conditions of the target at the moment of maximum compression and the hydrodynamic efficiency of the method.

  10. Hybrid simulation of fishbone instabilities in the EAST tokamak

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Fu, Guoyong; Wang, Feng; Xu, Liqing; Li, Guoqiang; Liu, Chengyue; EAST Team

    2017-10-01

    Hybrid simulations with the global kinetic-MHD code M3D-K have been carried out to investigate the linear stability and nonlinear dynamics of beam-driven fishbones in EAST experiments. Linear simulations show that a low frequency fishbone instability is excited at the experimental value of beam ion pressure. The mode is mainly driven by low energy beam ions via precessional resonance. The results are consistent with the experimental measurements with respect to mode frequency and mode structure. When the beam ion pressure is increased to exceed a critical value, the low frequency mode transitions to a beta-induced Alfvén eigenmode (BAE) with much higher frequency. Nonlinear simulations show that the frequency of the low frequency fishbone chirps up and down with corresponding hole-clump structures in phase space, consistent with the Berk-Breizman theory. In addition to the low frequency mode, the high frequency BAE is excited during the nonlinear evolution. For the transitional case of beam pressure fraction where the low and high frequency modes are simultaneously excited in the linear phase, only one dominant mode appears in the nonlinear phase, with a frequency that jumps up and down during the nonlinear evolution. This work is supported by the National Natural Science Foundation of China under Grant Nos. 11605245 and 11505022, and the CASHIPS Director's Fund under Grant No. YZJJ201510, and the Department of Energy Scientific Discovery through Advanced Computing (SciDAC) under Grant No. DE-AC02-09CH11466.

  11. Advanced Discontinuous Galerkin Algorithms and First Open-Field Line Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hammett, G. W.; Hakim, A.; Shi, E. L.

    2016-10-01

    New versions of Discontinuous Galerkin (DG) algorithms have interesting features that may help with challenging higher-dimensional kinetic problems. We are developing the gyrokinetic code Gkeyll based on DG. DG also has features that may help on the next generation of exascale computers. Higher-order methods do more FLOPS to extract more information per byte, thus reducing memory and communication costs (which are a bottleneck at exascale). DG uses efficient Gaussian quadrature like finite elements, but keeps the calculation local for the kinetic solver, also reducing communication. Sparse-grid methods might further reduce the cost significantly in higher dimensions. The inner-product norm can be chosen to preserve energy conservation with non-polynomial basis functions (such as Maxwellian-weighted bases), which can be viewed as a Petrov-Galerkin method. This allows a full-F code to benefit from Gaussian quadrature similar to that used in popular δf gyrokinetic codes. Consistent basis functions avoid high-frequency numerical modes from electromagnetic terms. We will show our first results of 3x+2v simulations of open-field-line/SOL turbulence in a simple helical geometry (like the Helimak/TORPEX), with parameters from LAPD, TORPEX, and NSTX. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
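
    The payoff of Maxwellian-weighted quadrature can be seen in a generic numerical example (this is not Gkeyll code): a four-point Gauss-Hermite rule reproduces low-order moments of a Maxwellian exactly.

      import numpy as np

      nodes, weights = np.polynomial.hermite.hermgauss(4)   # 4-point rule

      def maxwellian_moment(p):
          """Integrate v**p * exp(-v**2/2)/sqrt(2*pi) over v via v = sqrt(2)*x."""
          v = np.sqrt(2.0) * nodes
          return np.sum(weights * v**p) / np.sqrt(np.pi)

      for p in (0, 2, 4):
          print(p, maxwellian_moment(p))   # exact values: 1, 1, 3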

  12. Final report for Utah State's SciDAC CEMM contribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. Eric Held

    2008-05-13

    This document represents a summary of work carried out at Utah State University in conjunction with the Center for Extended Magnetohydrodynamic Modeling (CEMM). The principal investigator, Dr. Eric Held, was aided in this work by two former graduate students, Drs. John James and Michael Addae-Kagyah, who completed their PhDs while partially funded by CEMM monies. In addition, Dr. Jeong-Young Ji, a postdoctoral researcher, and Mukta Sharma, a graduate student, were supported. The work associated with this grant focused on developing an efficient, hybrid fluid/kinetic model for fusion plasmas. Specifically, expressions for the parallel heat fluxes and stresses in magnetized plasmas were implemented and exercised in the NIMROD plasma fluid code.

  13. Scalable Data Management, Analysis, and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Han-Wei

    This report is the entire final report for the SciDAC project, authored by the whole team; OSU is one of the contributors. The report is organized into sections and subsections, each covering an area of development and deployment of technologies applied to scientific applications of interest to the Department of Energy. Each subsection includes: 1) a summary description of the research, development, and deployment carried out, the results, and the extent to which the stated project objectives were met; 2) significant results, including major findings, developments, or conclusions; 3) products, such as publications and presentations, software developed, project website(s), technologies or techniques, inventions, awards, etc.; and 4) conclusions of the project and future directions for research, development, and deployment in this technology area.

  14. The Earth System Grid Center for Enabling Technologies (ESG-CET): Scaling the Earth System Grid to Petascale Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Dean N.

    2007-09-27

    This report, which summarizes work carried out by the ESG-CET during the period April 1, 2007 through September 30, 2007, includes discussion of overall progress, period goals, highlights, collaborations and presentations. To learn more about our project, please visit the Earth System Grid website. In addition, this report will be forwarded to the DOE SciDAC project management, the Office of Biological and Environmental Research (OBER) project management, national and international stakeholders (e.g., the Community Climate System Model (CCSM), the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5), the Climate Science Computational End Station (CCES), etc.), and collaborators. The ESG-CET executive committee consists of David Bernholdt, ORNL; Ian Foster, ANL; Don Middleton, NCAR; and Dean Williams, LLNL. The ESG-CET team is a collective of researchers and scientists with diverse domain knowledge, whose home institutions include seven laboratories (ANL, LANL, LBNL, LLNL, NCAR, ORNL, PMEL) and one university (ISI/USC); all work in close collaboration with the project's stakeholders and domain researchers and scientists. During this semi-annual reporting period, the ESG-CET increased its efforts on completing requirement documents, framework design, and component prototyping. As we strove to complete and expand the overall ESG-CET architectural plans and use-case scenarios to fit our constituency's scope of use, we continued to provide production-level services to the community. These services continued for IPCC AR4, CCES, and CCSM, and were extended to include Cloud Feedback Model Intercomparison Project (CFMIP) data.

  15. Final Progress Report The U.S. Department of Energy Research Grant No. DE-SC0008660 Plasma Surface Interactions: Bridging from the Surface to the Micron Frontier through Leadership Class Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasheninnikov, Sergei; Smirnov, Roman; Guterl, Jerome

    The choice of material for the plasma-facing components (PFCs), in particular for divertor targets, is one of the main issues for future tokamak reactors. There are two major requirements for the PFC material: an acceptable level of tritium retention and durability in the harsh environment of fusion-grade plasma. Based on these criteria, some years ago it was decided that tungsten is an acceptable material for divertor targets in ITER. However, further experimental studies revealed that irradiation of tungsten even with low-energy (well below the sputtering threshold!) He-containing plasma causes significant modification of the surface morphology: formation of a layer of He nano-bubbles (in the temperature range T < 1000 K) or "fuzz" (for 1000 K < T < 2000 K) (e.g., see Fig. 1). Recall that He, being an ash of D-T fusion reactions, is an inherent impurity in fusion plasma. The goals of the UCSD Applied Plasma Theory Group were: i) to investigate the mechanisms of the formation of the He nano-bubble layer and fuzz growth under He irradiation, as well as the physics of transport of hydrogen species in the tungsten lattice, and ii) to develop physics understanding and models suitable for incorporation into the Xolotl-PSI code based on the reaction-diffusion approach, which is the flagship of the whole SciDAC project [8] and which can guide both numerical simulations and experimental studies. Here we highlight our major accomplishments.

  16. Nuclei and Fundamental Symmetries

    NASA Astrophysics Data System (ADS)

    Haxton, Wick

    2016-09-01

    Nuclei provide marvelous laboratories for testing fundamental interactions, often enhancing weak processes through accidental degeneracies among states, and providing selection rules that can be exploited to isolate selected interactions. I will give an overview of current work, including the use of parity violation to probe unknown aspects of the hadronic weak interaction; nuclear electric dipole moment searches that may shed light on new sources of CP violation; and tests of lepton number violation made possible by the fact that many nuclei can only decay by rare second-order weak interactions. I will point to opportunities in both theory and experiment to advance the field. Based upon work supported in part by the US Department of Energy, Office of Science, Office of Nuclear Physics and SciDAC under Awards DE-SC00046548 (Berkeley), DE-AC02-05CH11231 (LBNL), and KB0301052 (LBNL).

  17. Nonlinear electromagnetic gyrokinetic particle simulations with the electron hybrid model

    NASA Astrophysics Data System (ADS)

    Nishimura, Y.; Lin, Z.; Chen, L.; Hahm, T.; Wang, W.; Lee, W.

    2006-10-01

    The electromagnetic model with fluid electrons has been successfully implemented in the global gyrokinetic code GTC. In the ideal MHD limit, shear Alfvén wave oscillation and continuum damping are demonstrated. Nonlinear electromagnetic simulation is further pursued in the presence of finite ηi. Turbulent transport in the AITG-unstable β regime is studied. This work is supported by Department of Energy (DOE) Grant DE-FG02-03ER54724, Cooperative Agreement No. DE-FC02-04ER54796 (UCI), DOE Contract No. DE-AC02-76CH03073 (PPPL), and in part by the SciDAC Center for Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas. Z. Lin, et al., Science 281, 1835 (1998). F. Zonca and L. Chen, Plasma Phys. Controlled Fusion 30, 2240 (1998); G. Zhao and L. Chen, Phys. Plasmas 9, 861 (2002).

  18. The Simulated Impact of Dimethyl Sulfide Emissions on the Earth System

    NASA Astrophysics Data System (ADS)

    Cameron-Smith, P. J.; Elliott, S.; Shrivastava, M. B.; Burrows, S. M.; Maltrud, M. E.; Lucas, D. D.; Ghan, S.

    2015-12-01

    Dimethyl sulfide (DMS) is one of many biologically derived gases and particles emitted from the ocean that have the potential to affect climate. DMS is oxidized to sulfate, which increases the aerosol loading in the atmosphere either through nucleation or condensation on other aerosols, which in turn changes the energy balance of the Earth by reflecting sunlight, either directly from the aerosols or by modifying clouds. We have previously shown that the geographical distribution of DMS emission from the ocean may be quite sensitive to climate changes, especially in the Southern Ocean. Our state-of-the-art sulfur-cycle Earth system model (ESM), based on the Community Earth System Model (CESM) climate model, includes an ocean sulfur ecosystem model, the oxidation of DMS to sulfate by atmospheric chemistry, and the indirect effect of sulfate on radiation via clouds using the Modal Aerosol Model (MAM). Our multi-decadal simulations calculate the impact of DMS on the energy balance and climate of the Earth system, and its sensitivity/feedback to climate change. The estimate from our simulations is that DMS is responsible for ~6 W/m2 of reflected sunlight in the pre-industrial era (globally averaged), and ~4 W/m2 in the present era. The reduction is caused by increased competition with cloud condensation nuclei from anthropogenic aerosols in the present era, and therefore partially offsets the cooling from the anthropogenic aerosols. The distribution of these effects is not uniform and does not necessarily follow the simulated DMS distribution, because some clouds are more sensitive to DMS-derived sulfate than others, and there are surface feedbacks such as the ice-albedo feedback. Although our calculated impact of DMS is higher than some previous studies, it is not much higher than recent observational estimates (McCoy, et al., 2015). We are now porting these capabilities to the US Department of Energy's Accelerated Climate Modeling for Energy (ACME) model. This work was conducted by the ACME and SciDAC programs of the Office of Biological and Environmental Research and the Office of Advanced Scientific Computing Research of the U.S. Department of Energy. Prepared by LLNL under Contract DE-AC52-07NA27344.

  19. Bayesian truncation errors in chiral effective field theory: model checking and accounting for correlations

    NASA Astrophysics Data System (ADS)

    Melendez, Jordan; Wesolowski, Sarah; Furnstahl, Dick

    2017-09-01

    Chiral effective field theory (EFT) predictions are necessarily truncated at some order in the EFT expansion, which induces an error that must be quantified for robust statistical comparisons to experiment. A Bayesian model yields posterior probability distribution functions for these errors based on expectations of naturalness encoded in Bayesian priors and the observed order-by-order convergence pattern of the EFT. As a general example of a statistical approach to truncation errors, the model was applied to chiral EFT for neutron-proton scattering using various semi-local potentials of Epelbaum, Krebs, and Meißner (EKM). Here we discuss how our model can learn correlation information from the data and how to perform Bayesian model checking to validate that the EFT is working as advertised. Supported in part by NSF PHY-1614460 and DOE NUCLEI SciDAC DE-SC0008533.
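
    A stripped-down, pointwise version of such a truncation-error model (the numbers below are illustrative, not EKM potential output): with the naturalness prior c_(k+1) ~ N(0, cbar^2) and cbar estimated from the observed coefficients, the first omitted term c_(k+1) Q^(k+1) sets the width of the error band.

      import numpy as np

      Q = 0.33                                   # expansion parameter
      coeffs = np.array([1.0, -0.7, 0.4, 0.9])   # extracted c_0..c_3 at one energy

      cbar2 = np.mean(coeffs**2)                 # point estimate of the prior scale
      k = len(coeffs) - 1
      sigma = np.sqrt(cbar2) * Q**(k + 1)        # 1-sigma width of c_(k+1) Q^(k+1)
      print(f"68% truncation band: +/- {sigma:.4f} (in units of the reference scale)")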

  20. Disordered nuclear pasta, magnetic field decay, and crust cooling in neutron stars

    NASA Astrophysics Data System (ADS)

    Horowitz, C. J.; Berry, D. K.; Briggs, C. M.; Caplan, M. E.; Cumming, A.; Schneider, A. S.

    2015-04-01

    Nuclear pasta, with non-spherical shapes, is expected near the base of the crust in neutron stars. Large-scale molecular dynamics simulations of pasta show long-lived topological defects that could increase electron scattering and reduce both the thermal and electrical conductivities. We model a possible low-conductivity pasta layer by increasing an impurity parameter Qimp. Predictions of light curves for the low-mass X-ray binary MXB 1659-29, assuming a large Qimp, find continued late-time cooling that is consistent with Chandra observations. The electrical and thermal conductivities are likely related. Therefore observations of late-time crust cooling can provide insight on the electrical conductivity and the possible decay of neutron star magnetic fields (assuming these are supported by currents in the crust). This research was supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).

  1. A tensor network approach to many-body localization

    NASA Astrophysics Data System (ADS)

    Yu, Xiongjie; Pekker, David; Clark, Bryan

    Understanding the many-body localized (MBL) phase requires access to eigenstates in the middle of the many-body spectrum. While exact diagonalization is able to access these eigenstates, it is restricted to system sizes of about 22 spins. To overcome this limitation, we develop tensor network algorithms which increase the accessible system size by an order of magnitude. We describe both our new algorithms and the additional physics of MBL we can extract from them. For example, we demonstrate the power of these methods by verifying the breakdown of the Eigenstate Thermalization Hypothesis (ETH) in the many-body localized phase of the random-field Heisenberg model, showing the saturation of entanglement in the MBL phase, and generating eigenstates that differ by local excitations. Work was supported by AFOSR FA9550-10-1-0524 and FA9550-12-1-0057, the Kaufmann foundation, and SciDAC FG02-12ER46875.

  2. Mississippi Curriculum Framework for Computer Information Systems Technology. Computer Information Systems Technology (Program CIP: 52.1201--Management Information Systems & Business Data). Computer Programming (Program CIP: 52.1201). Network Support (Program CIP: 52.1290--Computer Network Support Technology). Postsecondary Programs.

    ERIC Educational Resources Information Center

    Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.

    This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for two programs in the state's postsecondary-level computer information systems technology cluster: computer programming and network support. Presented in the introduction are program descriptions and suggested course…

  3. 78 FR 73195 - Privacy Act of 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-05

    .... Description of the Matching Program A. General The Computer Matching and Privacy Protection Act of 1988 (Pub... 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching Program Match No. 1312...). ACTION: Notice of Computer Matching Program (CMP). SUMMARY: In accordance with the requirements of the...

  4. Handheld Computer Use in U.S. Family Practice Residency Programs

    PubMed Central

    Criswell, Dan F.; Parchman, Michael L.

    2002-01-01

    Objective: The purpose of the study was to evaluate the uses of handheld computers (also called personal digital assistants, or PDAs) in family practice residency programs in the United States. Study Design: In November 2000, the authors mailed a questionnaire to the program directors of all American Academy of Family Physicians (AAFP) and American College of Osteopathic Family Practice (ACOFP) residency programs in the United States. Measurements: Data and patterns of the use and non-use of handheld computers were identified. Results: Approximately 50 percent (306 of 610) of the programs responded to the survey. Two thirds of the programs reported that handheld computers were used in their residencies, and an additional 14 percent had plans for implementation within 24 months. Both the Palm and the Windows CE operating systems were used, with the Palm operating system the most common. Military programs had the highest rate of use (8 of 10 programs, 80 percent), and osteopathic programs had the lowest (23 of 55 programs, 42 percent). Of programs that reported handheld computer use, 45 percent had required handheld computer applications that are used uniformly by all users. Funding for handheld computers and related applications was non-budgeted in 76 percent of the programs in which handheld computers were used. In programs providing a budget for handheld computers, the average annual budget per user was $461.58. Interested faculty or residents, rather than computer information services personnel, performed upkeep and maintenance of handheld computers in 72 percent of the programs in which the computers are used. In addition to the installed calendar, memo pad, and address book, the most common clinical uses of handheld computers in the programs were as medication reference tools, electronic textbooks, and clinical computational or calculator-type programs. Conclusions: Handheld computers are widely used in family practice residency programs in the United States. Although handheld computers were designed as electronic organizers, in family practice residencies they are used as medication reference tools, electronic textbooks, and clinical computational programs and to track activities that were previously associated with desktop database applications. PMID:11751806

  5. Handheld computer use in U.S. family practice residency programs.

    PubMed

    Criswell, Dan F; Parchman, Michael L

    2002-01-01

    The purpose of the study was to evaluate the uses of handheld computers (also called personal digital assistants, or PDAs) in family practice residency programs in the United States. In November 2000, the authors mailed a questionnaire to the program directors of all American Academy of Family Physicians (AAFP) and American College of Osteopathic Family Practice (ACOFP) residency programs in the United States. Data and patterns of the use and non-use of handheld computers were identified. Approximately 50 percent (306 of 610) of the programs responded to the survey. Two thirds of the programs reported that handheld computers were used in their residencies, and an additional 14 percent had plans for implementation within 24 months. Both the Palm and the Windows CE operating systems were used, with the Palm operating system the most common. Military programs had the highest rate of use (8 of 10 programs, 80 percent), and osteopathic programs had the lowest (23 of 55 programs, 42 percent). Of programs that reported handheld computer use, 45 percent had required handheld computer applications that are used uniformly by all users. Funding for handheld computers and related applications was non-budgeted in 76 percent of the programs in which handheld computers were used. In programs providing a budget for handheld computers, the average annual budget per user was 461.58 dollars. Interested faculty or residents, rather than computer information services personnel, performed upkeep and maintenance of handheld computers in 72 percent of the programs in which the computers are used. In addition to the installed calendar, memo pad, and address book, the most common clinical uses of handheld computers in the programs were as medication reference tools, electronic textbooks, and clinical computational or calculator-type programs. Handheld computers are widely used in family practice residency programs in the United States. Although handheld computers were designed as electronic organizers, in family practice residencies they are used as medication reference tools, electronic textbooks, and clinical computational programs and to track activities that were previously associated with desktop database applications.

  6. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs, which provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
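
    The LCG recurrence itself, x_(n+1) = (a*x_n + c) mod m, is compact; the sketch below uses the widely known Lewis-Goodman-Miller "minimal standard" parameters, which are not necessarily those selected in the report.

      def lcg(seed, a=16807, c=0, m=2**31 - 1):
          """Yield a stream of pseudorandom integers in [1, m-1] for c = 0."""
          x = seed
          while True:
              x = (a * x + c) % m
              yield x

      gen = lcg(seed=12345)
      ints = [next(gen) for _ in range(5)]
      print(ints)                                # reproducible for a fixed seed
      print([x / (2**31 - 1) for x in ints])     # scaled to the unit interval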

  7. DIALOG: An executive computer program for linking independent programs

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Watson, D. A.

    1973-01-01

    A very large scale computer programming procedure called the DIALOG executive system was developed for the CDC 6000 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management functions for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. Each computer program maintains its individual identity and is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG executive system. The installation and use of the DIALOG executive system are described.
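
    The executive pattern can be sketched in a few lines; the two "programs" and the data-base fields below are invented for illustration.

      def aero_program(db):
          # an independent analysis code reading and writing the common data base
          db["lift"] = 0.5 * db["rho"] * db["v"] ** 2 * db["area"] * db["cl"]

      def sizing_program(db):
          db["wing_mass"] = 20.0 * db["area"]    # toy sizing rule

      def dialog_executive(programs, database):
          """Run each program in sequence; all sharing flows through `database`."""
          for program in programs:
              program(database)                  # each program keeps its identity
          return database

      db = {"rho": 1.225, "v": 60.0, "area": 16.0, "cl": 1.1}
      print(dialog_executive([aero_program, sizing_program], db))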

  8. A CAD (Classroom Assessment Design) of a Computer Programming Course

    ERIC Educational Resources Information Center

    Hawi, Nazir S.

    2012-01-01

    This paper presents a CAD (classroom assessment design) of an entry-level undergraduate computer programming course "Computer Programming I". CAD has been the product of a long experience in teaching computer programming courses including teaching "Computer Programming I" 22 times. Each semester, CAD is evaluated and modified…

  9. Comparison of two computer programs by predicting turbulent mixing of helium in a ducted supersonic airstream

    NASA Technical Reports Server (NTRS)

    Pan, Y. S.; Drummond, J. P.; Mcclinton, C. R.

    1978-01-01

    Two parabolic-flow computer programs, SHIP (a finite-difference program) and COMOC (a finite-element program), are used for predicting three-dimensional turbulent reacting flow fields in supersonic combustors. The theoretical foundations of the two computer programs are described, and the programs are then applied to a three-dimensional turbulent mixing experiment. The cold (nonreacting) flow experiment was performed to study the mixing of helium jets with a supersonic airstream in a rectangular duct. Surveys of the flow field at an upstream station were used as the initial data by both programs; surveys at a downstream station provided a comparison to assess program accuracy. Both computer programs predicted the experimental results and data trends reasonably well. However, the comparison between the computations from the two programs indicated that SHIP was more accurate and more efficient in both computer storage and computing time than COMOC.

  10. Computer program CDCID: an automated quality control program using CDC update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, G.L.; Aguilar, F.

    1984-04-01

    A computer program, CDCID, has been developed in coordination with a quality control program to provide a highly automated method of documenting changes to computer codes at EG and G Idaho, Inc. The method uses the standard CDC UPDATE program in such a manner that updates and their associated documentation are easily made and retrieved in various formats. The method allows each card image of a source program to point to the document which describes it, who created the card, and when it was created. The method described is applicable to the quality control of computer programs in general. The computer program described is executable only on CDC computing systems, but the program could be modified and applied to any computing system with an adequate updating program.
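
    The per-line provenance idea generalizes readily; a toy rendering follows (field names invented; CDCID itself worked through CDC UPDATE card images).

      from dataclasses import dataclass
      import datetime

      @dataclass
      class CardImage:
          text: str          # one source line ("card image")
          document: str      # change document that describes this line
          author: str        # who created the card
          created: datetime.date

      source = [
          CardImage("      PROGRAM DEMO", "DOC-001", "GLS", datetime.date(1984, 4, 2)),
          CardImage("      CALL SOLVE",   "DOC-017", "FA",  datetime.date(1984, 4, 9)),
      ]
      for number, card in enumerate(source, start=1):
          print(number, card.text.strip(), "<-", card.document, card.author, card.created)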

  11. Application Portable Parallel Library

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which it was created to another parallel computer. ("Parallel computer" also includes heterogeneous collections of networked computers.) Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  12. DIALOG: An executive computer program for linking independent programs

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Watson, D. A.

    1973-01-01

    A very large scale computer programming procedure called the DIALOG Executive System has been developed for the Univac 1100 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. The unique feature of the DIALOG Executive System is the manner in which computer programs are linked. Each program maintains its individual identity and as such is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG Executive System. The installation and use of the DIALOG Executive System are described at Johnson Space Center.

  13. Programming the social computer.

    PubMed

    Robertson, David; Giunchiglia, Fausto

    2013-03-28

    The aim of 'programming the global computer' was identified by Milner and others as one of the grand challenges of computing research. At the time this phrase was coined, it was natural to assume that this objective might be achieved primarily through extending programming and specification languages. The Internet, however, has brought with it a different style of computation that (although harnessing variants of traditional programming languages) operates in a style different to those with which we are familiar. The 'computer' on which we are running these computations is a social computer in the sense that many of the elementary functions of the computations it runs are performed by humans, and successful execution of a program often depends on properties of the human society over which the program operates. These sorts of programs are not programmed in a traditional way and may have to be understood in a way that is different from the traditional view of programming. This shift in perspective raises new challenges for the science of the Web and for computing in general.

  14. Gender Differences in the Use of Computers, Programming, and Peer Interactions in Computer Science Classrooms

    ERIC Educational Resources Information Center

    Stoilescu, Dorian; Egodawatte, Gunawardena

    2010-01-01

    Research shows that female and male students in undergraduate computer science programs view computer culture differently. Female students are interested more in the use of computers than in doing programming, whereas male students see computer science mainly as a programming activity. The overall purpose of our research was not to find new…

  15. Quantum Monte Carlo Simulation of condensed van der Waals Systems

    NASA Astrophysics Data System (ADS)

    Benali, Anouar; Shulenburger, Luke; Romero, Nichols A.; Kim, Jeongnim; Anatole von Lilienfeld, O.

    2012-02-01

    Van der Waals forces are as ubiquitous as they are infamous. While post-Hartree-Fock methods enable accurate estimates of these forces in molecules and clusters, they remain elusive for many-electron condensed phase systems. We present Quantum Monte Carlo [1,2] results for condensed van der Waals systems. Interatomic many-body contributions to cohesive energies and bulk moduli will be discussed. Numerical evidence is presented for crystals of rare gas atoms and compared to experiments and other methods [3]. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DoE's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000. [1] J. Kim, K. Esler, J. McMinis and D. Ceperley, SciDAC 2010, J. of Physics: Conference Series, Chattanooga, Tennessee, July 11, 2011. [2] QMCPACK simulation suite, http://qmcpack.cmscc.org (unpublished). [3] O. A. von Lilienfeld and A. Tkatchenko, J. Chem. Phys. 132, 234109 (2010).

  16. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  17. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  18. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  19. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  20. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  1. Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nugent, Peter E.; Simonson, J. Michael

    2011-10-24

    This report is based on the Department of Energy (DOE) Workshop on “Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery” that was held at the Bethesda Marriott in Maryland on October 24-25, 2011. The workshop brought together leading researchers from the Basic Energy Sciences (BES) facilities and Advanced Scientific Computing Research (ASCR). The workshop was co-sponsored by these two Offices to identify opportunities and needs for data analysis, ownership, storage, mining, provenance and data transfer at light sources, neutron sources, microscopy centers and other facilities. Their charge was to identify current and anticipated issues in the acquisition, analysis, communication and storage of experimental data that could impact the progress of scientific discovery, ascertain what knowledge, methods and tools are needed to mitigate present and projected shortcomings, and create the foundation for information exchanges and collaboration between ASCR and BES supported researchers and facilities. The workshop was organized in the context of the impending data tsunami that will be produced by DOE’s BES facilities. Current facilities, like SLAC National Accelerator Laboratory’s Linac Coherent Light Source, can produce up to 18 terabytes (TB) per day, while upgraded detectors at Lawrence Berkeley National Laboratory’s Advanced Light Source will generate ~10 TB per hour. The expectation is that these rates will increase by over an order of magnitude in the coming decade. The urgency of developing new strategies and methods to stay ahead of this deluge and extract the most science from these facilities was recognized by all. The four focus areas addressed in this workshop were: Workflow Management - Experiment to Science: identifying and managing the data path from experiment to publication. Theory and Algorithms: recognizing the need for new tools for computation at scale, supporting large data sets and realistic theoretical models. Visualization and Analysis: supporting near-real-time feedback for experiment optimization and new ways to extract and communicate critical information from large data sets. Data Processing and Management: outlining needs in computational and communication approaches and in the infrastructure needed to handle unprecedented data volume and information content. It should be noted that almost all participants recognized that there were unlikely to be any turn-key solutions available due to the unique, diverse nature of the BES community, where research at adjacent beamlines at a given light source facility often spans everything from biology to materials science to chemistry using scattering, imaging and/or spectroscopy. However, it was also noted that advances supported by other programs in data research, methodologies, and tool development could be implemented on reasonable time scales with modest effort. Adapting available standard file formats, robust workflows, and in-situ analysis tools for user facility needs could pay long-term dividends. Workshop participants assessed current requirements as well as future challenges and made the following recommendations in order to achieve the ultimate goal of enabling transformative science in current and future BES facilities: integrate theory and analysis components seamlessly within the experimental workflow; develop new algorithms for data analysis based on common data formats and toolsets; move the analysis closer to the experiment to enable real-time (in-situ) streaming capabilities, live visualization of the experiment, and an increase in overall experimental efficiency; match data management access and capabilities with advancements in detectors and sources; and remove bottlenecks, provide interoperability across different facilities and beamlines, and apply forefront mathematical techniques to more efficiently extract science from the experiments. This workshop report examines and reviews the status of several BES facilities and highlights the successes and shortcomings of the current data and communication pathways for scientific discovery. It then ascertains what methods and tools are needed to mitigate present and projected data bottlenecks to science over the next 10 years. The goal of this report is to create the foundation for information exchanges and collaborations among ASCR and BES supported researchers, the BES scientific user facilities, and ASCR computing and networking facilities. To jumpstart these activities, there was a strong desire to see a joint effort between ASCR and BES along the lines of the highly successful Scientific Discovery through Advanced Computing (SciDAC) program, in which integrated teams of engineers, scientists and computer scientists would be engaged to tackle a complete end-to-end workflow solution at one or more beamlines and to ascertain what challenges will need to be addressed in order to handle future increases in data volume.

  2. Computer program user's manual for FIREFINDER digital topographic data verification library dubbing system

    NASA Astrophysics Data System (ADS)

    Ceres, M.; Heselton, L. R., III

    1981-11-01

    This manual describes the computer programs for the FIREFINDER Digital Topographic Data Verification-Library-Dubbing System (FFDTDVLDS), and will assist in the maintenance of these programs. The manual contains detailed flow diagrams and associated descriptions for each computer program routine and subroutine. Complete computer program listings are also included. This information should be used when changes are made in the computer programs. The operating system has been designed to minimize operator intervention.

  3. AV Programs for Computer Know-How.

    ERIC Educational Resources Information Center

    Mandell, Phyllis Levy

    1985-01-01

    Lists 44 audiovisual programs (most released between 1983 and 1984) grouped in seven categories: computers in society, introduction to computers, computer operations, languages and programing, computer graphics, robotics, computer careers. Excerpts from "School Library Journal" reviews, price, and intended grade level are included. Names…

  4. Guidelines for development of NASA (National Aeronautics and Space Administration) computer security training programs

    NASA Technical Reports Server (NTRS)

    Tompkins, F. G.

    1983-01-01

    The report presents guidance for the NASA Computer Security Program Manager and the NASA Center Computer Security Officials as they develop training requirements and implement computer security training programs. NASA audiences are categorized based on the computer security knowledge required to accomplish identified job functions. Training requirements, in terms of training subject areas, are presented for both computer security program management personnel and computer resource providers and users. Sources of computer security training are identified.

  5. Computer programs: Operational and mathematical, a compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.

  6. Computer Literacy Project. A General Orientation in Basic Computer Concepts and Applications.

    ERIC Educational Resources Information Center

    Murray, David R.

    This paper proposes a two-part, basic computer literacy program for university faculty, staff, and students with no prior exposure to computers. The program described would introduce basic computer concepts and computing center service programs and resources; provide fundamental preparation for other computer courses; and orient faculty towards…

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elbridge Gerry Puckett

    All of the work conducted under the auspices of DE-FC02-01ER25473 was characterized by exceptionally close collaboration with researchers at the Lawrence Berkeley National Laboratory (LBNL). This included having one of my graduate students - Sarah Williams - spend the summer working with Dr. Ann Almgren, a staff scientist in the Center for Computational Sciences and Engineering (CCSE), which is a part of the National Energy Research Supercomputer Center (NERSC) at LBNL. As a result of this visit Sarah decided to work on a problem suggested by Dr. John Bell, the head of CCSE, for her PhD thesis, which she finished in June 2007. Writing a PhD thesis while working at one of the University of California (UC) managed DOE laboratories is a long-established tradition, and I have always encouraged my students to consider doing this. For example, in 2000 one of my graduate students - Matthew Williams - finished his PhD thesis while working with Dr. Douglas Kothe at the Los Alamos National Laboratory (LANL). Matt is now a staff scientist in the Diagnostic Applications Group in the Applied Physics Division at LANL. Another one of my graduate students - Christopher Algieri - who was partially supported with funds from DE-FC02-01ER25473, wrote an MS thesis that analyzed and extended work published by Dr. Phil Colella and his colleagues in 1998. Dr. Colella is the head of the Applied Numerical Algorithms Group (ANAG) in the National Energy Research Supercomputer Center at LBNL and is the lead PI for the APDEC ISIC, which comprised several National Laboratory research groups and at least five university PIs at five different universities. Chris Algieri is now employed as a staff member in Dr. Bill Collins' research group at LBNL developing computational models for climate change research. Bill Collins was recently hired at LBNL to start and head the Climate Science Department in the Earth Sciences Division at LBNL. Prior to this he had been a Deputy Section Head at the National Center for Atmospheric Research in Colorado. My understanding is that Chris Algieri is the first person that Bill hired after coming to LBNL. The plan is that Chris Algieri will finish his PhD thesis while employed as a staff scientist in Bill's group. Both Sarah and Chris were supported in part with funds from DE-FC02-01ER25473. In Sarah's case she received support both while at U.C. Davis (UCD) taking classes and writing an MS thesis, and during some of the time she was living in Berkeley, working at LBNL, and finishing her PhD thesis. In Chris' case he was at U.C. Davis during the entire time he received support from DE-FC02-01ER25473. More specific details of their work are included in the report below. Finally, my own research conducted under the auspices of DE-FC02-01ER25473 either involved direct collaboration with researchers at LBNL - Phil Colella and Peter Schwartz, who is a member of Phil's Applied Numerical Algorithms Group - or was on problems that are closely related to research that has been and continues to be conducted by researchers at LBNL. Specific details of this work can be found below.
I would also like to note that the work conducted by my students and me under the auspices of this contract is closely related to work that I have performed with funding from my DOE MICS contract DE-FC02-03ER25579, 'Development of High-Order Accurate Interface Tracking Algorithms and Improved Constitutive Models for Problems in Continuum Mechanics with Applications to Jetting', and with my CoPI on that grant, Professor Greg Miller of the Department of Applied Science at UCD. In theory I tried to use funds from the SciDAC grant DE-FC02-01ER25473 to support work that directly involved implementing algorithms developed by my research group at U.C. Davis in software that was developed and is maintained by my SciDAC CoPIs at LBNL.

  8. Information Security: Federal Guidance Needed to Address Control Issues With Implementing Cloud Computing

    DTIC Science & Technology

    2010-05-01

    The report reviews three federal cloud computing programs: the Department of Defense's Rapid Access Computing Environment (RACE) program, the National Aeronautics and Space Administration's (NASA) Nebula program, and the Department of Transportation's CARS program, including lessons learned related to each. (Figures in the report cover cloud computing deployment models, the NIST essential characteristics, and the NASA Nebula container.)

  9. An analysis of the 70-meter antenna hydrostatic bearing by means of computer simulation

    NASA Technical Reports Server (NTRS)

    Bartos, R. D.

    1993-01-01

    Recently, the computer program 'A Computer Solution for Hydrostatic Bearings with Variable Film Thickness,' used to design the hydrostatic bearing of the 70-meter antennas, was modified to improve the accuracy with which the program predicts the film height profile and oil pressure distribution between the hydrostatic bearing pad and the runner. This article presents a description of the modified computer program, the theory upon which its computations are based, computer simulation results, and a discussion of those results.

  10. Computer Programs for Library Operations; Results of a Survey Conducted Between Fall 1971 and Spring 1972.

    ERIC Educational Resources Information Center

    Liberman, Eva; And Others

    Many library operations involving large data banks lend themselves readily to computer operation. In setting up library computer programs, in changing or expanding programs, cost in programming and time delays could be substantially reduced if the programmers had access to library computer programs being used by other libraries, providing similar…

  11. 32 CFR 701.125 - Computer matching program.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 5 2012-07-01 2012-07-01 false Computer matching program. 701.125 Section 701... OF THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.125 Computer matching program. The DPO has responsibility for coordinating the approval of DOD's participation in Computer Matching...

  12. 32 CFR 701.125 - Computer matching program.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 5 2014-07-01 2014-07-01 false Computer matching program. 701.125 Section 701... OF THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.125 Computer matching program. The DPO has responsibility for coordinating the approval of DOD's participation in Computer Matching...

  13. 32 CFR 701.125 - Computer matching program.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 5 2013-07-01 2013-07-01 false Computer matching program. 701.125 Section 701... OF THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.125 Computer matching program. The DPO has responsibility for coordinating the approval of DOD's participation in Computer Matching...

  14. 32 CFR 701.125 - Computer matching program.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false Computer matching program. 701.125 Section 701... OF THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.125 Computer matching program. The DPO has responsibility for coordinating the approval of DOD's participation in Computer Matching...

  15. 32 CFR 701.125 - Computer matching program.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 5 2011-07-01 2011-07-01 false Computer matching program. 701.125 Section 701... OF THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.125 Computer matching program. The DPO has responsibility for coordinating the approval of DOD's participation in Computer Matching...

  16. Gender differences in the use of computers, programming, and peer interactions in computer science classrooms

    NASA Astrophysics Data System (ADS)

    Stoilescu, Dorian; Egodawatte, Gunawardena

    2010-12-01

    Research shows that female and male students in undergraduate computer science programs view computer culture differently. Female students are more interested in the use of computers than in programming, whereas male students see computer science mainly as a programming activity. The overall purpose of our research was not to find new definitions for computer science culture but to see how male and female students see themselves involved in computer science practices, how they view computer science as a career, and what they like and dislike about current computer science practices. The study took place in a mid-sized university in Ontario. Sixteen students and two instructors were interviewed to get their views. We found that male and female views differ on computer use, programming, and the pattern of student interactions. Female and male students did not have any major issues in using computers. In computer programming, female students were much less involved than male students, who were heavily involved. As for opinions about successful computer science professionals, both female and male students emphasized hard work, a detail-oriented approach, and enjoyment of playing with computers. The myth of the geek as the typical profile of a successful computer science student was not found to be true.

  17. Computer program for the computation of total sediment discharge by the modified Einstein procedure

    USGS Publications Warehouse

    Stevens, H.H.

    1985-01-01

    Two versions of a computer program to compute total sediment discharge by the modified Einstein procedure are presented. The FORTRAN 77 language version is for use on the PRIME computer, and the BASIC language version is for use on most microcomputers. The program contains built-in limitations and input-output options that closely follow the original modified Einstein procedure. Program documentation and listings of both versions of the program are included. (USGS)

  18. Programming the Navier-Stokes computer: An abstract machine model and a visual editor

    NASA Technical Reports Server (NTRS)

    Middleton, David; Crockett, Tom; Tomboulian, Sherry

    1988-01-01

    The Navier-Stokes computer is a parallel computer designed to solve Computational Fluid Dynamics problems. Each processor contains several floating point units which can be configured under program control to implement a vector pipeline with several inputs and outputs. Since the development of an effective compiler for this computer appears to be very difficult, machine level programming seems necessary and support tools for this process have been studied. These support tools are organized into a graphical program editor. A programming process is described by which appropriate computations may be efficiently implemented on the Navier-Stokes computer. The graphical editor would support this programming process, verifying various programmer choices for correctness and deducing values such as pipeline delays and network configurations. Step by step details are provided and demonstrated with two example programs.

  19. Computer Electronics. Florida Vocational Program Guide.

    ERIC Educational Resources Information Center

    University of South Florida, Tampa. Dept. of Adult and Vocational Education.

    This packet contains a program guide and Career Merit Achievement Plan (Career MAP) for the implementation of a computer electronics technology (computer service technician) program in Florida secondary and postsecondary schools. The program guide describes the program content and structure, provides a program description, lists job titles under…

  20. Another Program For Generating Interactive Graphics

    NASA Technical Reports Server (NTRS)

    Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl

    1991-01-01

    VAX/Ultrix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. When used throughout company for wide range of applications, makes both application program and computer seem transparent, with noticeable improvements in learning curve. Available in form suitable for following six different groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running AUX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS and IBM RT/PC's and PS/2 computers running AIX, and HP 9000 S

  1. Adolescents' Chunking of Computer Programs.

    ERIC Educational Resources Information Center

    Magliaro, Susan; Burton, John K.

    To investigate what children learn during computer programming instruction, students attending a summer computer camp were asked to recall either single lines or chunks of computer programs from either coherent or scrambled programs. The 16 subjects, ages 12 to 17, were divided into three instructional groups: (1) beginners, who were taught to…

  2. Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.

    ERIC Educational Resources Information Center

    Parkland Coll., Champaign, IL.

    A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…

  3. 32 CFR 505.13 - Computer Matching Agreement Program.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 3 2013-07-01 2013-07-01 false Computer Matching Agreement Program. 505.13... AUTHORITIES AND PUBLIC RELATIONS ARMY PRIVACY ACT PROGRAM § 505.13 Computer Matching Agreement Program. (a) General provisions. (1) Pursuant to the Privacy Act and this part, DA records may be subject to computer...

  4. 32 CFR 505.13 - Computer Matching Agreement Program.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 3 2012-07-01 2009-07-01 true Computer Matching Agreement Program. 505.13... AUTHORITIES AND PUBLIC RELATIONS ARMY PRIVACY ACT PROGRAM § 505.13 Computer Matching Agreement Program. (a) General provisions. (1) Pursuant to the Privacy Act and this part, DA records may be subject to computer...

  5. 32 CFR 505.13 - Computer Matching Agreement Program.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 3 2014-07-01 2014-07-01 false Computer Matching Agreement Program. 505.13... AUTHORITIES AND PUBLIC RELATIONS ARMY PRIVACY ACT PROGRAM § 505.13 Computer Matching Agreement Program. (a) General provisions. (1) Pursuant to the Privacy Act and this part, DA records may be subject to computer...

  6. 77 FR 13388 - Treasury Inspector General for Tax Administration; Privacy Act of 1974: Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-06

    ...: Computer Matching Program AGENCY: Treasury Inspector General for Tax Administration, Treasury. ACTION... Internal Revenue Service (IRS) concerning the conduct of TIGTA's computer matching program. DATES... INFORMATION: TIGTA's computer matching program assists in the detection and deterrence of fraud, waste, and...

  7. 78 FR 50146 - Privacy Act of 1974: Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-16

    ... DEPARTMENT OF VETERANS AFFAIRS Privacy Act of 1974: Computer Matching Program AGENCY: Department of Veterans Affairs. ACTION: Notice of Computer Match Program. SUMMARY: Pursuant to 5 U.S.C. 552a... to conduct a computer matching program with the Internal Revenue Service (IRS). Data from the...

  8. 76 FR 47299 - Privacy Act of 1974: Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-04

    ... DEPARTMENT OF VETERANS AFFAIRS Privacy Act of 1974: Computer Matching Program AGENCY: Department of Veterans Affairs. ACTION: Notice of Computer Match Program. SUMMARY: Pursuant to 5 U.S.C. 552a... to conduct a computer matching program with the Internal Revenue Service (IRS). Data from the...

  9. 32 CFR 505.13 - Computer Matching Agreement Program.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 3 2011-07-01 2009-07-01 true Computer Matching Agreement Program. 505.13... AUTHORITIES AND PUBLIC RELATIONS ARMY PRIVACY ACT PROGRAM § 505.13 Computer Matching Agreement Program. (a) General provisions. (1) Pursuant to the Privacy Act and this part, DA records may be subject to computer...

  10. 32 CFR 310.53 - Computer matching agreements (CMAs).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 2 2013-07-01 2013-07-01 false Computer matching agreements (CMAs). 310.53... (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.53 Computer.... (3) Justification and expected results. Explain why computer matching as opposed to some other...

  11. 32 CFR 310.53 - Computer matching agreements (CMAs).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 2 2014-07-01 2014-07-01 false Computer matching agreements (CMAs). 310.53... (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.53 Computer.... (3) Justification and expected results. Explain why computer matching as opposed to some other...

  12. 32 CFR 310.53 - Computer matching agreements (CMAs).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 2 2012-07-01 2012-07-01 false Computer matching agreements (CMAs). 310.53... (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.53 Computer.... (3) Justification and expected results. Explain why computer matching as opposed to some other...

  13. 32 CFR 310.53 - Computer matching agreements (CMAs).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 2 2010-07-01 2010-07-01 false Computer matching agreements (CMAs). 310.53... (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.53 Computer.... (3) Justification and expected results. Explain why computer matching as opposed to some other...

  14. 32 CFR 310.53 - Computer matching agreements (CMAs).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 2 2011-07-01 2011-07-01 false Computer matching agreements (CMAs). 310.53... (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.53 Computer.... (3) Justification and expected results. Explain why computer matching as opposed to some other...

  15. Computer programs for computing particle-size statistics of fluvial sediments

    USGS Publications Warehouse

    Stevens, H.H.; Hubbell, D.W.

    1986-01-01

    Two versions of computer programs for inputting data and computing particle-size statistics of fluvial sediments are presented. The FORTRAN 77 language versions are for use on the Prime computer, and the BASIC language versions are for use on microcomputers. The size-statistics program computes Inman, Trask, and Folk statistical parameters from phi values and sizes determined for 10 specified percent-finer values from the input size and percent-finer data. The program also determines the percentages of gravel, sand, silt, and clay, and the Meyer-Peter effective diameter. Documentation and listings for both versions of the programs are included. (Author's abstract)
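
    Assuming the usual textbook definitions of the Inman, Folk, and Trask graphic measures (an assumption; the USGS program may use slightly different variants), the computation from phi percentiles is short:

      /* Classical graphic size statistics from phi values at fixed
         percent-finer points.  Formulas follow the common Inman and
         Folk definitions; the percentile values are illustrative. */
      #include <stdio.h>
      #include <math.h>

      int main(void) {
          /* phi diameters at 5, 16, 25, 50, 75, 84, 95 percent points */
          double p5 = 0.4, p16 = 0.9, p25 = 1.1, p50 = 1.6,
                 p75 = 2.1, p84 = 2.4, p95 = 2.9;

          double inman_median = p50;
          double inman_mean   = (p16 + p84) / 2.0;
          double inman_sigma  = (p84 - p16) / 2.0;

          double folk_mean  = (p16 + p50 + p84) / 3.0;
          double folk_sigma = (p84 - p16) / 4.0 + (p95 - p5) / 6.6;

          /* Trask sorting coefficient uses millimeter quartiles,
             d(mm) = 2^(-phi), with the coarser quartile on top. */
          double q1 = pow(2.0, -p25), q3 = pow(2.0, -p75);
          double trask_so = sqrt(q1 / q3);

          printf("Inman: Md=%.2f M=%.2f sigma=%.2f phi\n",
                 inman_median, inman_mean, inman_sigma);
          printf("Folk:  Mz=%.2f sigma_I=%.2f phi\n", folk_mean, folk_sigma);
          printf("Trask: So=%.2f\n", trask_so);
          return 0;
      }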

  16. NEMAR plotting computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program which generates CalComp plots of trajectory parameters is examined. The trajectory parameters are calculated and placed on a data file by the Near Earth Mission Analysis Routine computer program. The plot program accesses the data file and generates the plots as defined by inputs to the plot program. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included. Although this plot program utilizes a random access data file, a data file of the same type and formatted in 102 numbers per record could be generated by any computer program and used by this plot program.
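
    Per the abstract, a compatible data file just needs fixed records of 102 numbers. Here is a minimal sketch in C (the original used a FORTRAN random-access file; the layout details here are illustrative only):

      /* Write a trajectory file in the fixed layout the abstract
         mentions: 102 numbers per record.  Field meanings invented. */
      #include <stdio.h>

      #define VALS_PER_RECORD 102

      int main(void) {
          FILE *f = fopen("traj.dat", "wb");
          if (!f) return 1;
          double rec[VALS_PER_RECORD] = {0};
          for (int t = 0; t < 10; t++) {     /* ten time points        */
              rec[0] = t * 0.5;              /* e.g., time in seconds  */
              rec[1] = 100.0 * t;            /* e.g., altitude         */
              /* remaining slots hold the other trajectory parameters */
              fwrite(rec, sizeof(double), VALS_PER_RECORD, f);
          }
          fclose(f);
          /* Record k can later be read directly by seeking to
             k * VALS_PER_RECORD * sizeof(double), which is what makes
             a fixed record length convenient for random access. */
          return 0;
      }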

  17. User's guide to the NOZL3D and NOZLIC computer programs

    NASA Technical Reports Server (NTRS)

    Thomas, P. D.

    1980-01-01

    Complete FORTRAN listings and running instructions are given for a set of computer programs that perform an implicit numerical solution to the unsteady Navier-Stokes equations to predict the flow characteristics and performance of nonaxisymmetric nozzles. The set includes the NOZL3D program, which performs the flow computations; the NOZLIC program, which sets up the flow field initial conditions for general nozzle configurations, and also generates the computational grid for simple two dimensional and axisymmetric configurations; and the RGRIDD program, which generates the computational grid for complicated three dimensional configurations. The programs are designed specifically for the NASA-Langley CYBER 175 computer, and employ auxiliary disk files for primary data storage. Input instructions and computed results are given for four test cases that include two dimensional, three dimensional, and axisymmetric configurations.

  18. OASIS connections: results from an evaluation study.

    PubMed

    Czaja, Sara J; Lee, Chin Chin; Branham, Janice; Remis, Peggy

    2012-10-01

    The objectives of this study were to evaluate a community-based basic computer and Internet training program designed for older adults, provide recommendations for program refinement, and gather preliminary information on program sustainability. The program was developed by the OASIS Institute, a nonprofit agency serving older adults, and implemented in 4 cities by community trainers across the United States. One hundred and ninety-six adults aged 40-90 years were assigned to the training or a wait-list control group. Knowledge of computers and the Internet, attitudes toward computers, and computer/Internet use were assessed at baseline, posttraining, and 3 months posttraining. The program was successful in increasing the computer/Internet skills of the trainees. The data indicated a significant increase in computer and Internet knowledge and comfort with computers among those who received the training. Further, those who completed the course reported an increase in both computer and Internet use 3 months posttraining. The findings indicate that a community-based computer and Internet training program delivered by community instructors can be effective in increasing computer and Internet skills and comfort with computer technology among older adults.

  19. Introducing Hospital Staff to Computer Concepts: An Educational Program

    PubMed Central

    Kaplan, Bonnie

    1981-01-01

    An in-house computer education program for hospital staff ran for two years at a large, metropolitan hospital. The program drew physicians, administrators, department heads, secretaries, technicians, and data managers to courses, seminars, and workshops on medical computing. Two courses, an introduction to computer concepts and a programming course, are described and evaluated.

  20. 78 FR 29748 - Privacy Act of 1974; Computer Matching Program Between the Department of Education and the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-21

    ... DEPARTMENT OF EDUCATION Privacy Act of 1974; Computer Matching Program Between the Department of... document provides notice of the continuation of a computer matching program between the Department of... 5301, the Department of Justice and the Department of Education implemented a computer matching program...

  1. 75 FR 53005 - Privacy Act of 1974, as amended; Notice of Computer Matching Program (Railroad Retirement Board...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-30

    ... notice of its renewal of an ongoing computer-matching program with the Social Security Administration... computer-matching program with the Committee on Homeland Security and Governmental Affairs of the Senate... RAILROAD RETIREMENT BOARD Privacy Act of 1974, as amended; Notice of Computer Matching Program...

  2. 78 FR 34678 - Privacy Act of 1974, as Amended; Notice of Computer Matching Program (Railroad Retirement Board...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-10

    ... notice of its renewal of an ongoing computer-matching program with the Social Security Administration... computer-matching program with the Committee on Homeland Security and Governmental Affairs of the Senate... RAILROAD RETIREMENT BOARD Privacy Act of 1974, as Amended; Notice of Computer Matching Program...

  3. 75 FR 8311 - Privacy Act of 1974; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-24

    ...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, DoD. ACTION: Notice of a... hereby giving notice to the record subjects of a computer matching program between the Department of... conduct a computer matching program between the agencies. The purpose of this agreement is to verify an...

  4. 76 FR 50198 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-12

    ... DEPARTMENT OF EDUCATION Privacy Act of 1974; Computer Matching Program AGENCY: Office of the Inspector General, U.S. Department of Education. ACTION: Notice of computer matching between the U.S... conduct of computer matching programs, notice is hereby given of the establishment of a computer matching...

  5. Use of CYBER 203 and CYBER 205 computers for three-dimensional transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Melson, N. D.; Keller, J. D.

    1983-01-01

    Experiences are discussed in modifying two three-dimensional transonic flow computer programs (FLO 22 and FLO 27) for use on the CDC CYBER 203 computer system. Both programs were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine: leaving the program in scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, vectorizing parts of the existing algorithm in the program, and incorporating a vectorizable algorithm (ZEBRA I or ZEBRA II) in the program. Comparison runs of the programs were made on CDC CYBER 175, CYBER 203, and two-pipe CDC CYBER 205 computer systems.

  6. A Microcomputer-Based Computer Science Program.

    ERIC Educational Resources Information Center

    Compeau, Larry D.

    1984-01-01

    Examines the use of the microcomputer in computer science programs as an alternative to time-sharing computers at North Country Community College. Discusses factors contributing to the program's success, security problems, outside application possibilities, and program implementation concerns. (DMM)

  7. NASA's computer science research program

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1983-01-01

    Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.

  8. Identification of Cognitive Processes of Effective and Ineffective Students during Computer Programming

    ERIC Educational Resources Information Center

    Renumol, V. G.; Janakiram, Dharanipragada; Jayaprakash, S.

    2010-01-01

    Identifying the set of cognitive processes (CPs) a student can go through during computer programming is an interesting research problem. It can provide a better understanding of the human aspects in computer programming process and can also contribute to the computer programming education in general. The study identified the presence of a set of…

  9. A Program To Develop through LOGO the Computer Self-Confidence of Seventh Grade Low-Achieving Girls.

    ERIC Educational Resources Information Center

    Angell, Marion D.

    This practicum report describes the development of a program designed to improve self-confidence in low-achieving seventh grade girls towards computers. The questionnaire "My Feelings Towards Computers" was used for pre- and post-comparisons. Students were introduced to the computer program LOGO, were taught to compose programs using the…

  10. Advanced Computer Aids in the Planning and Execution of Air Warfare and Ground Strike Operations: Conference Proceedings, Meeting of the Avionics Panels of AGARD (51st) Held in Kongsberg, Norway on 12-16 May 1986

    DTIC Science & Technology

    1986-02-01

    the area of Artificial Intelligence (AI). DARPA's Strategic Computing Program is developing an AI technology base upon which several applications...technologies with the Strategic Computing Program. In late 1983 the Strategic Computing Program (SCP) was announced. The program was organized to develop...solving a resource allocation problem. The remainder of this paper will discuss the TEMPLAR program as it relates to the Strategic Computing Program

  11. Use of a Computer Language in Teaching Dynamic Programming. Final Report.

    ERIC Educational Resources Information Center

    Trimble, C. J.; And Others

    Most optimization problems of any degree of complexity must be solved using a computer. In the teaching of dynamic programing courses, it is often desirable to use a computer in problem solution. The solution process involves conceptual formulation and computational solution. Generalized computer codes for dynamic programing problem solution…

  12. A Computer Program for Crystal Drawing.

    ERIC Educational Resources Information Center

    Dutch, Steven I.

    1981-01-01

    Described is a computer program which accepts face data, performs all necessary symmetry operations, and produces a drawing of the resulting crystal. The program shortens computing time to make it suitable for online teaching use or for use in small computers. (Author/DC)

  13. Programming Support Library (PSL). Users Manual.

    DTIC Science & Technology

    1978-05-01

    which provides the tools to organize, implement, and control computer program development. This involves the support of the actual programming process...provides the tools to organize, implement, and control computer program development. The system is designed specifically to support top-down development...Structured Programming are finding increasing application in the computing community. Structured programs are, however, difficult to write in

  14. Enhancing Digital Fluency through a Training Program for Creative Problem Solving Using Computer Programming

    ERIC Educational Resources Information Center

    Kim, SugHee; Chung, KwangSik; Yu, HeonChang

    2013-01-01

    The purpose of this paper is to propose a training program for creative problem solving based on computer programming. The proposed program will encourage students to solve real-life problems through a creative thinking spiral related to cognitive skills with computer programming. With the goal of enhancing digital fluency through this proposed…

  15. Introduction to the Atari Computer. A Program Written in the Pilot Programming Language.

    ERIC Educational Resources Information Center

    Schlenker, Richard M.

    Designed to be an introduction to the Atari microcomputers for beginners, the interactive computer program listed in this document is written in the Pilot programing language. Instructions are given for entering and storing the program in the computer memory for use by students. (MES)

  16. Computer Science 205. Interim Guide, 1983.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education, Winnipeg.

    This guide to a 4-unit, required high school computer science course emphasizes problem solving and computer programming and is designed for use with a variety of hardware configurations and programming languages. An overview covers the program rationale, goals and objectives, program design and description, program implementation, time allotment,…

  17. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 16 2013-07-01 2013-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...

  18. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 16 2014-07-01 2014-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...

  19. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 16 2012-07-01 2012-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...

  20. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...

  1. Low-Budget Computer Programming in Your School (An Alternative to the Cost of Large Computers). Illinois Series on Educational Applications of Computers. No. 14.

    ERIC Educational Resources Information Center

    Dennis, J. Richard; Thomson, David

    This paper is concerned with a low cost alternative for providing computer experience to secondary school students. The brief discussion covers the programmable calculator and its relevance for teaching the concepts and the rudiments of computer programming and for computer problem solving. A list of twenty-five programming activities related to…

  2. Simulation of the stress computation in shells

    NASA Technical Reports Server (NTRS)

    Salama, M.; Utku, S.

    1978-01-01

    A self-teaching computer program is described, whereby the stresses in thin shells can be computed with good accuracy using the best fit approach. The program is designed for use in interactive game mode to allow the structural engineer to learn about (1) the major sources of difficulties and associated errors in the computation of stresses in thin shells, (2) possible ways to reduce the errors, and (3) trade-off between computational cost and accuracy. Included are derivation of the computational approach, program description, and several examples illustrating the program usage.

  3. A modular finite-element model (MODFE) for areal and axisymmetric ground-water-flow problems, Part 3: Design philosophy and programming details

    USGS Publications Warehouse

    Torak, L.J.

    1993-01-01

    A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water-flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desk-top personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into a complete coefficient just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. Computational aspects of changing stresses and boundary conditions with time and of mass-balance and error terms are given for each hydrologic feature. Program variables are listed and defined according to their occurrence in the main programs and in subroutines. Listings of the main programs and subroutines are given.
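
    To make the symmetry-and-sparseness storage idea concrete, here is a minimal C sketch (not MODFE's actual scheme, and MODFE itself is Fortran) of keeping one triangle of a symmetric banded coefficient matrix in a single-subscripted array:

      /* Band storage of a symmetric matrix: keep only the lower
         triangle within the half-bandwidth, in a 1-D array.  Sizes
         and the indexing layout are illustrative. */
      #include <stdio.h>

      #define N 6   /* number of equations  */
      #define B 2   /* half-bandwidth       */

      /* entry (i, j), j <= i and i - j <= B, lives at a computed
         offset in the single-subscripted array a[]. */
      static double a[N * (B + 1)];

      static double *entry(int i, int j) {
          return &a[i * (B + 1) + (j - i + B)];
      }

      int main(void) {
          *entry(3, 3) = 4.0;     /* diagonal term                     */
          *entry(3, 2) = -1.0;    /* stored once; (2,3) follows by     */
                                  /* symmetry and is never stored      */
          printf("A[3][2] = A[2][3] = %g\n", *entry(3, 2));
          printf("storage: %d doubles instead of %d\n",
                 N * (B + 1), N * N);
          return 0;
      }

    Only N*(B+1) entries are kept instead of N*N, which is the kind of saving in computer storage the abstract refers to.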

  4. Designing and Creating Computer-Assisted Instruction.

    ERIC Educational Resources Information Center

    McMeen, George R.

    Designed to encourage the use of a defined methodology and careful planning in creating computer-assisted instructional programs, this paper describes the instructional design process, compares computer-assisted instruction (CAI) and programmed instruction (PI), and discusses pragmatic concerns in computer programming. Topics addressed include:…

  5. Program For Generating Interactive Displays

    NASA Technical Reports Server (NTRS)

    Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl

    1991-01-01

    Sun/Unix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. Plus viewed as productivity tool for application developers and application end users, who benefit from resultant consistent and well-designed user interface sheltering them from intricacies of computer. Available in form suitable for following six different groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running AUX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS and IBM RT/PC and PS/2 compute

  6. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

    DOEpatents

    Gschwind, Michael K

    2013-04-16

    Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.
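
    The key structural idea, storing scalar values in the vector register file as vectors with the value replicated into every element, can be illustrated in plain C (a sketch of the concept only, not the patented ISA):

      /* A scalar operand is "splatted" into a full vector so that a
         single SIMD unit handles both scalar and vector arithmetic. */
      #include <stdio.h>

      #define W 4                          /* vector width */
      typedef struct { double e[W]; } vreg;

      static vreg splat(double s) {        /* scalar in the vector file */
          vreg v;
          for (int i = 0; i < W; i++) v.e[i] = s;
          return v;
      }

      static vreg vmul(vreg a, vreg b) {   /* one element-wise multiply */
          vreg r;
          for (int i = 0; i < W; i++) r.e[i] = a.e[i] * b.e[i];
          return r;
      }

      int main(void) {
          vreg x = {{1.0, 2.0, 3.0, 4.0}};
          vreg y = vmul(x, splat(2.5));    /* scalar * vector, no
                                              separate scalar unit */
          for (int i = 0; i < W; i++) printf("%g ", y.e[i]);
          printf("\n");
          return 0;
      }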

  7. Causal Attributions of Success and Failure Made by Undergraduate Students in an Introductory-Level Computer Programming Course

    ERIC Educational Resources Information Center

    Hawi, N.

    2010-01-01

    The purpose of this research is to identify the causal attributions of business computing students in an introductory computer programming course, in the computer science department at Notre Dame University, Louaize. Forty-five male and female undergraduates who completed the computer programming course that extended for a 13-week semester…

  8. SuperPILOT: A Comprehensive Computer-Assisted Instruction Programming Language for the Apple II Computer.

    ERIC Educational Resources Information Center

    Falleur, David M.

    This presentation describes SuperPILOT, an extended version of Apple PILOT, a programming language for developing computer-assisted instruction (CAI) with the Apple II computer that includes the features of its early PILOT (Programmed Inquiry, Learning or Teaching) ancestors together with new features that make use of the Apple computer's advanced…

  9. DORMAN computer program (study 2.5). Volume 2: User's guide and programmer's guide. [development of data bank for computerized information storage of NASA programs

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1973-01-01

    The DORMAN program was developed to create and modify a data bank containing data decks which serve as input to the DORCA computer program. Via a remote terminal a user can access the bank, extract any data deck, modify that deck, output the modified deck to be input to the DORCA program, and save the modified deck in the data bank. This computer program thus assists in the utilization of the DORCA program. The program is dimensionless and operates almost entirely in integer mode. The program was developed on the CDC 6400/7600 complex for implementation on a UNIVAC 1108 computer.

  10. Computer programs for thermodynamic and transport properties of hydrogen

    NASA Technical Reports Server (NTRS)

    Hall, W. J.; Mc Carty, R. D.; Roder, H. M.

    1968-01-01

    Computer program subroutines provide the thermodynamic and transport properties of hydrogen in tabular form. The programs provide 18 combinations of input and output variables. This program is written in FORTRAN 4 for use on the IBM 7044 or CDC 3600 computers.

  11. The Compubus Evaluation.

    ERIC Educational Resources Information Center

    Klimko, Ivan P.

    The Computer Bus Program (Compubus Program) of the Modesto City Schools (California) was evaluated to determine the effectiveness of the computer literacy program for fourth, fifth, and sixth grade students at eight elementary compensatory education schools. The program provided instruction on computer vocabulary, knowledge, and applications in a…

  12. Technique to eliminate computational instability in multibody simulations employing the Lagrange multiplier

    NASA Technical Reports Server (NTRS)

    Watts, G.

    1992-01-01

    A programming technique to eliminate computational instability in multibody simulations that use the Lagrange multiplier is presented. The computational instability occurs when the attached bodies drift apart and violate the constraints. The programming technique uses the constraint equation, instead of integration, to determine the coordinates that are not independent. Although the equations of motion are unchanged, a complete derivation of the incorporation of the Lagrange multiplier into the equation of motion for two bodies is presented. A listing of a digital computer program which uses the programming technique to eliminate computational instability is also presented. The computer program simulates a solid rocket booster and parachute connected by a frictionless swivel.
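
    The drift-elimination idea fits in a short sketch: instead of integrating every coordinate and letting numerical error accumulate in the constraint, the dependent coordinates are recomputed from the constraint equation after each step. The Python fragment below uses assumed point-mass dynamics; it is not the report's booster/parachute model.

    ```python
    # Sketch: body 2's joint coordinates are never integrated; each step they
    # are reset from the constraint x2 = x1, so the bodies cannot drift apart.
    import numpy as np

    dt = 0.01
    x1, v1 = np.zeros(2), np.array([1.0, 0.0])  # body 1 (carries the joint)
    x2, v2 = np.zeros(2), np.array([1.0, 0.0])  # body 2 (dependent coordinates)
    g = np.array([0.0, -9.81])                  # assumed external force per mass

    for step in range(1000):
        v1 += g * dt
        x1 += v1 * dt
        v2 += g * dt
        # Constraint enforcement: the dependent coordinates come from the
        # constraint equation, not from integration.
        x2 = x1.copy()

    print(np.linalg.norm(x2 - x1))  # exactly 0.0: no constraint violation
    ```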

  13. ICASE Computer Science Program

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The Institute for Computer Applications in Science and Engineering computer science program is discussed in outline form. Information is given on such topics as problem decomposition, algorithm development, programming languages, and parallel architectures.

  14. Human operator identification model and related computer programs

    NASA Technical Reports Server (NTRS)

    Kessler, K. M.; Mohr, J. N.

    1978-01-01

    Four computer programs which provide computational assistance in the analysis of man/machine systems are reported. The programs are: (1) Modified Transfer Function Program (TF); (2) Time Varying Response Program (TVSR); (3) Optimal Simulation Program (TVOPT); and (4) Linear Identification Program (SCIDNT). The TF program converts the time-domain state-variable system representation to a frequency-domain transfer-function system representation. The TVSR program computes time histories of the input/output responses of the human operator model. The TVOPT program is an optimal simulation program and is similar to TVSR in that it produces time histories of system states associated with an operator-in-the-loop system. The differences between the two programs are presented. The SCIDNT program is an open loop identification code which operates on the simulated data from TVOPT (or TVSR) or real operator data from motion simulators.

  15. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 2 2012-07-01 2012-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify the...

  16. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 2 2014-07-01 2014-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify the...

  17. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 2 2013-07-01 2013-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify the...

  18. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 2 2010-07-01 2010-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify the...

  19. 32 CFR 310.52 - Computer matching publication and review requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 2 2011-07-01 2011-07-01 false Computer matching publication and review... OF DEFENSE (CONTINUED) PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review requirements. (a) DoD Components shall identify the...

  20. Program Aids Specification Of Multiple-Block Grids

    NASA Technical Reports Server (NTRS)

    Sorenson, R. L.; Mccann, K. M.

    1993-01-01

    3DPREP computer program aids specification of multiple-block computational grids. Highly interactive graphical preprocessing program designed for use on powerful graphical scientific computer workstation. Divided into three main parts, each corresponding to principal graphical-and-alphanumerical display. Relieves user of some burden of collecting and formatting many data needed to specify blocks and grids, and prepares input data for NASA's 3DGRAPE grid-generating computer program.

  1. Data Processing: Fifteen Suggestions for Computer Training in Your Business Education Classes.

    ERIC Educational Resources Information Center

    Barr, Lowell L.

    1980-01-01

    Presents 15 suggestions for training business education students in the use of computers. Suggestions involve computer language, method of presentation, laboratory time, programing assignments, instructions and handouts, problem solving, deadlines, reviews, programming concepts, programming logic, documentation, and defensive programming. (CT)

  2. 78 FR 15730 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... 1974; Computer Matching Program AGENCY: U.S. Citizenship and Immigration Services, Department of... Matching Program between the Department of Homeland Security, U.S. Citizenship and Immigration Services and... computer matching program between the Department of Homeland Security, U.S. Citizenship and Immigration...

  3. High performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee

    1992-01-01

    A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.

  4. Algorithms for computations of Loday algebras' invariants

    NASA Astrophysics Data System (ADS)

    Hussain, Sharifah Kartini Said; Rakhimov, I. S.; Basri, W.

    2017-04-01

    The paper is devoted to applications of computer programs to the structural study of Loday algebras. We show how these programs can be applied to compute various invariants of Loday algebras, and we provide several Maple programs to verify Loday algebra identities, to test isomorphisms between the algebras and, as a special case, to describe the automorphism groups, centroids and derivations.

  5. The Effects of a Robot Game Environment on Computer Programming Education for Elementary School Students

    ERIC Educational Resources Information Center

    Shim, Jaekwoun; Kwon, Daiyoung; Lee, Wongyu

    2017-01-01

    In the past, computer programming was perceived as a task only carried out by computer scientists; in the 21st century, however, computer programming is viewed as a critical and necessary skill that everyone should learn. In order to improve teaching of problem-solving abilities in a computing environment, extensive research is being done on…

  6. Processing Device for High-Speed Execution of an Xrisc Computer Program

    NASA Technical Reports Server (NTRS)

    Ng, Tak-Kwong (Inventor); Mills, Carl S. (Inventor)

    2016-01-01

    A processing device for high-speed execution of a computer program is provided. A memory module may store one or more computer programs. A sequencer may select one of the computer programs and controls execution of the selected program. A register module may store intermediate values associated with a current calculation set, a set of output values associated with a previous calculation set, and a set of input values associated with a subsequent calculation set. An external interface may receive the set of input values from a computing device and provides the set of output values to the computing device. A computation interface may provide a set of operands for computation during processing of the current calculation set. The set of input values are loaded into the register and the set of output values are unloaded from the register in parallel with processing of the current calculation set.
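
    A rough software analogue of the claimed overlap, with assumed bank behavior; real hardware performs the three activities concurrently, while the sketch serializes them for clarity.

    ```python
    # Sketch: three register banks rotate roles so that loading the next input
    # set and unloading the previous output set overlap the current calculation.
    def process(calc_sets):
        results = []
        stream = iter(calc_sets)
        input_bank = next(stream, None)   # inputs for the next calculation set
        work_bank = None                  # values for the current set
        output_bank = None                # outputs of the previous set
        while input_bank is not None or work_bank is not None:
            if output_bank is not None:
                results.append(output_bank)                       # unload outputs
            work_bank, input_bank = input_bank, next(stream, None)  # load inputs
            output_bank = sum(work_bank) if work_bank is not None else None
        if output_bank is not None:
            results.append(output_bank)
        return results

    print(process([[1, 2], [3, 4], [5, 6]]))  # [3, 7, 11]
    ```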

  7. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    NASA Astrophysics Data System (ADS)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    Modern laptop and personal computers can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
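
    The client side of such a service call can be sketched briefly; the endpoint URL and operation name below are hypothetical illustrations, not the actual SCEC/CME interface.

    ```python
    # Hedged sketch: POSTing a SOAP-style XML request to a hypothetical
    # coordinate-conversion web service and printing the XML reply.
    import urllib.request

    SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <ConvertUTMToLatLon>              <!-- hypothetical operation -->
          <easting>500000.0</easting>
          <northing>3762155.0</northing>
          <zone>11</zone>
        </ConvertUTMToLatLon>
      </soap:Body>
    </soap:Envelope>"""

    req = urllib.request.Request(
        "http://services.example.org/coordconvert",  # hypothetical endpoint
        data=SOAP_BODY.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))           # XML response with lat/lon
    ```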

  8. Structured Problem Solving and the Basic Graphic Methods within a Total Quality Leadership Setting: Case Study

    DTIC Science & Technology

    1992-02-01

    …develops, and maintains computer programs for the Department of the Navy. It provides life cycle support for over 50 computer programs installed at over… the computer programs. Table 4 presents a list of possible product or output measures of functionality for ACDS Block 0 programs. Examples of output… were identified as important "causes" of process performance. Functionality of the computer programs was the result or "effect" of the combination of…

  9. Space shuttle solid rocket booster recovery system definition. Volume 3: SRB water impact loads computer program, user's manual

    NASA Technical Reports Server (NTRS)

    1973-01-01

    This user's manual describes the FORTRAN IV computer program developed to compute the total vertical load, normal concentrated pressure loads, and the center of pressure of typical SRB water impact slapdown pressure distributions specified in the baseline configuration. The program prepares the concentrated pressure load information in punched card format suitable for input to the STAGS computer program. In addition, the program prepares for STAGS input the inertia reacting loads to the slapdown pressure distributions.
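
    The core quantities reduce to sums over the discretized pressure distribution: total vertical load F = Σ p_i·A_i and center of pressure x_cp = Σ x_i·p_i·A_i / F. A short illustration with assumed station data (not the baseline SRB distributions):

    ```python
    # Sketch: total load and center of pressure from a discretized slapdown
    # pressure distribution along the booster axis.
    import numpy as np

    x = np.linspace(0.0, 10.0, 11)               # station locations (m)
    p = np.maximum(0.0, 5.0 - np.abs(x - 5.0))   # assumed pressure profile (kPa)
    A = np.full_like(x, 0.5)                     # panel area per station (m^2)

    F = np.sum(p * A)                 # total vertical load (kN)
    x_cp = np.sum(x * p * A) / F      # center of pressure (m)
    print(F, x_cp)                    # 12.5 5.0
    ```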

  10. Computer programs for forward and inverse modeling of acoustic and electromagnetic data

    USGS Publications Warehouse

    Ellefsen, Karl J.

    2011-01-01

    A suite of computer programs was developed by U.S. Geological Survey personnel for forward and inverse modeling of acoustic and electromagnetic data. This report describes the computer resources that are needed to execute the programs, the installation of the programs, the program designs, some tests of their accuracy, and some suggested improvements.

  11. Scratch: Multimedia Programming Environment for Young Gifted Learners

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2011-01-01

    Despite the educational benefits, computer programming has not been adopted in the current K-12 education as much as it could have been. One of the reasons for the low adoption of computer programming in K-12 education is the time it takes for (especially young) students to learn computer programming using a text-based programming language, which…

  12. Enhancing programming logic thinking using analogy mapping

    NASA Astrophysics Data System (ADS)

    Sukamto, R. A.; Megasari, R.

    2018-05-01

    Programming logic thinking is the most important competence for computer science students. However, programming is one of the most difficult subjects in a computer science program. This paper reports our work on enhancing students' programming logic thinking using Analogy Mapping in a basic programming subject. Analogy Mapping is a computer application which converts source code into analogy images. This research used a time-series evaluation, and the results showed that Analogy Mapping can enhance students' programming logic thinking.

  13. Software Reviews. Programs Worth a Second Look.

    ERIC Educational Resources Information Center

    Schneider, Roxanne; Eiser, Leslie

    1989-01-01

    Reviewed are three computer software packages for use in middle/high school classrooms. Included are "MacWrite II," a word-processing program for Macintosh computers; "Super Story Tree," a word-processing program for Apple and IBM computers; and "Math Blaster Mystery," for IBM, Apple, and Tandy computers. (CW)

  14. An Interactive Computer-Based Training Program for Beginner Personal Computer Maintenance.

    ERIC Educational Resources Information Center

    Summers, Valerie Brooke

    A computer-assisted instructional program, which was developed for teaching beginning computer maintenance to employees of Unisys, covered external hardware maintenance, proper diskette care, making software backups, and electro-static discharge prevention. The procedure used in developing the program was based upon the Dick and Carey (1985) model…

  15. A Review of Models for Teacher Preparation Programs for Precollege Computer Science Education.

    ERIC Educational Resources Information Center

    Deek, Fadi P.; Kimmel, Howard

    2002-01-01

    Discusses the need for adequate precollege computer science education and focuses on the issues of teacher preparation programs and requirements needed to teach high school computer science. Presents models of teacher preparation programs and compares state requirements with Association for Computing Machinery (ACM) recommendations. (Author/LRW)

  16. DYNALIST II : A Computer Program for Stability and Dynamic Response Analysis of Rail Vehicle Systems : Volume 2. User's Manual.

    DOT National Transportation Integrated Search

    1975-02-01

    A methodology and a computer program, DYNALIST II, have been developed for computing the response of rail vehicle systems to sinusoidal or stationary random rail irregularities. The computer program represents an extension of the earlier DYNALIST pro...

  17. 78 FR 45513 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-29

    ...; Computer Matching Program AGENCY: Defense Manpower Data Center (DMDC), DoD. ACTION: Notice of a Computer... individual's privacy, and would result in additional delay in determining eligibility and, if applicable, the... Defense. NOTICE OF A COMPUTER MATCHING PROGRAM AMONG THE DEFENSE MANPOWER DATA CENTER, THE DEPARTMENT OF...

  18. 76 FR 1410 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-10

    ...; Computer Matching Program AGENCY: Defense Manpower Data Center (DMDC), DoD. ACTION: Notice of a Computer... administrative burden, constitute a greater intrusion of the individual's privacy, and would result in additional... Liaison Officer, Department of Defense. Notice of a Computer Matching Program Among the Defense Manpower...

  19. DYNALIST II : A Computer Program for Stability and Dynamic Response Analysis of Rail Vehicle Systems : Volume 1. Technical Report.

    DOT National Transportation Integrated Search

    1975-02-01

    A methodology and a computer program, DYNALIST II, have been developed for computing the response of rail vehicle systems to sinusoidal or stationary random rail irregularities. The computer program represents an extension of the earlier DYNALIST pro...

  20. A Research Program in Computer Technology. 1986 Annual Technical Report

    DTIC Science & Technology

    1989-08-01

    Annual Technical Report, July 1985 - June 1986: A Research Program in Computer Technology (ISI/SR-87-178), USC Information Sciences Institute. Keywords: survivable networks; distributed processing; local networks; personal computers; workstation environment; computer acquisition; Strategic Computing.

  1. FORTRAN manpower account program

    NASA Technical Reports Server (NTRS)

    Strand, J. N.

    1972-01-01

    Computer program for determining manpower costs for full time, part time, and contractor personnel is discussed. Twelve different tables resulting from computer output are described. Program is written in FORTRAN 4 for IBM 360/65 computer.

  2. Graphics and composite material computer program enhancements for SPAR

    NASA Technical Reports Server (NTRS)

    Farley, G. L.; Baker, D. J.

    1980-01-01

    User documentation is provided for additional computer programs developed for use in conjunction with SPAR. These programs plot digital data, simplify input for composite material section properties, and compute lamina stresses and strains. Sample problems are presented including execution procedures, program input, and graphical output.

  3. Design and performance analysis of solid-propellant rocket motors using a simplified computer program

    NASA Technical Reports Server (NTRS)

    Sforzini, R. H.

    1972-01-01

    An analysis and a computer program are presented which represent a compromise between the more sophisticated programs using precise burning geometric relations and the textbook type of solutions. The program requires approximately 900 computer cards including a set of 20 input data cards required for a typical problem. The computer operating time for a single configuration is approximately 1 minute and 30 seconds on the IBM 360 computer. About 1 minute and 15 seconds of the time is compilation time so that additional configurations input at the same time require approximately 15 seconds each. The program uses approximately 11,000 words on the IBM 360. The program is written in FORTRAN 4 and is readily adaptable for use on a number of different computers: IBM 7044, IBM 7094, and Univac 1108.

  4. Improved sonic-box computer program for calculating transonic aerodynamic loads on oscillating wings with thickness

    NASA Technical Reports Server (NTRS)

    Ruo, S. Y.

    1978-01-01

    A computer program was developed to account approximately for the effects of finite wing thickness in transonic potential flow over an oscillating wing of finite span. The program is based on the original sonic-box computer program for planar wings, which was extended to account for the effect of wing thickness. Computational efficiency and accuracy were improved, and swept trailing edges were accounted for. The nonuniform flow caused by finite thickness is accounted for by applying the local linearization concept with an appropriate coordinate transformation. A brief description of each computer routine and of the cubic spline and spline-surface data-fitting techniques used in the program is given, and the method of input is shown in detail. Sample calculations as well as a complete listing of the computer program are presented.

  5. Numerical nonlinear inelastic analysis of stiffened shells of revolution. Volume 3: Engineer's program manual for STARS-2P digital computer program

    NASA Technical Reports Server (NTRS)

    Svalbonas, V.; Levine, H.; Ogilvie, P.

    1975-01-01

    Engineering programming information is presented for the STARS-2P (shell theory automated for rotational structures-2P (plasticity)) digital computer program; FORTRAN 4 was used in writing the various subroutines. The execution of this program requires the use of thirteen temporary storage units. The program was initially written and debugged on the IBM 370-165 computer and converted to the UNIVAC 1108 computer, where it utilizes approximately 60,000 words of core. Only basic FORTRAN library routines are required by the program: sine, cosine, absolute value, and square root.

  6. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.
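
    The pre-computed-paths construct can be imitated in ordinary array code: the communication (neighbor) pattern is computed once, outside the loop, and reused every step. A loose NumPy analogy follows; aCe is its own language, so this only mirrors the concept.

    ```python
    # Sketch: neighbor-index "paths" computed once, then reused each iteration
    # of a periodic three-point smoothing stencil.
    import numpy as np

    n = 8
    left = np.roll(np.arange(n), 1)    # pre-computed path to left neighbors
    right = np.roll(np.arange(n), -1)  # pre-computed path to right neighbors

    u = np.random.rand(n)
    for _ in range(100):
        # Each step reuses the stored paths; no index arithmetic is repeated.
        u = 0.25 * u[left] + 0.5 * u + 0.25 * u[right]
    print(u)
    ```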

  7. Guidelines for developing vectorizable computer programs

    NASA Technical Reports Server (NTRS)

    Miner, E. W.

    1982-01-01

    Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
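
    The central guideline translates directly into code: replace element-at-a-time loops with whole-array operations over contiguous slices. A NumPy illustration of the same update in both forms, with NumPy standing in here for FORTRAN on an array-oriented machine:

    ```python
    # The same three-point average written scalar-style and array-style.
    import numpy as np

    u = np.random.rand(1000)
    new = np.empty_like(u)

    # Scalar form: element-at-a-time, with loop-carried indexing.
    for i in range(1, len(u) - 1):
        new[i] = 0.5 * (u[i - 1] + u[i + 1])

    # Vectorizable form: one whole-array statement over contiguous slices,
    # the shape a vectorizing compiler (or array library) can exploit.
    new[1:-1] = 0.5 * (u[:-2] + u[2:])
    ```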

  8. MPI implementation of PHOENICS: A general purpose computational fluid dynamics code

    NASA Astrophysics Data System (ADS)

    Simunovic, S.; Zacharia, T.; Baltas, N.; Spalding, D. B.

    1995-03-01

    PHOENICS is a suite of computational analysis programs that are used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using Message Passing Interface (MPI) standard. Implementation of MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high performance computing for large scale computational simulations. MPI libraries are available on several parallel architectures making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. The preliminary testing results of the developed program have shown scalable performance for reasonably sized computational domains.
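
    A hedged sketch of the message-passing pattern such a port relies on, written with mpi4py rather than the PHOENICS/EARTH source: each rank owns a slab of the domain and trades one-cell halos with its neighbors before every solver sweep.

    ```python
    # Sketch: 1-D domain decomposition with halo exchange (one sweep shown).
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    u = np.full(12, float(rank))   # 10 interior cells plus two halo cells

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Exchange halos with neighbors (sends to PROC_NULL are no-ops).
    comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

    u[1:-1] = 0.5 * (u[:-2] + u[2:])   # relax the interior using fresh halos
    ```

    Run with, e.g., `mpiexec -n 4 python halo.py`; a production solver repeats the exchange-and-relax pair until convergence.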

  9. MPI implementation of PHOENICS: A general purpose computational fluid dynamics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, S.; Zacharia, T.; Baltas, N.

    1995-04-01

    PHOENICS is a suite of computational analysis programs that are used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using Message Passing Interface (MPI) standard. Implementation of MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high performance computing for large scale computational simulations. MPI libraries are available on several parallel architectures making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. The preliminary testing results of the developed program have shown scalable performance for reasonably sized computational domains.

  10. Digital computer technique for setup and checkout of an analog computer

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.

    1968-01-01

    Computer program technique, called Analog Computer Check-Out Routine Digitally /ACCORD/, generates complete setup and checkout data for an analog computer. In addition, the correctness of the analog program implementation is validated.

  11. Report on the FY17 Development of Computer Program for ASME Section III, Division 5, Subsection HB, Subpart B Rules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swindeman, M. J.; Jetter, R. I.; Sham, T. -L.

    One of the objectives of the high temperature design methodology activities is to develop and validate both improvements and the basic features of ASME Boiler and Pressure Vessel Code, Section III, Rules for Construction of Nuclear Facility Components, Division 5, High Temperature Reactors, Subsection HB, Subpart B (HBB). The overall scope of this task is to develop a computer program to aid assessment procedures of components under specified loading conditions in accordance with the elevated temperature design requirements for Division 5 Class A components. There are many features and alternative paths of varying complexity in HBB. The initial focus of this computer program is a basic path through the various options for a single reference material, 316H stainless steel. However, the computer program is being structured for eventual incorporation of all of the features and permitted materials of HBB. This report will first provide a description of the overall computer program, particular challenges in developing numerical procedures for the assessment, and an overall approach to computer program development. This is followed by a more comprehensive appendix, which is the draft computer program manual for the program development. The strain limits rules have been implemented in the computer program. The evaluation of creep-fatigue damage will be implemented in future work scope.

  12. The falsification of Chiral Nuclear Forces

    NASA Astrophysics Data System (ADS)

    Ruiz Arriola, E.; Amaro, J. E.; Navarro Perez, R.

    2017-03-01

    Predictive power in theoretical nuclear physics has been a major concern in the study of nuclear structure and reactions. The Effective Field Theory (EFT) based on chiral expansions provides a model independent hierarchy for many body forces at long distances but their predictive power may be undermined by the regularization scheme dependence induced by the counterterms and encoding the short distances dynamics which seem to dominate the uncertainties. We analyze several examples including zero energy NN scattering or perturbative counterterm-free peripheral scattering where one would expect these methods to work best and unveil relevant systematic discrepancies when a fair comparison to the Granada-2013 NN-database and partial wave analysis (PWA) is undertaken. Work supported by Spanish Ministerio de Economia y Competitividad and European FEDER funds (grant FIS2014-59386-P), the Agencia de Innovacion y Desarrollo de Andalucia (grant No. FQM225), the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344, U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Award No. DE-SC0008511 (NUCLEI SciDAC Collaboration)

  13. A renormalization group approach to identifying the local quantum numbers in a many-body localized system

    NASA Astrophysics Data System (ADS)

    Pekker, David; Clark, Bryan K.; Oganesyan, Vadim; Refael, Gil; Tian, Binbin

    Many-body localization is a dynamical phase of matter that is characterized by the absence of thermalization. One of the key characteristics of many-body localized systems is the emergence of a large (possibly maximal) number of local integrals of motion (local quantum numbers) and corresponding conserved quantities. We formulate a robust algorithm for identifying these conserved quantities, based on Wegner's flow equations - a form of the renormalization group that works by disentangling the degrees of freedom of the system as opposed to integrating them out. We test our algorithm by explicit numerical comparison with more engineering-based algorithms - Jacobi rotations and bi-partite matching. We find that the Wegner flow algorithm produces more local conserved quantities and is therefore closer to optimal. A preliminary analysis of the conserved quantities produced by the Wegner flow algorithm reveals the existence of at least two different localization length scales. Work was supported by AFOSR FA9550-10-1-0524 and FA9550-12-1-0057, the Kaufmann foundation, and SciDAC FG02-12ER46875.
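
    The flow equation at the heart of the algorithm, dH/dl = [η, H] with generator η = [diag(H), H], can be demonstrated on a toy matrix. The 4x4 example below is an assumption for illustration, not the paper's many-body implementation.

    ```python
    # Sketch: Euler-integrated Wegner flow driving a small symmetric matrix
    # toward diagonal form, i.e. toward its conserved quantities.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    H = 0.5 * (A + A.T) + np.diag([0.0, 1.0, 2.0, 3.0])

    dl = 1e-3
    for _ in range(20000):
        Hd = np.diag(np.diag(H))
        eta = Hd @ H - H @ Hd             # generator eta = [diag(H), H]
        H = H + dl * (eta @ H - H @ eta)  # Euler step of dH/dl = [eta, H]

    print(np.linalg.norm(H - np.diag(np.diag(H))))  # near 0: off-diagonals gone
    print(np.sort(np.diag(H)))                      # approximate eigenvalues
    ```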

  14. NIMROD Modeling of Sawtooth Modes Using Hot-Particle Closures

    NASA Astrophysics Data System (ADS)

    Kruger, Scott; Jenkins, T. G.; Held, E. D.; King, J. R.

    2015-11-01

    In DIII-D shot 96043, RF heating gives rise to an energetic ion population that alters the sawtooth stability boundary, replacing conventional sawtooth cycles by longer-period, larger-amplitude `giant sawtooth' oscillations. We explore the use of particle-in-cell closures within the NIMROD code to numerically represent the RF-induced hot-particle distribution, and investigate the role of this distribution in determining the altered mode onset threshold and subsequent nonlinear evolution. Equilibrium reconstructions from the experimental data are used to enable these detailed validation studies. Effects of other parameters on the sawtooth behavior, such as the plasma Lundquist number and hot-particle beta-fraction, are also considered. The fast energetic particles present many challenges for the PIC closure. We review new algorithm and performance improvements to address these challenges, and provide a preliminary assessment of the efficacy of the PIC closure versus a continuum model for energetic particle modeling. We also compare our results with those of, and discuss plans for a more complete validation campaign for this discharge. Supported by US Department of Energy via the SciDAC Center for Extended MHD Modeling (CEMM).

  15. 32 CFR 505.13 - Computer Matching Agreement Program.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 3 2010-07-01 2010-07-01 true Computer Matching Agreement Program. 505.13 Section 505.13 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS ARMY PRIVACY ACT PROGRAM § 505.13 Computer Matching Agreement Program. (a...

  16. Computers and Young Children. Storyboard Software: Flannel Boards in the Computer Age.

    ERIC Educational Resources Information Center

    Shade, Daniel D.

    1995-01-01

    Describes storyboard software as computer programs with which children can build a story using visuals. Notes the importance of such programs for preliterate or nonreading children. Describes a new storyboard program, "Wiggins in Storyland," and its features. Lists recommended storyboard software programs, with publishers and compatible…

  17. ROUTES: a computer program for preliminary route location.

    Treesearch

    S.E. Reutebuch

    1988-01-01

    An analytical description of the ROUTES computer program is presented. ROUTES is part of the integrated preliminary harvest- and transportation-planning software package, PLANS. The ROUTES computer program is useful where grade and sideslope limitations are important in determining routes for vehicular travel. With the program, planners can rapidly identify route...

  18. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  19. Middle School Teachers' Perceptions of Computer-Assisted Reading Intervention Programs

    ERIC Educational Resources Information Center

    Bippert, Kelli; Harmon, Janis

    2017-01-01

    Middle schools often turn to computer-assisted reading intervention programs to improve student reading. The questions guiding this study are (a) in what ways are computer-assisted reading intervention programs utilized, and (b) what are teachers' perceptions about these intervention programs? Nineteen secondary reading teachers were interviewed…

  20. Space shuttle environmental and thermal control life support system computer program

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A computer program for the design and operation of the space shuttle environmental and thermal control life support system is presented. The subjects discussed are: (1) basic optimization program, (2) off design performance, (3) radiator/evaporator expendable usage, (4) component weights, and (5) computer program operating procedures.

  1. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  2. Culvert analysis program for indirect measurement of discharge

    USGS Publications Warehouse

    Fulford, Janice M.; ,

    1993-01-01

    A program based on the U.S. Geological Survey (USGS) methods for indirectly computing peak discharges through culverts allows users to employ input data formats used by the water-surface profile program (WSPRO). The program can be used to compute discharge rating surfaces or curves that describe the behavior of flow through a particular culvert, or to compute discharges from measurements of upstream water-surface elevation. The program is based on the gradually varied flow equations and has been adapted slightly to provide solutions that minimize the need for the user to distinguish between different flow regimes. The program source is written in Fortran 77 and has been run on minicomputers and personal computers. The program does not use or require graphics capability, a color monitor, or a mouse.

  3. An Exploration of the Role of Visual Programming Tools in the Development of Young Children's Computational Thinking

    ERIC Educational Resources Information Center

    Rose, Simon P.; Habgood, M. P. Jacob; Jay, Tim

    2017-01-01

    Programming tools are being used in education to teach computer science to children as young as 5 years old. This research aims to explore young children's approaches to programming in two tools with contrasting programming interfaces, ScratchJr and Lightbot, and considers the impact of programming approaches on developing computational thinking.…

  4. XACC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alexander J.

    Hybrid programming models for beyond-CMOS technologies will prove critical for integrating new computing technologies alongside our existing infrastructure. Unfortunately the software infrastructure required to enable this is lacking or not available. XACC is a programming framework for extreme-scale, post-exascale accelerator architectures that integrates alongside existing conventional applications. It is a pluggable framework for programming languages developed for next-gen computing hardware architectures like quantum and neuromorphic computing. It lets computational scientists efficiently off-load classically intractable work to attached accelerators through user-friendly Kernel definitions. XACC makes post-exascale hybrid programming approachable for domain computational scientists.

  5. Evaluation of Rankine cycle air conditioning system hardware by computer simulation

    NASA Technical Reports Server (NTRS)

    Healey, H. M.; Clark, D.

    1978-01-01

    A computer program for simulating the performance of a variety of solar-powered Rankine cycle air conditioning system (RCACS) components has been developed. The computer program models actual equipment by developing performance maps from manufacturers' data and is capable of simulating off-design operation of the RCACS components. The program, designed to be a subroutine of the Marshall Space Flight Center (MSFC) Solar Energy System Analysis Computer Program 'SOLRAD', is a complete package suitable for use by an occasional computer user in developing performance maps of heating, ventilation and air conditioning components.

  6. 75 FR 53004 - Privacy Act of 1974, as Amended; Notice of Computer-Matching Program (Railroad Retirement Board...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-30

    ... report of this computer-matching program with the Committee on Homeland Security and Governmental Affairs... INFORMATION: A. General The Computer-Matching and Privacy Protection Act of 1988, (Pub. L. 100-503), amended... RAILROAD RETIREMENT BOARD Privacy Act of 1974, as Amended; Notice of Computer-Matching Program...

  7. Structural Optimization Methodology for Rotating Disks of Aircraft Engines

    NASA Technical Reports Server (NTRS)

    Armand, Sasan C.

    1995-01-01

    In support of the preliminary evaluation of various engine technologies, a methodology has been developed for structurally designing the rotating disks of an aircraft engine. The structural design methodology, along with a previously derived methodology for predicting low-cycle fatigue life, was implemented in a computer program. An interface computer program was also developed that gathers the required data from a flowpath analysis program (WATE) being used at NASA Lewis. The computer program developed for this study requires minimum interaction with the user, thus allowing engineers with varying backgrounds in aeropropulsion to successfully execute it. The stress analysis portion of the methodology and the computer program were verified by employing the finite element analysis method. The 10th-stage high-pressure-compressor disk of the Energy Efficient Engine Program (E3) engine was used to verify the stress analysis; the differences between the stresses and displacements obtained from the computer program developed for this study and from the finite element analysis were all below 3 percent for the problem solved. The computer program developed for this study was employed to structurally optimize the rotating disks of the E3 high-pressure compressor. The rotating disks designed by the computer program in this study were approximately 26 percent lighter than calculated from the E3 drawings. The methodology is presented herein.

  8. DORCA 2 computer program. Volume 3: Program listing

    NASA Technical Reports Server (NTRS)

    Carey, J. B.

    1972-01-01

    A program listing for the Dynamic Operational Requirements and Cost Analysis Program is presented. Detailed instructions for the computer programming involved in space mission planning and project requirements are developed.

  9. A vectorization of the Hess McDonnell Douglas potential flow program NUED for the STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Boney, L. R.; Smith, R. E., Jr.

    1979-01-01

    The computer program NUED for analyzing potential flow about arbitrary three dimensional lifting bodies using the panel method was modified to use vector operations and run on the STAR-100 computer. A high speed of computation and ability to approximate the body surface with a large number of panels are characteristics of NUEDV. The new program shows that vector operations can be readily implemented in programs of this type to increase the computational speed on the STAR-100 computer. The virtual memory architecture of the STAR-100 facilitates the use of large numbers of panels to approximate the body surface.

  10. CHIRAL--A Computer Aided Application of the Cahn-Ingold-Prelog Rules.

    ERIC Educational Resources Information Center

    Meyer, Edgar F., Jr.

    1978-01-01

    A computer program is described for identification of chiral centers in molecules. Essential input to the program includes both atomic and bonding information. The program does not require computer graphic input-output. (BB)

  11. Software Maintenance of the Subway Environment Simulation Computer Program

    DOT National Transportation Integrated Search

    1980-12-01

    This document summarizes the software maintenance activities performed to support the Subway Environment Simulation (SES) Computer Program. The SES computer program is a design-oriented analytic tool developed during a recent five-year research proje...

  12. Translator program converts computer printout into braille language

    NASA Technical Reports Server (NTRS)

    Powell, R. A.

    1967-01-01

    Computer program converts print image tape files into six dot Braille cells, enabling a blind computer programmer to monitor and evaluate data generated by his own programs. The Braille output is printed 8 lines per inch.
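
    The essential transformation is a table lookup from characters to six-dot cells. The Python fragment below is illustrative only, using a partial Grade 1 letter table and the Unicode Braille block (dots 1-6 map to bits 0x01-0x20); the NASA program's actual code tables and 8-lines-per-inch printer format are not reproduced.

    ```python
    # Sketch: map each character to a six-dot Braille cell.
    DOTS = {  # partial letter table: character -> raised dot numbers
        "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
        "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5), "i": (2, 4),
        " ": (),
    }

    def to_braille(text):
        cells = []
        for ch in text.lower():
            bits = 0
            for dot in DOTS.get(ch, ()):
                bits |= 1 << (dot - 1)        # dot n sets bit n-1
            cells.append(chr(0x2800 + bits))  # offset into Unicode Braille block
        return "".join(cells)

    print(to_braille("bad face"))
    ```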

  13. Improved programs for DNA and protein sequence analysis on the IBM personal computer and other standard computer systems.

    PubMed Central

    Mount, D W; Conrad, B

    1986-01-01

    We have previously described programs for a variety of types of sequence analysis (1-4). These programs have now been integrated into a single package. They are written in the standard C programming language and run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The programs are widely distributed and may be obtained from the authors as described below. PMID:3753780

  14. Computer program for design and performance analysis of navigation-aid power systems. Program documentation. Volume 1: Software requirements document

    NASA Technical Reports Server (NTRS)

    Goltz, G.; Kaiser, L. M.; Weiner, H.

    1977-01-01

    A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U.S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document establishes the software requirements for the DSPA computer program, discusses the processing that occurs within the program, and defines the necessary interfaces for operation.

  15. Orthorectification by Using Gpgpu Method

    NASA Astrophysics Data System (ADS)

    Sahin, H.; Kulur, S.

    2012-07-01

    Thanks to the nature of graphics processing, newly released products offer highly parallel processing units with high memory bandwidth and computational power exceeding a teraflop/s. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors, with far faster arithmetic and higher memory bandwidth than central processing units (CPUs). Data-parallel computation can be described succinctly as mapping data elements to parallel processing threads. The rapid growth of GPU programmability and capability has attracted researchers dealing with complex problems that demand heavy computation; this interest gave rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful yet cheap and affordable hardware, and so have become an alternative to conventional processors: chips that began as fixed-function graphics hardware have been transformed into modern, powerful, programmable processors. The chief difficulty is that GPUs use programming models unlike current programming methods, so efficient GPU programming requires re-coding an existing algorithm around the limitations and structure of the graphics hardware; multi-core processors of this kind cannot be programmed with traditional, event-procedure methods. GPUs are especially effective when the same computing steps are repeated over many data elements and high accuracy is needed, providing the computation more quickly and accurately; by comparison, a CPU, which performs one computation at a time under flow control, is slower. This study covers how the general-purpose parallel programming and computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA, and sample images of various sizes were evaluated against the results of the CPU program. The GPGPU method is especially useful for repeating the same computations on highly dense data, thus finding the solution quickly.
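
    The data-parallel structure described above is easy to exhibit: projective rectification applies the same homography to every pixel coordinate, exactly the map-one-thread-per-element pattern GPUs excel at. A NumPy sketch with an assumed homography, broadcasting standing in for CUDA threads:

    ```python
    # Sketch: apply one 3x3 homography to every pixel of a 512x512 grid at once.
    import numpy as np

    H = np.array([[1.0, 0.02, 5.0],    # assumed homography matrix
                  [0.01, 1.0, -3.0],
                  [1e-5, 2e-5, 1.0]])

    rows, cols = np.mgrid[0:512, 0:512]
    pix = np.stack([cols.astype(float), rows.astype(float),
                    np.ones((512, 512))])        # homogeneous coords, 3x512x512

    src = np.tensordot(H, pix, axes=1)           # every pixel mapped in parallel
    x_src, y_src = src[0] / src[2], src[1] / src[2]
    print(x_src.shape, float(x_src[0, 1]))       # source coordinates per pixel
    ```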

  16. Confronting the Issues of Programming in Information Systems Curricula: The Goal Is Success

    ERIC Educational Resources Information Center

    Babb, Jeffry; Longenecker, Herbert E., Jr.; Baugh, Jeanne; Feinstein, David

    2014-01-01

    Computer programming has been part of Information Systems (IS) curricula since the first model curriculum. It is with programming that computers are instructed how to implement our ideas into reality. Yet, over the last decade numbers of computing undergraduates have significantly declined in North American academic programs. In addition, high…

  17. Numerical nonlinear inelastic analysis of stiffened shells of revolution. Volume 4: Satellite-1P program for STARS-2P digital computer program

    NASA Technical Reports Server (NTRS)

    Svalbonas, V.; Ogilvie, P.

    1975-01-01

    A special data debugging package called SAT-1P created for the STARS-2P computer program is described. The program was written exclusively in FORTRAN 4 for the IBM 370-165 computer, and then converted to the UNIVAC 1108.

  18. A Computer-Aided Writing Program for Learning Disabled Adolescents.

    ERIC Educational Resources Information Center

    Fais, Laurie; Wanderman, Richard

    The paper describes the application of a computer-assisted writing program in a special high school for learning disabled and dyslexic students and reports on a study of the program's effectiveness. Particular advantages of the Macintosh Computer for such a program are identified including use of the mouse pointing tool, graphic icons to identify…

  19. 01010000 01001100 01000001 01011001: Play Elements in Computer Programming

    ERIC Educational Resources Information Center

    Breslin, Samantha

    2013-01-01

    This article explores the role of play in human interaction with computers in the context of computer programming. The author considers many facets of programming including the literary practice of coding, the abstract design of programs, and more mundane activities such as testing, debugging, and hacking. She discusses how these incorporate the…

  20. A Proposed Programming System for Knuth's Mix Computer.

    ERIC Educational Resources Information Center

    Akers, Max Neil

    A programing system using a hypothetical computer is proposed for use in teaching machine and assembly language programing courses. Major components such as monitor, assembler, interpreter, grader, and diagnostics are described. The interpreter is programed and documented for use on an IBM 360/67 computer. The interpreter can be used for teaching…

  1. Some Analogies between Computer Programming and the Composing Process.

    ERIC Educational Resources Information Center

    Skulicz, Matthew

    Since there are similarities between the process of writing computer programs and the process of writing successful expository prose, a student's knowledge of computer programing can contribute to the understanding of some principles of composition. The establishment of a clear objective is the first priority of both the writer and the programer,…

  2. Base Numeration Systems and Introduction to Computer Programming.

    ERIC Educational Resources Information Center

    Kim, K. Ed.; And Others

    This teaching guide is for the instructor of an introductory course in computer programming using FORTRAN language. Five FORTRAN programs are incorporated in this guide, which has been used as a FORTRAN IV SELF TEACHER. The base eight, base four, and base two concepts are integrated with FORTRAN computer programs, geoblock activities, and related…

  3. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 15 2011-07-01 2011-07-01 false Computer Program C Appendix C to Part... APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note: EPA will make copies of appendix C available from: Director, Stationary Source Compliance Division, EN...

  4. Personalized Computer-Assisted Mathematics Problem-Solving Program and Its Impact on Taiwanese Students

    ERIC Educational Resources Information Center

    Chen, Chiu-Jung; Liu, Pei-Lin

    2007-01-01

    This study evaluated the effects of a personalized computer-assisted mathematics problem-solving program on the performance and attitude of Taiwanese fourth grade students. The purpose of this study was to determine whether the personalized computer-assisted program improved student performance and attitude over the nonpersonalized program.…

  5. Design and Curriculum Considerations for a Computer Graphics Program in the Arts.

    ERIC Educational Resources Information Center

    Leeman, Ruedy W.

    This history and state-of-the-art review of computer graphics describes computer graphics programs and proposed programs at Sheridan College (Canada), the Rhode Island School of Design, the University of Oregon, Northern Illinois University, and Ohio State University. These programs are discussed in terms of their philosophy, curriculum, student…

  6. 37 CFR 201.40 - Exemption to prohibition against circumvention.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... security of the owner or operator of a computer, computer system, or computer network; and (ii) The... film and media studies students; (ii) Documentary filmmaking; (iii) Noncommercial videos. (2) Computer... lawfully obtained, with computer programs on the telephone handset. (3) Computer programs, in the form of...

  7. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
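
    The additive relation stated in the abstract is simple enough to restate as code. The Python sketch below only mirrors that sentence, with placeholder numbers, since the patent does not specify how each term is itself computed.

        def future_facility_conditions(maintenance_cost,
                                       modernization_factor,
                                       backlog_factor):
            """Future facility conditions as the sum of the three
            time-period-specific terms named in the abstract."""
            return maintenance_cost + modernization_factor + backlog_factor

        # Placeholder values for one time period
        print(future_facility_conditions(1.2e6, 3.5e5, 8.0e4))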

  8. Program listing for the REEDM (Rocket Exhaust Effluent Diffusion Model) computer program

    NASA Technical Reports Server (NTRS)

    Bjorklund, J. R.; Dumbauld, R. K.; Cheney, C. S.; Geary, H. V.

    1982-01-01

    The program listing for the REEDM Computer Program is provided. A mathematical description of the atmospheric dispersion models, cloud-rise models, and other formulas used in the REEDM model; vehicle and source parameters, other pertinent physical properties of the rocket exhaust cloud and meteorological layering techniques; user's instructions for the REEDM computer program; and worked example problems are contained in NASA CR-3646.

  9. Controlling Laboratory Processes From A Personal Computer

    NASA Technical Reports Server (NTRS)

    Will, H.; Mackin, M. A.

    1991-01-01

    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by the other two programs to associate user commands with device-driving routines written by the user. Also includes set of input data allowing user to define the commands to be executed by the computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.

  10. User's Guide for Computer Program that Routes Signal Traces

    NASA Technical Reports Server (NTRS)

    Hedgley, David R., Jr.

    2000-01-01

    This disk contains a FORTRAN computer program and a corresponding user's guide that facilitate the program's incorporation into your system and its use. The computer program implements an efficient algorithm that routes signal traces on layers of a printed circuit board with both through-pins and surface mounts. It is an implementation of the ideas presented in the theoretical paper "A Formal Algorithm for Routing Signal Traces on a Printed Circuit Board", NASA TP-3639, published in 1996. The computer program in the "connects" file can be compiled with a FORTRAN compiler and readily integrated into software unique to each particular environment where it might be used.

  11. Energy consumption program: A computer model simulating energy loads in buildings

    NASA Technical Reports Server (NTRS)

    Stoller, F. W.; Lansing, F. L.; Chai, V. W.; Higgins, S.

    1978-01-01

    The JPL energy consumption computer program, developed as a useful tool in the ongoing building-modification studies of the DSN energy conservation project, is described. The program simulates building heating and cooling loads and computes thermal and electric energy consumption and cost. Computational accuracy is not sacrificed: the results lie within a ±10 percent margin of readings from energy meters. The program is carefully structured to reduce both the user's time and the running cost, by requesting minimal information from the user and reducing many time-consuming internal computational loops. Many unique features were added to handle two-level electronics control rooms not found in any other program.

  12. 76 FR 12984 - Privacy Act of 1974; Notice of a Computer Matching Program Between HUD and the United States...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... of a Computer Matching Program Between HUD and the United States Department of Veterans Affairs (VA) AGENCY: Office of the Chief Information Officer, HUD. ACTION: Notice of a computer matching program... the Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503), and the Office of...

  13. A Successful Bootstrap Program for Infusion of Computer Competencies into a School of Education Curriculum.

    ERIC Educational Resources Information Center

    Van Dusseldorp, Ralph

    1984-01-01

    Describes the successful, low-cost program for infusion of computer competencies into the curriculum of the School of Education at the University of Alaska, Anchorage, where all students are required to become computer competent prior to graduation. Computer competency goals for students in school's certification programs are outlined. (MBR)

  14. 78 FR 71591 - Privacy Act of 1974; Computer Matching Program between the U.S. Department of Education (ED) and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-29

    ... DEPARTMENT OF EDUCATION Privacy Act of 1974; Computer Matching Program between the U.S. Department.... ACTION: Notice. SUMMARY: Notice is hereby given of the renewal of the computer matching program between... (VA) (source agency). After the ED and VA Data Integrity Boards approve a new computer matching...

  15. 78 FR 29786 - Computer Matching and Privacy Protection Act of 1988; Report of Matching Program: RRB and State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-21

    ... RAILROAD RETIREMENT BOARD Computer Matching and Privacy Protection Act of 1988; Report of Matching...: Notice of a renewal of an existing computer matching program due to expire on May 24, 2013. SUMMARY: As... of its intent to renew an ongoing computer matching program. In this match, we provide certain...

  16. 78 FR 70971 - Privacy Act of 1974, as Amended; Notice of Computer Matching Program (Railroad Retirement Board...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-27

    ... will file a report of this computer-matching program with the Committee on Homeland Security and... . SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988, (Pub. L. 100-503... RAILROAD RETIREMENT BOARD Privacy Act of 1974, as Amended; Notice of Computer Matching Program...

  17. 76 FR 50460 - Privacy Act of 1974; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-15

    ...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, Department of Defense (DoD). ACTION: Notice of a Computer Matching Program. SUMMARY: Subsection (e)(12) of the Privacy Act of 1974, as amended, (5 U.S.C. 552a) requires agencies to publish advance notice of any proposed or revised computer...

  18. 77 FR 32085 - Privacy Act of 1974, as Amended; Renewal of Computer Matching Program Between the U.S. Department...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-31

    ... Matching and Privacy Protection Act of 1988 (Pub. L. 100-503) and the Computer Matching and Privacy... DEPARTMENT OF EDUCATION Privacy Act of 1974, as Amended; Renewal of Computer Matching Program.... ACTION: Notice. SUMMARY: This document provides notice of the renewal of the computer matching program...

  19. A Study of the Programming Languages Used in Information Systems and in Computer Science Curricula

    ERIC Educational Resources Information Center

    Russell, Jack; Russell, Barbara; Pollacia, Lissa F.; Tastle, William J.

    2010-01-01

    This paper researches the computer languages taught in the first, second and third programming courses in Computer Information Systems (CIS), Management Information Systems (MIS or IS) curricula as well as in Computer Science (CS) and Information Technology (IT) curricula. Instructors teaching the first course in programming within a four year…

  20. 76 FR 77811 - Privacy Act of 1974; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-14

    ...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, Department of Defense (DoD). ACTION: Notice of a Computer Matching Program. SUMMARY: Subsection (e)(12) of the Privacy Act of 1974, as amended, (5 U.S.C. 552a) requires agencies to publish advance notice of any proposed or revised computer...

  1. Computer program for the IBM personal computer which searches for approximate matches to short oligonucleotide sequences in long target DNA sequences.

    PubMed Central

    Myers, E W; Mount, D W

    1986-01-01

    We describe a program which may be used to find approximate matches to a short predefined DNA sequence in a larger target DNA sequence. The program predicts the usefulness of specific DNA probes and sequencing primers and finds nearly identical sequences that might represent the same regulatory signal. The program is written in the C programming language and will run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The program has been integrated into an existing software package for the IBM personal computer (see article by Mount and Conrad, this volume). Some examples of its use are given. PMID:3753785
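
    The abstract does not give the paper's algorithm, but approximate matching of a short probe against a long target is classically done with semi-global dynamic programming. The Python sketch below finds end positions where a pattern matches within a given number of edits; it illustrates the general technique, not the published program.

        def approx_matches(pattern, text, max_edits):
            """Report 1-based end positions in text where pattern matches
            with at most max_edits insertions, deletions, or substitutions
            (standard semi-global dynamic programming)."""
            m = len(pattern)
            prev = list(range(m + 1))       # DP column before any text is read
            hits = []
            for j in range(1, len(text) + 1):
                curr = [0]                  # a match may start at any position
                for i in range(1, m + 1):
                    cost = 0 if pattern[i - 1] == text[j - 1] else 1
                    curr.append(min(prev[i] + 1,          # deletion in pattern
                                    curr[i - 1] + 1,      # insertion in pattern
                                    prev[i - 1] + cost))  # match or mismatch
                if curr[m] <= max_edits:
                    hits.append(j)
                prev = curr
            return hits

        # GATTACA matches GATTTACA with one insertion, ending at position 10
        print(approx_matches("GATTACA", "CCGATTTACAGG", max_edits=1))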

  2. 76 FR 41234 - Advanced Scientific Computing Advisory Committee Charter Renewal

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ... Secretariat, General Services Administration, notice is hereby given that the Advanced Scientific Computing... advice and recommendations concerning the Advanced Scientific Computing program in response only to... Advanced Scientific Computing Research program and recommendations based thereon; --Advice on the computing...

  3. Implementation of a computer database testing and analysis program.

    PubMed

    Rouse, Deborah P

    2007-01-01

    The author is the coordinator of a computer software database testing and analysis program implemented in an associate degree nursing program. Computer software database programs help support the test development and analysis process, and critical thinking is measurable and can be promoted with their use. The reader of this article will learn what is involved in procuring and implementing a computer database testing and analysis program in an academic nursing program. The use of the computerized database for testing and analysis is approached as a method to promote and evaluate nursing students' critical thinking skills and to prepare them for the National Council Licensure Examination.

  4. Computer simulation of space station computer steered high gain antenna

    NASA Technical Reports Server (NTRS)

    Beach, S. W.

    1973-01-01

    The mathematical modeling and programming of a complete simulation program for a space station computer-steered high gain antenna are described. The program provides for reading input data cards, numerically integrating up to 50 first order differential equations, and monitoring up to 48 variables on printed output and on plots. The program system consists of a high gain antenna, an antenna gimbal control system, an on board computer, and the environment in which all are to operate.
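
    The simulation's core numerical task, integrating up to 50 first-order differential equations, can be illustrated with a generic integrator. The Python sketch below implements one classical fourth-order Runge-Kutta step; the abstract does not state which method the program actually uses, so this is a stand-in for the general idea.

        def rk4_step(f, t, y, h):
            """One classical fourth-order Runge-Kutta step for the
            system y' = f(t, y), where y is a list of state variables."""
            k1 = f(t, y)
            k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
            k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
            k4 = f(t + h,   [yi + h*ki  for yi, ki in zip(y, k3)])
            return [yi + h/6*(a + 2*b + 2*c + d)
                    for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

        # Example: harmonic oscillator y'' = -y as two first-order equations
        f = lambda t, y: [y[1], -y[0]]
        y, t, h = [1.0, 0.0], 0.0, 0.01
        for _ in range(100):
            y = rk4_step(f, t, y, h); t += h
        print(y)   # approximately [cos(1), -sin(1)]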

  5. Shuttle user analysis (study 2.2): Volume 3. Business Risk And Value of Operations in space (BRAVO). Part 4: Computer programs and data look-up

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Computer program listings, as well as the graphical and tabulated data needed by the analyst to perform a BRAVO analysis, were examined. A graphical aid for determining the earth coverage of satellites in synchronous equatorial orbits is described. A listing of the satellite synthesis computer program, a sample printout for the DSCS-II satellite program, and a listing of the symbols used in the program are included. The APL-language listing for the payload cost-estimating computer program is given; this language is compatible with many of the time-sharing remote-terminal computers used in the United States. Data on the Intelsat communications network were studied. Costs for telecommunications system leasing, line-of-sight microwave relay communications systems, submarine telephone cables, and terrestrial power generation systems are also described.

  6. 76 FR 49753 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-11

    ... Defense. DHA 14 System name: Computer/Electronics Accommodations Program for People with Disabilities... with ``Computer/Electronic Accommodations Program.'' System location: Delete entry and replace with ``Computer/Electronic Accommodations Program, Skyline 5, Suite 302, 5111 Leesburg Pike, Falls Church, VA...

  7. Computers and the Multiplicity of Polynomial Roots.

    ERIC Educational Resources Information Center

    Wavrik, John J.

    1982-01-01

    Described are stages in the development of a computer program to solve a particular algebra problem and the nature of algebraic computation is presented. A program in BASIC is provided to give ideas to others for developing their own programs. (MP)

  8. Computer Program To Transliterate Into Arabic

    NASA Technical Reports Server (NTRS)

    Stephan, E.

    1986-01-01

    Conceptual program for TRS-80, Model 12 (or equivalent) computer transliterates from English letters of computer keyboard to Arabic characters in output of associated printer. Program automatically changes character sequence from left-to-right of English to right-to-left of Arabic.
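
    The program's essence is a keyboard-to-character mapping plus right-to-left output. A minimal Python sketch of that idea follows; the three-letter mapping is a placeholder, not the program's actual table.

        # Toy mapping -- the real table covers full Arabic orthography;
        # these few letters are placeholders for illustration.
        LATIN_TO_ARABIC = {"b": "\u0628", "t": "\u062A", "n": "\u0646"}

        def transliterate(text):
            """Map Latin keystrokes to Arabic characters; display order
            is the renderer's concern, since Arabic runs right to left
            while input arrives left to right."""
            return "".join(LATIN_TO_ARABIC.get(ch, ch) for ch in text)

        print(transliterate("bnt"))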

  9. Evolving binary classifiers through parallel computation of multiple fitness cases.

    PubMed

    Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni

    2005-06-01

    This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. Such an approach achieves high computation efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized explicitly using parallel computation in the case of cellular programming or implicitly taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.
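
    The "intrinsic parallelism of bitwise operators" mentioned in the abstract can be made concrete: if each bit of an integer holds one fitness case's 0/1 output, a single XOR plus a popcount scores all cases at once. The Python sketch below uses an invented eight-case encoding, not the paper's actual representation.

        def popcount(x):
            """Count the set bits of a nonnegative integer."""
            return bin(x).count("1")

        def fitness(candidate_mask, target_mask, n_cases):
            """Score a binary classifier on n_cases fitness cases at once:
            each bit is one case's output, so one XOR and one popcount
            evaluate every case in parallel on a sequential machine."""
            errors = popcount((candidate_mask ^ target_mask)
                              & ((1 << n_cases) - 1))
            return n_cases - errors

        # Eight fitness cases packed into one integer each
        target    = 0b10110100
        candidate = 0b10100101
        print(fitness(candidate, target, 8))   # number of correct cases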

  10. SSME structural computer program development. Volume 2: BOPACE users manual

    NASA Technical Reports Server (NTRS)

    Vos, R. G.

    1973-01-01

    A computer program for use with a thermal-elastic-plastic-creep structural analyzer is presented. The following functions of the computer program are discussed: (1) analysis of very high temperature and large plastic-creep effects, (2) treatment of cyclic thermal and mechanical loads, (3) development of constitutive theory which closely follows actual behavior under variable temperature conditions, (4) stable numerical solution approach which avoids cumulative errors, and (5) capability of handling up to 1000 degrees of freedom. The computer program is written in FORTRAN IV and has been run on the IBM 360 and UNIVAC 1108 computer systems.

  11. An automated procedure for developing hybrid computer simulations of turbofan engines

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Krosel, S. M.

    1980-01-01

    A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all of the calculations and data manipulations needed to transform user-supplied engine design information into a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. A test case is described, and comparisons between the hybrid simulation and specified engine performance data are presented.

  12. Evolution of a standard microprocessor-based space computer

    NASA Technical Reports Server (NTRS)

    Fernandez, M.

    1980-01-01

    An existing, in-inventory computer hardware/software package (B-1 RFS/ECM) was repackaged and applied to multiple missile/space programs. Concurrently with the application efforts, low-risk modifications were made to the computer from program to program to take advantage of newer, advanced technology and to meet increasingly demanding requirements (computational and memory capability, longer life, and fault-tolerant autonomy). It is concluded that microprocessors hold promise in a number of critical areas for future space computer applications; however, the benefits of the DoD VHSIC Program are needed, and the old proliferation problem must be revisited.

  13. Parallel and serial computing tools for testing single-locus and epistatic SNP effects of quantitative traits in genome-wide association studies

    PubMed Central

    Ma, Li; Runesha, H Birali; Dvorkin, Daniel; Garbe, John R; Da, Yang

    2008-01-01

    Background Genome-wide association studies (GWAS) using single nucleotide polymorphism (SNP) markers provide opportunities to detect epistatic SNPs associated with quantitative traits and to detect the exact mode of an epistasis effect. Computational difficulty is the main bottleneck for epistasis testing in large scale GWAS. Results The EPISNPmpi and EPISNP computer programs were developed for testing single-locus and epistatic SNP effects on quantitative traits in GWAS, including tests of three single-locus effects for each SNP (SNP genotypic effect, additive and dominance effects) and five epistasis effects for each pair of SNPs (two-locus interaction, additive × additive, additive × dominance, dominance × additive, and dominance × dominance) based on the extended Kempthorne model. EPISNPmpi is the parallel computing program for epistasis testing in large scale GWAS and achieved excellent scalability for large scale analysis and portability for various parallel computing platforms. EPISNP is the serial computing program based on the EPISNPmpi code for epistasis testing in small scale GWAS using commonly available operating systems and computer hardware. Three serial computing utility programs were developed for graphical viewing of test results and epistasis networks, and for estimating CPU time and disk space requirements. Conclusion The EPISNPmpi parallel computing program provides an effective computing tool for epistasis testing in large scale GWAS, and the epiSNP serial computing programs are convenient tools for epistasis analysis in small scale GWAS using commonly available computer hardware. PMID:18644146
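
    To make the tested effects concrete, the following Python sketch builds the usual additive and dominance codings for a pair of SNPs and estimates an additive-by-additive interaction by least squares. It is a schematic of this style of model, not the extended Kempthorne tests the EPISNP programs implement, and all data are simulated.

        import numpy as np

        def epistasis_design(g1, g2):
            """Additive (a) and dominance (d) codings for two SNPs plus
            their four pairwise interaction columns (a x a, a x d,
            d x a, d x d). Genotypes are coded 0/1/2 allele copies."""
            a1, a2 = g1 - 1.0, g2 - 1.0                   # additive: -1, 0, 1
            d1, d2 = (g1 == 1) * 1.0, (g2 == 1) * 1.0     # dominance: 0, 1, 0
            return np.column_stack([np.ones_like(a1), a1, d1, a2, d2,
                                    a1*a2, a1*d2, d1*a2, d1*d2])

        rng = np.random.default_rng(0)
        g1 = rng.integers(0, 3, 200)
        g2 = rng.integers(0, 3, 200)
        y = 0.5*(g1 - 1)*(g2 - 1) + rng.normal(size=200)  # true a x a effect
        X = epistasis_design(g1, g2)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(beta[5])   # estimate of the additive-by-additive term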

  14. Experiences with Efficient Methodologies for Teaching Computer Programming to Geoscientists

    ERIC Educational Resources Information Center

    Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.

    2016-01-01

    Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science, it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students…

  15. Computer programs for calculating two-dimensional potential flow through deflected nozzles

    NASA Technical Reports Server (NTRS)

    Hawk, J. D.; Stockman, N. O.

    1979-01-01

    Computer programs to calculate the incompressible potential flow, corrected for compressibility, in two-dimensional nozzles at arbitrary operating conditions are presented. A statement of the problem to be solved, a description of each of the computer programs, and sufficient documentation, including a test case, to enable a user to run the program are included.

  16. Exploring Poetry through Interactive Computer Programs.

    ERIC Educational Resources Information Center

    Nimchinsky, Howard; Camp, Jocelyn

    The goal of a project was to design, test, and evaluate several computer programs that allow students in introductory literature and poetry courses to explore a poem in detail and, through a dialogue with the program, to develop their own interpretation of it. Computer programs were completed on poems by Robert Frost and W.H. Auden. Both programs…

  17. Pre-Service Teachers' Uses of and Barriers from Adopting Computer-Assisted Language Learning (CALL) Programs

    ERIC Educational Resources Information Center

    Samani, Ebrahim; Baki, Roselan; Razali, Abu Bakar

    2014-01-01

    Success in implementation of computer-assisted language learning (CALL) programs depends on the teachers' understanding of the roles of CALL programs in education. Consequently, it is also important to understand the barriers teachers face in the use of computer-assisted language learning (CALL) programs. The current study was conducted on 14…

  18. A Two-Tier Test-Based Approach to Improving Students' Computer-Programming Skills in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Yang, Tzu-Chi; Hwang, Gwo-Jen; Yang, Stephen J. H.; Hwang, Gwo-Haur

    2015-01-01

    Computer programming is an important skill for engineering and computer science students. However, teaching and learning programming concepts and skills has been recognized as a great challenge to both teachers and students. Therefore, the development of effective learning strategies and environments for programming courses has become an important…

  19. Influence matrix program for aerodynamic lifting surface theory. [in subsonic flows

    NASA Technical Reports Server (NTRS)

    Medan, R. T.; Ray, K. S.

    1973-01-01

    A user's manual is described for a USA FORTRAN 4 computer program which computes an aerodynamic influence matrix and is one of several computer programs used to analyze lifting, thin wings in steady, subsonic flow according to a kernel function method lifting surface theory. The most significant features of the program are that it can treat unsymmetrical wings, control points can be placed on the leading and/or trailing edges, and a stable, efficient algorithm is used to compute the influence matrix.

  20. Design of microstrip components by computer

    NASA Technical Reports Server (NTRS)

    Cisco, T. C.

    1972-01-01

    A number of computer programs are presented for use in the synthesis of microwave components in microstrip geometries. The programs compute the electrical and dimensional parameters required to synthesize couplers, filters, circulators, transformers, power splitters, diode switches, multipliers, diode attenuators and phase shifters. Additional programs are included to analyze and optimize cascaded transmission lines and lumped element networks, to analyze and synthesize Chebyshev and Butterworth filter prototypes, and to compute mixer intermodulation products. The programs are written in FORTRAN and the emphasis of the study is placed on the use of these programs and not on the theoretical aspects of the structures.

  1. Operational procedure for computer program for design point characteristics of a gas generator or a turbojet lift engine for V/STOL applications

    NASA Technical Reports Server (NTRS)

    Krebs, R. P.

    1972-01-01

    The computer program described calculates the design-point characteristics of a gas generator or a turbojet lift engine for V/STOL applications. The program computes the dimensions and mass, as well as the thermodynamic performance of the model engine and its components. The program was written in FORTRAN 4 language. Provision has been made so that the program accepts input values in either SI Units or U.S. Customary Units. Each engine design-point calculation requires less than 0.5 second of 7094 computer time.

  2. a Recursive Approach to Compute Normal Forms

    NASA Astrophysics Data System (ADS)

    HSU, L.; MIN, L. J.; FAVRETTO, L.

    2001-06-01

    Normal forms are instrumental in the analysis of dynamical systems described by ordinary differential equations, particularly when singularities close to a bifurcation are to be characterized. However, the computation of a normal form up to an arbitrary order is numerically hard. This paper focuses on the computer programming of some recursive formulas developed earlier to compute higher order normal forms. A computer program to reduce the system to its normal form on a center manifold is developed using the Maple symbolic language. However, it should be stressed that the program relies essentially on recursive numerical computations, while symbolic calculations are used only for minor tasks. Some strategies are proposed to save computation time. Examples are presented to illustrate the application of the program to obtain high order normalization or to handle systems with large dimension.

  3. When technology became language: the origins of the linguistic conception of computer programming, 1950-1960.

    PubMed

    Nofre, David; Priestley, Mark; Alberts, Gerard

    2014-01-01

    Language is one of the central metaphors around which the discipline of computer science has been built. The language metaphor entered modern computing as part of a cybernetic discourse, but during the second half of the 1950s acquired a more abstract meaning, closely related to the formal languages of logic and linguistics. The article argues that this transformation was related to the appearance of the commercial computer in the mid-1950s. Managers of computing installations and specialists on computer programming in academic computer centers, confronted with an increasing variety of machines, called for the creation of "common" or "universal languages" to enable the migration of computer code from machine to machine. Finally, the article shows how the idea of a universal language was a decisive step in the emergence of programming languages, in the recognition of computer programming as a proper field of knowledge, and eventually in the way we think of the computer.

  4. Computer Integrated Manufacturing. Florida Vocational Program Guide.

    ERIC Educational Resources Information Center

    University of South Florida, Tampa. Dept. of Adult and Vocational Education.

    This packet contains a program guide and Career Merit Achievement Plan (Career MAP) for the implementation of a computer-integrated manufacturing program in Florida secondary and postsecondary schools. The program guide describes the program content and structure, provides a program description, lists job titles under the program, and includes a…

  5. Computer Engineering Technology. Florida Vocational Program Guide.

    ERIC Educational Resources Information Center

    University of South Florida, Tampa. Dept. of Adult and Vocational Education.

    This packet contains a program guide and Career Merit Achievement Plan (Career MAP) for the implementation of a computer engineering technology program in Florida secondary and postsecondary schools. The program guide describes the program content and structure, provides a program description, lists job titles under the program, and includes a…

  6. Computer-aided programming for message-passing system; Problems and a solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, M.Y.; Gajski, D.D.

    1989-12-01

    As the number of processors and the complexity of problems to be solved increase, programming multiprocessing systems becomes more difficult and error-prone. Program development tools are necessary since programmers are not able to develop complex parallel programs efficiently. Parallel models of computation, parallelization problems, and tools for computer-aided programming (CAP) are discussed. As an example, a CAP tool that performs scheduling and inserts communication primitives automatically is described. It also generates the performance estimates and other program quality measures to help programmers in improving their algorithms and programs.

  7. Thermochemical cycle analysis using linked CECS72 and HYDRGN computer programs

    NASA Technical Reports Server (NTRS)

    Donovan, L. F.

    1977-01-01

    A combined thermochemical cycle analysis computer program was designed. Input to the combined program is the same as input to the thermochemical cycle analysis program except that the extent of the reactions need not be specified. The combined program is designed to be run interactively from a computer time-sharing terminal. This mode of operation allows correction or modification of the cycle to take place during cycle analysis. A group of 13 thermochemical cycles was used to test the combined program.

  8. Optical Design Using Small Dedicated Computers

    NASA Astrophysics Data System (ADS)

    Sinclair, Douglas C.

    1980-09-01

    Since the time of the 1975 International Lens Design Conference, we have developed a series of optical design programs for Hewlett-Packard desktop computers. The latest programs in the series, OSLO-25G and OSLO-45G, have most of the capabilities of general-purpose optical design programs, including optimization based on exact ray-trace data. The computational techniques used in the programs are similar to ones used in other programs, but the creative environment experienced by a designer working directly with these small dedicated systems is typically much different from that obtained with shared-computer systems. Some of the differences are due to the psychological factors associated with using a system having zero running cost, while others are due to the design of the program, which emphasizes graphical output and ease of use, as opposed to computational speed.

  9. A computer program for calculation of approximate embryo/fetus radiation dose in nuclear medicine applications.

    PubMed

    Bayram, Tuncay; Sönmez, Bircan

    2012-04-01

    In this study, we aimed to develop a computer program that calculates the approximate radiation dose received by the embryo/fetus in nuclear medicine applications. Radiation dose values per unit administered activity (MBq⁻¹) received by the embryo/fetus were gathered from the literature for various stages of pregnancy. These values were embedded in the computer code, which was written in the Fortran 90 programming language. The resulting program, called nmfdose, covers almost all radiopharmaceuticals used in nuclear medicine. The approximate radiation dose received by the embryo/fetus can be calculated easily in a few steps using this program. Although there are some constraints on using the program in some special cases, nmfdose is useful and provides a practical solution for calculating the approximate dose to the embryo/fetus in nuclear medicine applications.
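
    The scheme the abstract describes, multiplying administered activity by a stage-specific per-MBq coefficient, reduces to a table lookup. The Python sketch below shows the shape of such a calculation; the radiopharmaceutical names and coefficients are placeholders, not values from the program or the literature.

        # Placeholder coefficients (mGy per MBq administered); real values
        # must come from the literature tables the abstract refers to.
        DOSE_PER_MBQ = {
            ("Tc-99m MDP", "early"):    4.7e-3,
            ("Tc-99m MDP", "3 months"): 2.0e-3,
            ("F-18 FDG",   "early"):    2.2e-2,
        }

        def fetal_dose(radiopharmaceutical, stage, administered_mbq):
            """Approximate embryo/fetus dose as administered activity
            times a stored per-MBq coefficient (table lookup)."""
            return DOSE_PER_MBQ[(radiopharmaceutical, stage)] * administered_mbq

        print(fetal_dose("F-18 FDG", "early", 370.0), "mGy")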

  10. Gender and stereotypes in motivation to study computer programming for careers in multimedia

    NASA Astrophysics Data System (ADS)

    Doubé, Wendy; Lang, Catherine

    2012-03-01

    A multimedia university programme with relatively equal numbers of male and female students in elective programming subjects provided a rare opportunity to investigate female motivation to study computer programming and pursue it as a career. The Motivated Strategies for Learning Questionnaire (MSLQ) was used to survey 85 participants. In common with research into the deterrence of females from STEM domains, females displayed significantly lower self-efficacy and expectancy for success. In contrast to that research, both genders placed similarly high value on computer programming and shared high extrinsic and intrinsic goal orientation. The authors propose that the stereotype associated with a creative multimedia career could attract female participation in computer programming, whereas the stereotype associated with computer science could be a deterrent.

  11. A review of small canned computer programs for survey research and demographic analysis.

    PubMed

    Sinquefield, J C

    1976-12-01

    A variety of small canned computer programs for survey research and demographic analysis appropriate for use in developing countries are reviewed in this article. The programs discussed are SPSS (Statistical Package for the Social Sciences); CENTS, CO-CENTS, CENTS-AID, CENTS-AIE II; MINI-TAB EDIT, FREQUENCIES, TABLES, REGRESSION, CLIENT RECORD, DATES, MULT, LIFE, and PREGNANCY HISTORY; FIVFIV and SINSIN; DCL (Demographic Computer Library); MINI-TAB Population Projection, Functional Population Projection, and Family Planning Target Projection. A description and evaluation for each program of uses, instruction manuals, computer requirements, and procedures for obtaining manuals and programs are provided. Such information is intended to facilitate and encourage the use of the computer by data processors in developing countries.

  12. A DNA sequence analysis package for the IBM personal computer.

    PubMed Central

    Lagrimini, L M; Brentano, S T; Donelson, J E

    1984-01-01

    We present here a collection of DNA sequence analysis programs, called "PC Sequence" (PCS), which are designed to run on the IBM Personal Computer (PC). These programs are written in IBM PC compiled BASIC and take full advantage of the IBM PC's speed, error handling, and graphics capabilities. For a modest initial expense in hardware, any laboratory can use these programs to quickly perform computer analysis on DNA sequences. They are written with the novice user in mind and require very little training or previous experience with computers. Also provided are a text-editing program for creating and modifying DNA sequence files and a communications program which enables the PC to communicate with and collect information from mainframe computers and DNA sequence databases. PMID:6546433

  13. Catalog of Computer Programs Used in Undergraduate Geological Education.

    ERIC Educational Resources Information Center

    Burger, H. Robert

    1983-01-01

    Provides list of mineralogy, petrology, and geochemistry computer programs. Each entry includes a brief description, program name and language, availability of program listing, and source and/or reference. (JN)

  14. Exploring Pair Programming Benefits for MIS Majors

    ERIC Educational Resources Information Center

    Dongo, Tendai; Reed, April H.; O'Hara, Margaret

    2016-01-01

    Pair programming is a collaborative programming practice that places participants in dyads, working in tandem at one computer to complete programming assignments. Pair programming studies with Computer Science (CS) and Software Engineering (SE) majors have identified benefits such as technical productivity, program/design quality, academic…

  15. SURE reliability analysis: Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; White, Allan L.

    1988-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
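
    The quantity SURE bounds, the probability of reaching a death (failure) state, can be illustrated with a toy discrete-time Markov model evaluated by direct iteration. SURE itself computes analytic upper and lower bounds for semi-Markov models; the Python sketch below, with invented transition probabilities, is only meant to show what a death-state probability is.

        import numpy as np

        # Toy 3-state reliability model: 0 = fault-free, 1 = one fault,
        # 2 = system failure (absorbing). Per-step transition
        # probabilities are invented for illustration.
        P = np.array([[0.990, 0.009, 0.001],
                      [0.000, 0.950, 0.050],
                      [0.000, 0.000, 1.000]])

        state = np.array([1.0, 0.0, 0.0])   # start fault-free
        for _ in range(1000):                # 1000 steps of the mission
            state = state @ P
        print("P(failure) =", state[2])      # death-state probability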

  16. Survey: Computer Usage in Design Courses.

    ERIC Educational Resources Information Center

    Henley, Ernest J.

    1983-01-01

    Presents results of a survey of chemical engineering departments regarding computer usage in senior design courses. Results are categorized according to: computer usage (use of process simulators, student-written programs, faculty-written or "canned" programs; costs (hard and soft money); and available software. Programs offered are…

  17. Computer-Based Educational Software System. Final Report.

    ERIC Educational Resources Information Center

    Brandt, Richard C.; Davis, Bradley N.

    CBESS (Computer-Based Educational Software System) is a set of 22 programs addressing authoring, instructional delivery, and instructional management. The programs are divided into five groups: (1) Computer-Based Memorization System (CBMS), which helps students acquire and maintain declarative (factual) knowledge (11 programs); (2) Language Skills…

  18. Frequency Domain Computer Programs for Prediction and Analysis of Rail Vehicle Dynamics : Volume 1. Technical Report

    DOT National Transportation Integrated Search

    1975-12-01

    Frequency domain computer programs developed or acquired by TSC for the analysis of rail vehicle dynamics are described in two volumes. Volume I defines the general analytical capabilities required for computer programs applicable to single rail vehi...

  19. 78 FR 15734 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... DEPARTMENT OF HOMELAND SECURITY Office of the Secretary [Docket No. DHS-2013-0010] Privacy Act of 1974; Computer Matching Program AGENCY: Department of Homeland Security/U.S. Citizenship and... computer matching program between the Department of Homeland Security/U.S. Citizenship and Immigration...

  20. 78 FR 15733 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... DEPARTMENT OF HOMELAND SECURITY Office of the Secretary [Docket No. DHS-2013-0008] Privacy Act of 1974; Computer Matching Program AGENCY: Department of Homeland Security/U.S. Citizenship and... computer matching program between the Department of Homeland Security/U.S. Citizenship and Immigration...

  1. 78 FR 15731 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... DEPARTMENT OF HOMELAND SECURITY Office of the Secretary [Docket No. DHS-2013-0011] Privacy Act of 1974; Computer Matching Program AGENCY: Department of Homeland Security/U.S. Citizenship and Immigration Services. ACTION: Notice. Overview Information: Privacy Act of 1974; Computer Matching Program...

  2. 78 FR 15732 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... DEPARTMENT OF HOMELAND SECURITY Office of the Secretary [Docket No. DHS-2013-0007] Privacy Act of 1974; Computer Matching Program AGENCY: Department of Homeland Security/U.S. Citizenship and Immigration Services. ACTION: Notice. Overview Information: Privacy Act of 1974; Computer Matching Program...

  3. 78 FR 32711 - Privacy Act of 1974: Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-31

    ... DEPARTMENT OF VETERANS AFFAIRS Privacy Act of 1974: Computer Matching Program AGENCY: Department of Veterans Affairs. ACTION: Notice. SUMMARY: The Department of Veterans Affairs (VA) provides notice that it intends to conduct a recurring computer-matching program matching Internal Revenue Service (IRS...

  4. Computer Integrated Manufacturing Programs in Higher Education.

    ERIC Educational Resources Information Center

    International Business Machines Corp., Milford, CT. Academic Information Systems.

    This publication focuses on computer integrated manufacturing (CIM) programs at several higher education institutions which teach the use of computing in manufacturing. The document describes programs at the following institutions: University of Alabama (where researchers are investigating CIM techniques with a key focus on transferring their…

  5. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, Michel; Archer, Bill; Matzen, M. Keith

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  6. The Effect of In-Service Training of Computer Science Teachers on Scratch Programming Language Skills Using an Electronic Learning Platform on Programming Skills and the Attitudes towards Teaching Programming

    ERIC Educational Resources Information Center

    Alkaria, Ahmed; Alhassan, Riyadh

    2017-01-01

    This study was conducted to examine the effect of in-service training of computer science teachers in Scratch language using an electronic learning platform on acquiring programming skills and attitudes towards teaching programming. The sample of this study consisted of 40 middle school computer science teachers. They were assigned into two…

  7. Computer programs to assist in high resolution thermal denaturation and circular dichroism studies on nucleic acids

    PubMed Central

    Goodman, Thomas C.; Hardies, Stephen C.; Cortez, Carlos; Hillen, Wolfgang

    1981-01-01

    Computer programs are described that direct the collection, processing, and graphical display of numerical data obtained from high resolution thermal denaturation (1-3) and circular dichroism (4) studies. Besides these specific applications, the programs may also be useful, either directly or as programming models, in other types of spectrophotometric studies employing computers, programming languages, or instruments similar to those described here (see Materials and Methods). PMID:7335498

  8. Method of fan sound mode structure determination computer program user's manual: Microphone location program

    NASA Technical Reports Server (NTRS)

    Pickett, G. F.; Wells, R. A.; Love, R. A.

    1977-01-01

    A computer user's manual describing the operation and essential features of the Microphone Location Program is presented. The Microphone Location Program determines microphone locations that ensure accurate and stable results from the equation system used to calculate modal structures. As part of the computational procedure, a first-order measure of the stability of the equation system is given by a matrix condition number.

  9. Using the TouchMath Program to Teach Mathematical Computation to At-Risk Students and Students with Disabilities

    ERIC Educational Resources Information Center

    Ellingsen, Ryleigh; Clinton, Elias

    2017-01-01

    This manuscript reviews the empirical literature of the TouchMath© instructional program. The TouchMath© program is a commercial mathematics series that uses a dot notation system to provide multisensory instruction of computation skills. Using the program, students are taught to solve computational tasks in a multisensory manner that does not…

  10. Case Study: Creation of a Degree Program in Computer Security. White Paper.

    ERIC Educational Resources Information Center

    Belon, Barbara; Wright, Marie

    This paper reports on research into the field of computer security, and undergraduate degrees offered in that field. Research described in the paper reveals only one computer security program at the associate's degree level in the entire country. That program, at Texas State Technical College in Waco, is a 71-credit-hour program leading to an…

  11. Principals and Computers: Getting Started Together. Special Report: Computers in the Schools.

    ERIC Educational Resources Information Center

    Holland, Lori; Rude-Parkins, Carolyn

    1986-01-01

    Outlines five lessons learned at Roosevelt-Perry Elementary School (Kentucky) when the computer education program, Humana Computer Tutor project, was implemented. The principal was important to the success of the program. (MD)

  12. 75 FR 39575 - Privacy Act of 1974; Notice of a Computer Matching Program Between the Department of Housing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-09

    ... of a Computer Matching Program Between the Department of Housing and Urban Development (HUD) and the.... ACTION: Notice of a computer matching program between the HUD and the USDA. SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as amended by the Computer Matching and Privacy Protection...

  13. 76 FR 39119 - Privacy Act of 1974; Notice of a Computer Matching Program Between the Department of Housing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-05

    ... of a Computer Matching Program Between the Department of Housing and Urban Development (HUD) and the...: Notice of a computer matching program between the HUD and ED. SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as amended by the Computer Matching and Privacy Protection Act of 1988 (Pub...

  14. 75 FR 67755 - Privacy Act of 1974; Notice of a Computer Matching Program Between the Department of Housing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-03

    ... of a Computer Matching Program Between the Department of Housing and Urban Development (HUD) and the.... ACTION: Notice of a computer matching program between the HUD and the SBA. SUMMARY: In accordance with the Privacy Act of 1974 (5 U.S.C. 552a), as amended by the Computer Matching and Privacy Protection...

  15. Space Ultrareliable Modular Computer (SUMC) instruction simulator

    NASA Technical Reports Server (NTRS)

    Curran, R. T.

    1972-01-01

    The design principles, description, functional operation, and recommended expansion and enhancements are presented for the Space Ultrareliable Modular Computer interpretive simulator. Included as appendices are the user's manual, program module descriptions, target instruction descriptions, simulator source program listing, and a sample program printout. In discussing the design and operation of the simulator, the key problems involving host computer independence and target computer architectural scope are brought into focus.

  16. Computer Science Research Funding: How Much Is Too Little?

    DTIC Science & Technology

    2009-06-01

    Bioinformatics, parallel computing, computational biology, principles of programming, computational neuroscience, real-time and embedded systems, scientific...National Security Agency (NSA), Missile Defense Agency (MDA), and others. The various research programs have been coordinated through the DDR&E...DOD funding included only DARPA and OSD programs. FY07 and FY08 PBR funding included DARPA, NSA, some of the Services' basic and applied research

  17. Computer programs simplify optical system analysis

    NASA Technical Reports Server (NTRS)

    1965-01-01

    The optical ray-trace computer program performs geometrical ray tracing. The energy-trace program calculates the relative monochromatic flux density on a specific target area. This program uses the ray-trace program as a subroutine to generate a representation of the optical system.

  18. Enhancing Instruction through Technology.

    ERIC Educational Resources Information Center

    Greenleaf, Connie; Gee, Mary Kay

    Following an introductory section that provides a rationale for using computers in workplace literacy classes, this guide reviews six computer programs and provides activities that teachers can use with the programs in teaching workplace literacy classes. The six computer programs reviewed are as follows: "Grammar Games,""Spell It 3,""The Way…

  19. Computer-Generated Phase Diagrams for Binary Mixtures.

    ERIC Educational Resources Information Center

    Jolls, Kenneth R.; And Others

    1983-01-01

    Computer programs that generate projections of thermodynamic phase surfaces through computer graphics were used to produce diagrams representing properties of water and steam and the pressure-volume-temperature behavior of most of the common equations of state. The program, program options emphasizing thermodynamic features of interest, and…

  20. 78 FR 38724 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-27

    ... DEPARTMENT OF HOMELAND SECURITY Office of the Secretary [Docket No. DHS-2013-0006] Privacy Act of 1974; Computer Matching Program AGENCY: Department of Homeland Security/U.S. Citizenship and... Agreement that establishes a computer matching program between the Department of Homeland Security/U.S...

  1. Undergraduate computational physics projects on quantum computing

    NASA Astrophysics Data System (ADS)

    Candela, D.

    2015-08-01

    Computational projects on quantum computing suitable for students in a junior-level quantum mechanics course are described. In these projects students write their own programs to simulate quantum computers. Knowledge is assumed of introductory quantum mechanics through the properties of spin 1/2. Initial, more easily programmed projects treat the basics of quantum computation, quantum gates, and Grover's quantum search algorithm. These are followed by more advanced projects to increase the number of qubits and implement Shor's quantum factoring algorithm. The projects can be run on a typical laptop or desktop computer, using most programming languages. Supplementing resources available elsewhere, the projects are presented here in a self-contained format especially suitable for a short computational module for physics students.
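
    As an example of the kind of simulation the projects assign, the following Python sketch runs Grover's search on a full state vector of 2^n amplitudes, using the phase-flip oracle and inversion about the mean. The qubit count and marked item are arbitrary choices for illustration.

        import numpy as np

        def grover(n_qubits, marked):
            """Simulate Grover's search on the full 2^n state vector,
            the small-scale simulation style the projects describe."""
            N = 2 ** n_qubits
            state = np.full(N, 1 / np.sqrt(N))      # uniform superposition
            iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
            for _ in range(iterations):
                state[marked] *= -1                  # oracle: phase flip
                state = 2 * state.mean() - state     # inversion about the mean
            return state

        probs = np.abs(grover(6, marked=42)) ** 2
        print(probs[42])   # near 1 after ~(pi/4)*sqrt(64) = 6 iterations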

  2. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  3. Adaptation of a program for nonlinear finite element analysis to the CDC STAR 100 computer

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Ogilvie, P. L.

    1978-01-01

    The conversion of a nonlinear finite element program to the CDC STAR 100 pipeline computer is discussed. The program, called DYCAST, was developed for the crash simulation of structures. Initial results with the STAR 100 computer indicated that significant gains in computation time are possible for operations on global arrays. However, for element-level computations that do not lend themselves easily to long-vector processing, the STAR 100 was slower than comparable scalar computers. On this basis it is concluded that, for pipeline computers to affect the economic feasibility of large nonlinear analyses, it is essential that algorithms be devised to improve the efficiency of element-level computations.

  4. Student Achievement in Computer Programming: Lecture vs Computer-Aided Instruction

    ERIC Educational Resources Information Center

    Tsai, San-Yun W.; Pohl, Norval F.

    1978-01-01

    This paper discusses a study of the differences in student learning achievement, as measured by four different types of common performance evaluation techniques, in a college-level computer programming course under three teaching/learning environments: lecture, computer-aided instruction, and lecture supplemented with computer-aided instruction.…

  5. Using the Computer in Special Vocational Programs. Inservice Activities.

    ERIC Educational Resources Information Center

    Lane, Kenneth; Ward, Raymond

    This inservice manual is intended to assist vocational education teachers in using the techniques of computer-assisted instruction in special vocational education programs. Addressed in the individual units are the following topics: the basic principles of computer-assisted instruction (TRS-80 computers and typing on a computer keyboard); money…

  6. 32 CFR 806b.50 - Computer matching.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 6 2013-07-01 2013-07-01 false Computer matching. 806b.50 Section 806b.50... PROGRAM Disclosing Records to Third Parties § 806b.50 Computer matching. Computer matching programs... on forms used in applying for benefits. Coordinate computer matching statements on forms with Air...

  7. 32 CFR 806b.50 - Computer matching.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Computer matching. 806b.50 Section 806b.50... PROGRAM Disclosing Records to Third Parties § 806b.50 Computer matching. Computer matching programs... on forms used in applying for benefits. Coordinate computer matching statements on forms with Air...

  8. 32 CFR 806b.50 - Computer matching.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Computer matching. 806b.50 Section 806b.50... PROGRAM Disclosing Records to Third Parties § 806b.50 Computer matching. Computer matching programs... on forms used in applying for benefits. Coordinate computer matching statements on forms with Air...

  9. 32 CFR 806b.50 - Computer matching.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Computer matching. 806b.50 Section 806b.50... PROGRAM Disclosing Records to Third Parties § 806b.50 Computer matching. Computer matching programs... on forms used in applying for benefits. Coordinate computer matching statements on forms with Air...

  10. 32 CFR 806b.50 - Computer matching.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Computer matching. 806b.50 Section 806b.50... PROGRAM Disclosing Records to Third Parties § 806b.50 Computer matching. Computer matching programs... on forms used in applying for benefits. Coordinate computer matching statements on forms with Air...

  11. TCP/IP Interface for the Satellite Orbit Analysis Program (SOAP)

    NASA Technical Reports Server (NTRS)

    Carnright, Robert; Stodden, David; Coggi, John

    2009-01-01

    The Transmission Control Protocol/Internet Protocol (TCP/IP) interface for the Satellite Orbit Analysis Program (SOAP) provides the means for the software to establish real-time interfaces with other software. Such interfaces can operate between two programs, either on the same computer or on different computers joined by a network. The SOAP TCP/IP module employs a client/server interface where SOAP is the server and other applications can be clients. Real-time interfaces between programs offer a number of advantages over embedding all of the common functionality within a single program. One advantage is that the computational labor can be divided between the processors or computers running the separate applications. Second, each program can contribute its own domain of expertise, which the other programs can then draw on.
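
    A minimal sketch of this client/server pattern in Python follows; the port handling and the one-line text "command" are invented stand-ins, since the abstract does not spell out SOAP's actual message format.

        import socket
        import threading

        # Stand-in "server" playing SOAP's role: accept one client request
        # and return a result over the same connection.
        srv = socket.socket()
        srv.bind(("localhost", 0))       # let the OS pick a free port
        srv.listen(1)
        port = srv.getsockname()[1]

        def serve_once():
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024)
                conn.sendall(b"OK " + request)   # placeholder for real results

        threading.Thread(target=serve_once, daemon=True).start()

        # Client side: another application requesting data in real time.
        with socket.create_connection(("localhost", port), timeout=5) as client:
            client.sendall(b"GET_STATE SAT-1\n")     # hypothetical command
            print(client.recv(1024).decode())        # -> OK GET_STATE SAT-1
        srv.close()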

  12. Bladed-shrouded-disc aeroelastic analyses: Computer program updates in NASTRAN level 17.7

    NASA Technical Reports Server (NTRS)

    Gallo, A. M.; Elchuri, V.; Skalski, S. C.

    1981-01-01

    In October 1979, a computer program based on state-of-the-art compressor and structural technologies applied to bladed-shrouded-discs was developed. The program was made operational in NASTRAN Level 16. The bladed disc computer program was updated for operation in NASTRAN Level 17.7. The supersonic cascade unsteady aerodynamics routine UCAS, delivered as part of the NASTRAN Level 16 program, was recoded to improve its execution time. These improvements are presented.

  13. On the writing of programming systems for spacecraft computers.

    NASA Technical Reports Server (NTRS)

    Mathur, F. P.; Rohr, J. A.

    1972-01-01

    Consideration of the systems designed to generate programs for the increasingly complex digital computers being used on board unmanned deep-space probes. Such programming systems must accommodate the special-purpose features incorporated in the hardware. The use of higher-level language facilities in the programming system can significantly simplify the task. Computers for Mariner and for the Outer Planets Grand Tour are briefly described, as well as their programming systems. Aspects of the higher level languages are considered.

  14. AQMAN; linear and quadratic programming matrix generator using two-dimensional ground-water flow simulation for aquifer management modeling

    USGS Publications Warehouse

    Lefkoff, L.J.; Gorelick, S.M.

    1987-01-01

    AQMAN is a FORTRAN-77 computer program that helps solve a variety of aquifer management problems involving the control of groundwater hydraulics. It is intended for use with any standard mathematical programming package that uses Mathematical Programming System input format. The computer program creates the input files to be used by the optimization program. These files contain all the hydrologic information and management objectives needed to solve the management problem. Used in conjunction with a mathematical programming code, the computer program identifies the pumping or recharge strategy that achieves a user's management objective while maintaining groundwater hydraulic conditions within desired limits. The objective may be linear or quadratic, and may involve the minimization of pumping and recharge rates or of variable pumping costs. The problem may contain constraints on groundwater heads, gradients, and velocities for a complex, transient hydrologic system. Linear superposition of solutions to the transient, two-dimensional groundwater flow equation is used by the computer program in conjunction with the response matrix optimization method. A unit stress is applied at each decision well, and transient responses at all control locations are computed using a modified version of the U.S. Geological Survey two-dimensional aquifer simulation model. The program also computes discounted cost coefficients for the objective function and accounts for transient aquifer conditions. (Author's abstract)
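
    To make the response-matrix idea concrete, here is a rough sketch, assuming SciPy, of the kind of linear program such a generator hands to an optimizer; the response coefficients, drawdown limits, and demand below are invented numbers, not values from AQMAN.

        import numpy as np
        from scipy.optimize import linprog

        # Drawdown response matrix: R[i, j] is the drawdown at control
        # point i caused by a unit pumping stress at decision well j.
        R = np.array([[0.8, 0.3],
                      [0.2, 0.9]])
        max_drawdown = np.array([5.0, 4.0])   # head constraints at control points
        demand = 6.0                          # total pumping required

        # Minimize total pumping (unit costs) subject to
        #   R @ q <= max_drawdown  and  q1 + q2 >= demand,  with q >= 0.
        res = linprog(c=[1.0, 1.0],
                      A_ub=np.vstack([R, [[-1.0, -1.0]]]),
                      b_ub=np.append(max_drawdown, -demand),
                      bounds=[(0, None), (0, None)])
        print(res.x, res.fun)   # optimal pumping rates and total pumping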

  15. Computer Programs (Turbomachinery)

    NASA Technical Reports Server (NTRS)

    1978-01-01

    NASA computer programs are extensively used in the design of industrial equipment. Available from the Computer Software Management and Information Center (COSMIC) at the University of Georgia, these programs are employed as analysis tools in design, test and development processes, providing savings in time and money. For example, two NASA computer programs are used daily in the design of turbomachinery by Delaval Turbine Division, Trenton, New Jersey. The company uses the NASA spline interpolation routine for analysis of turbine blade vibration and the performance of compressors and condensers. A second program, the NASA print plot routine, analyzes turbine rotor response and produces graphs for project reports. The photos show examples of Delaval test operations in which the computer programs play a part. In the large photo below, a 24-inch turbine blade is undergoing test; in the smaller photo, a steam turbine rotor is being prepared for stress measurements under actual operating conditions; the "spaghetti" is wiring for test instrumentation.

  16. Computer aided reliability, availability, and safety modeling for fault-tolerant computer systems with commentary on the HARP program

    NASA Technical Reports Server (NTRS)

    Shooman, Martin L.

    1991-01-01

    Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer-aided design programs. In response to this need, NASA has developed a suite of computer-aided reliability modeling programs beginning with CARE 3 and including a group of new programs such as HARP, HARP-PC, the Reliability Analysts Workbench (a combination of the model solvers SURE, STEM, and PAWS with the common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied, and how well users can model systems with it is investigated. One important objective is to study how user-friendly the program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students who used HARP in two graduate courses are described. Some brief comparisons were made with the ARIES program, which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course, no answer can be more accurate than the fidelity of the model; an Appendix is therefore included that discusses modeling accuracy. A broad viewpoint is taken, and all problems that occurred in the use of HARP are discussed, including computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, and accuracy problems.

  17. The Historian as Computer Programmer.

    ERIC Educational Resources Information Center

    Campion, Martin C.

    1988-01-01

    Discusses two types of computer programs--grade book programs and classroom simulations--and how they are used by teachers. Using instructor-developed programs as examples, Campion describes his experiences as a programmer and encourages his fellow historians to investigate the possibility of programming for themselves. (GEA)

  18. Liability for Personal Injury Caused by Defective Medical Computer Programs

    PubMed Central

    Brannigan, Vincent M.

    1980-01-01

    Defective medical computer programs can cause personal injury. Financial responsibility for the injury under tort law will turn on several factors: whether the program is a product or a service, what types of defect exist in the product, and who produced the program. The factors involved in making these decisions are complex, but knowledge of the relevant issues can assist computer personnel in avoiding liability.

  19. A Computer Based Education (CBE) Program for Middle School Mathematics Intervention

    ERIC Educational Resources Information Center

    Gulley, Bill

    2009-01-01

    A Computer Based Education (CBE) program for intervention mathematics was developed, used, and modified over a period of three years in a computer lab at an Arizona Title I middle school. The program is described along with a rationale for the need, design, and use of such a program. Data was collected in the third year and results of the program…

  20. Preschool Cookbook of Computer Programming Topics

    ERIC Educational Resources Information Center

    Morgado, Leonel; Cruz, Maria; Kahn, Ken

    2010-01-01

    A common problem in computer programming use for education in general, not simply as a technical skill, is that children and teachers find themselves constrained by what is possible through limited expertise in computer programming techniques. This is particularly noticeable at the preliterate level, where constructs tend to be limited to…

  1. 77 FR 5017 - Agency Information Collection Activities; Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-01

    ... 30 percent for computer programming, 20 percent for attorney services, 30 percent for skilled... workers, 25 percent for computer programming, and 20 percent for management time. (d) Dispute resolution...), or a total cost of $421,200. \\12\\ The blended rate is 40 percent for computer programming, 10 percent...

  2. Student Performance in Computer-Assisted Instruction in Programming.

    ERIC Educational Resources Information Center

    Friend, Jamesine E.; And Others

    A computer-assisted instructional system to teach college students the computer language, AID (Algebraic Interpretive Dialogue), two control programs, and data collected by the two control programs are described. It was found that although first response errors were often those of AID syntax, such errors were easily corrected. Secondly, while…

  3. Using Problem Solving to Teach a Programming Language.

    ERIC Educational Resources Information Center

    Milbrandt, George

    1995-01-01

    Computer studies courses should incorporate as many computer concepts and programming language experiences as possible. A gradual increase in problem difficulty will help the student to understand various computer concepts, and the programming language's syntax and structure. A sidebar provides two examples of how to establish a learning…

  4. Case Studies of Liberal Arts Computer Science Programs

    ERIC Educational Resources Information Center

    Baldwin, D.; Brady, A.; Danyluk, A.; Adams, J.; Lawrence, A.

    2010-01-01

    Many undergraduate liberal arts institutions offer computer science majors. This article illustrates how quality computer science programs can be realized in a wide variety of liberal arts settings by describing and contrasting the actual programs at five liberal arts colleges: Williams College, Kalamazoo College, the State University of New York…

  5. Assessment of Examinations in Computer Science Doctoral Education

    ERIC Educational Resources Information Center

    Straub, Jeremy

    2014-01-01

    This article surveys the examination requirements for attaining degree candidate (candidacy) status in computer science doctoral programs at all of the computer science doctoral granting institutions in the United States. It presents a framework for program examination requirement categorization, and categorizes these programs by the type or types…

  6. The Outlook for Computer Professions: 1985 Rewrites the Program.

    ERIC Educational Resources Information Center

    Drake, Larry

    1986-01-01

    The author states that graduates of junior college programs who learn COBOL will continue to find jobs, but employers will increasingly seek college graduates when filling positions for computer programmers and systems analysts. Areas of growth for computer applications (services, military, data communications, and artificial intelligence) are…

  7. BASIC Language Flow Charting Program (BASCHART). Technical Note 3-82.

    ERIC Educational Resources Information Center

    Johnson, Charles C.; And Others

    This document describes BASCHART, a computer aid designed to decipher and automatically flow chart computer program logic; it also provides the computer code necessary for this process. Developed to reduce the labor intensive manual process of producing a flow chart for an undocumented or inadequately documented program, BASCHART will…

  8. Computer Programming Goes Back to School

    ERIC Educational Resources Information Center

    Kafai, Yasmin B.; Burke, Quinn

    2013-01-01

    We are witnessing a remarkable comeback of programming. Current initiatives to promote computational thinking and to broaden participation in computing signal a renewed interest to bring programming back into K-12 schools and help develop children as producers and not simply consumers of digital media. This essay explores the re-emergence of…

  9. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 16 2012-07-01 2012-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...

  10. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 16 2014-07-01 2014-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...

  11. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 16 2013-07-01 2013-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...

  12. 78 FR 3474 - Privacy Act of 1974; Computer Matching Program Between the Office Of Personnel Management and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-16

    ... Security benefit information to OPM via direct computer link for the administration of certain programs by... OFFICE OF PERSONNEL MANAGEMENT Privacy Act of 1974; Computer Matching Program Between the Office Of Personnel Management and Social Security Administration AGENCY: Office of Personnel Management...

  13. Correlates of Success in Introductory Programming: A Study with Middle School Students

    ERIC Educational Resources Information Center

    Qian, Yizhou; Lehman, James D.

    2016-01-01

    The demand for computing professionals in the workplace has led to increased attention to computer science education, and introductory computer science courses have been introduced at different levels of education. This study investigated the relationship between gender, academic performance in non-programming subjects, and programming learning…

  14. Computer Rehabilitation Training for the Severely Disabled.

    ERIC Educational Resources Information Center

    Louisiana State Univ., Baton Rouge.

    The Computer Rehabilitation Training Program for the Severely Disabled is a job-oriented training program to prepare physically handicapped persons to become computer programmers and analysts. The program is operated by: a nonprofit organization of Baton Rouge-area business people interested in data processing; the Department of Social Services,…

  15. Learning Motivation in E-Learning Facilitated Computer Programming Courses

    ERIC Educational Resources Information Center

    Law, Kris M. Y.; Lee, Victor C. S.; Yu, Y. T.

    2010-01-01

    Computer programming skills constitute one of the core competencies that graduates from many disciplines, such as engineering and computer science, are expected to possess. Developing good programming skills typically requires students to do a lot of practice, which cannot be sustained unless they are adequately motivated. This paper reports a…

  16. CAPSAS: Computer Assisted Program for the Selection of Appropriate Statistics.

    ERIC Educational Resources Information Center

    Shermis, Mark D.; Albert, Susan L.

    A computer-assisted program has been developed for the selection of statistics or statistical techniques by both students and researchers. Based on Andrews, Klem, Davidson, O'Malley and Rodgers "A Guide for Selecting Statistical Techniques for Analyzing Social Science Data," this FORTRAN-compiled interactive computer program was…

  17. Computer program for assessing the theoretical performance of a three dimensional inlet

    NASA Technical Reports Server (NTRS)

    Agnone, A. M.; Kung, F.

    1972-01-01

    A computer program for determining the theoretical performance of a three dimensional inlet is presented. An analysis for determining the capture area, ram force, spillage force, and surface pressure force is presented, along with the necessary computer program. A sample calculation is also included.

  18. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...

  19. Atmospheric transmission computer program CP

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Barnett, T. L.; Korb, C. L.; Hanby, W.; Dillinger, A. E.

    1974-01-01

    A computer program is described which allows for calculation of the effects of carbon dioxide, water vapor, methane, ozone, carbon monoxide, and nitrous oxide on earth resources remote sensing techniques. A flow chart of the program and operating instructions are provided. Comparisons are made between the atmospheric transmission obtained from laboratory and spacecraft spectrometer data and that obtained from a computer prediction using a model atmosphere and radiosonde data. Limitations of the model atmosphere are discussed. The computer program listings, input card formats, and sample runs for both radiosonde data and laboratory data are included.
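
    The core of such a calculation is a Beer-Lambert transmittance along the line of sight; the toy sketch below illustrates the idea, with absorption coefficients and column amounts that are invented placeholders, not values from the report.

        import numpy as np

        # gas: (absorption coefficient k, absorber column amount u),
        # in units chosen so that k * u is a dimensionless optical depth
        absorbers = {
            "CO2": (0.12, 2.0),
            "H2O": (0.35, 1.5),
            "CH4": (0.05, 0.3),
        }

        # Beer-Lambert law: T = exp(-sum_i k_i * u_i)
        optical_depth = sum(k * u for k, u in absorbers.values())
        transmittance = np.exp(-optical_depth)
        print(f"total transmittance: {transmittance:.3f}")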

  20. Additional development of the XTRAN3S computer program

    NASA Technical Reports Server (NTRS)

    Borland, C. J.

    1989-01-01

    Additional developments and enhancements to the XTRAN3S computer program, a code for calculating steady and unsteady aerodynamics and the associated aeroelastic solutions for 3-D wings in the transonic flow regime, are described. Algorithm improvements for the XTRAN3S program were provided, including an implicit finite difference scheme to increase the allowable time step and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and updates for version control are provided.

  1. A data reduction technique and associated computer program for obtaining vehicle attitudes with a single onboard camera

    NASA Technical Reports Server (NTRS)

    Bendura, R. J.; Renfroe, P. G.

    1974-01-01

    A detailed discussion of the application of a previously developed method to determine vehicle flight attitude using a single camera onboard the vehicle is presented, with emphasis on the digital computer program format and data reduction techniques. Application requirements include film and earth-related coordinates of at least two landmarks (or features), location of the flight vehicle with respect to the earth, and camera characteristics. Included in this report are a detailed discussion of the program input and output format, a computer program listing, a discussion of modifications made to the initial method, a step-by-step basic data reduction procedure, and several example applications. The computer program is written in the FORTRAN IV language for the Control Data 6000 series digital computer.

  2. An IBM 370 assembly language program verifier

    NASA Technical Reports Server (NTRS)

    Maurer, W. D.

    1977-01-01

    The paper describes a program written in SNOBOL which verifies the correctness of programs written in assembly language for the IBM 360 and 370 series of computers. The motivation for using assembly language as a source language for a program verifier was the realization that many errors in programs are caused by misunderstanding or ignorance of the characteristics of specific computers. The proof of correctness of a program written in assembly language must take these characteristics into account. The program has been compiled and is currently running at the Center for Academic and Administrative Computing of The George Washington University.

  3. Simulating smokers' acceptance of modifications in a cessation program.

    PubMed Central

    Spoth, R

    1992-01-01

    Recent research has underscored the importance of assessing barriers to smokers' acceptance of cessation programs. This paper illustrates the use of computer simulations to gauge smokers' response to program modifications which may produce barriers to participation. It also highlights methodological issues encountered in conducting this work. Computer simulations were based on conjoint analysis, a consumer research method which enables measurement of smokers' relative preference for various modifications of cessation programs. Results from two studies are presented in this paper. The primary study used a randomly selected sample of 218 adult smokers who participated in a computer-assisted phone interview. Initially, the study assessed smokers' relative utility rating of 30 features of cessation programs. Utility data were used in computer-simulated comparisons of a low-cost, self-help oriented program under development and five other existing programs. A baseline version of the program under development and two modifications (for example, use of a support group with a higher level of cost) were simulated. Both the baseline version and modifications received a favorable response vis-à-vis comparison programs. Modifications requiring higher program costs were, however, associated with moderately reduced levels of favorable consumer response. The second study used a sample of 70 smokers who responded to an expanded set of smoking cessation program features focusing on program packaging. This secondary study incorporated in-person, computer-assisted interviews at a shopping mall, with smokers viewing an artist's mock-up of various program options on display. A similar pattern of responses to simulated program modifications emerged, with monetary cost apparently playing a key role. The significance of conjoint-based computer simulation as a tool in program development or dissemination, salient methodological issues, and implications for further research are discussed. PMID:1738813

  4. Simulating smokers' acceptance of modifications in a cessation program.

    PubMed

    Spoth, R

    1992-01-01

    Recent research has underscored the importance of assessing barriers to smokers' acceptance of cessation programs. This paper illustrates the use of computer simulations to gauge smokers' response to program modifications which may produce barriers to participation. It also highlights methodological issues encountered in conducting this work. Computer simulations were based on conjoint analysis, a consumer research method which enables measurement of smokers' relative preference for various modifications of cessation programs. Results from two studies are presented in this paper. The primary study used a randomly selected sample of 218 adult smokers who participated in a computer-assisted phone interview. Initially, the study assessed smokers' relative utility rating of 30 features of cessation programs. Utility data were used in computer-simulated comparisons of a low-cost, self-help oriented program under development and five other existing programs. A baseline version of the program under development and two modifications (for example, use of a support group with a higher level of cost) were simulated. Both the baseline version and modifications received a favorable response vis-à-vis comparison programs. Modifications requiring higher program costs were, however, associated with moderately reduced levels of favorable consumer response. The second study used a sample of 70 smokers who responded to an expanded set of smoking cessation program features focusing on program packaging. This secondary study incorporated in-person, computer-assisted interviews at a shopping mall, with smokers viewing an artist's mock-up of various program options on display. A similar pattern of responses to simulated program modifications emerged, with monetary cost apparently playing a key role. The significance of conjoint-based computer simulation as a tool in program development or dissemination, salient methodological issues, and implications for further research are discussed.

  5. 10 CFR 961.11 - Text of the contract.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... program including information on cost projections, project plans and progress reports. 5. (a) Beginning on...-type documents or computer software (including computer programs, computer software data bases, and computer software documentation). Examples of technical data include research and engineering data...

  6. 10 CFR 961.11 - Text of the contract.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... program including information on cost projections, project plans and progress reports. 5. (a) Beginning on...-type documents or computer software (including computer programs, computer software data bases, and computer software documentation). Examples of technical data include research and engineering data...

  7. Promoting healthy computer use among middle school students: a pilot school-based health promotion program.

    PubMed

    Ciccarelli, Marina; Portsmouth, Linda; Harris, Courtenay; Jacobs, Karen

    2012-01-01

    Introduction of notebook computers in many schools has become integral to learning. This has increased students' screen-based exposure and the potential risks to physical and visual health. Unhealthy computing behaviours include frequent and long durations of exposure; awkward postures due to inappropriate furniture and workstation layout; and ignoring computer-related discomfort. The aim is to describe the framework for a planned school-based health promotion program to encourage healthy computing behaviours among middle school students. This planned program uses a community-based participatory research approach. Students in Year 7 in 2011 at a co-educational middle school, their parents, and teachers have been recruited. Baseline data were collected on students' knowledge of computer ergonomics, current notebook exposure, and attitudes towards healthy computing behaviours; on teachers' self-perceived competence to promote healthy notebook use among students; and on what education they wanted. The health promotion program is being developed by an inter-professional team in collaboration with students, teachers and parents to embed concepts of ergonomics education in relevant school activities and school culture. End-of-year changes in reported and observed student computing behaviours will be used to determine the effectiveness of the program. Building a body of evidence regarding physical health benefits to students from this school-based ergonomics program can guide policy development on the healthy use of computers within children's educational environments.

  8. Computational Understanding: Analysis of Sentences and Context

    DTIC Science & Technology

    1974-05-01

    Computer Science Department, Stanford, California. ...there is the need for programs that can respond in useful ways to information expressed in a natural language. However, a computational understanding...buying structure because "Mary" appears where it does. But the time for analysis was rarely over five seconds of computer time when the Lisp program…

  9. GEO3D - Three-Dimensional Computer Model of a Ground Source Heat Pump System

    DOE Data Explorer

    James Menart

    2013-06-07

    This file is the setup file for the computer program GEO3D, written by Jim Menart to simulate vertical wells in conjunction with a heat pump for ground source heat pump (GSHP) systems. It is a detailed three-dimensional computer model that produces detailed heat transfer and temperature field information for a vertical GSHP system.

  10. Grand Challenges 1993: High Performance Computing and Communications. A Report by the Committee on Physical, Mathematical, and Engineering Sciences. The FY 1993 U.S. Research and Development Program.

    ERIC Educational Resources Information Center

    Office of Science and Technology Policy, Washington, DC.

    This report presents the United States research and development program for 1993 for high performance computing and computer communications (HPCC) networks. The first of four chapters presents the program goals and an overview of the federal government's emphasis on high performance computing as an important factor in the nation's scientific and…

  11. The SURE reliability analysis program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
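
    For intuition about the underlying model class (though not SURE's bounding algorithm itself), here is a toy Markov reliability model whose "death state" probability is found by propagating the state distribution; all transition probabilities below are invented for illustration.

        import numpy as np

        # States: 0 = operational, 1 = degraded, 2 = failed (absorbing).
        P = np.array([
            [0.990, 0.009, 0.001],
            [0.000, 0.950, 0.050],
            [0.000, 0.000, 1.000],
        ])

        state = np.array([1.0, 0.0, 0.0])   # start fully operational
        for _ in range(1000):               # propagate 1000 time steps
            state = state @ P
        print(f"P(reaching the death state) = {state[2]:.6f}")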

  12. The SURE Reliability Analysis Program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  13. The engineering design integration (EDIN) system. [digital computer program complex

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.

    1974-01-01

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

  14. Student Use of Physics to Make Sense of Incomplete but Functional VPython Programs in a Lab Setting

    NASA Astrophysics Data System (ADS)

    Weatherford, Shawn A.

    2011-12-01

    Computational activities in Matter & Interactions, an introductory calculus-based physics course, have the instructional goal of providing students with the experience of applying the same small set of fundamental principles to model a wide range of physical systems. However, there are significant instructional challenges in having students build computer programs under limited time constraints, especially students who are unfamiliar with programming languages and concepts. Prior attempts at designing effective computational activities were successful at having students ultimately build working VPython programs under the tutelage of experienced teaching assistants in a studio lab setting. A pilot study revealed that students who completed these computational activities had significant difficulty repeating the exact same tasks and, further, had difficulty predicting the animation that would be produced by the example program after interpreting the program code. This study explores the interpretation and prediction tasks as part of an instructional sequence where students are asked to read and comprehend a functional but incomplete program. Rather than asking students to begin their computational tasks by modifying program code, we explicitly ask students to interpret an existing program that is missing key lines of code. The missing lines of code correspond to the algebraic form of fundamental physics principles or the calculation of forces which would exist between analogous physical objects in the natural world. Students are then asked to draw a prediction of what they would see in the simulation produced by the VPython program and ultimately run the program to evaluate their prediction. This study specifically looks at how the participants use physics while interpreting the program code and creating a whiteboard prediction. It also examines how students evaluate their understanding of the program and modification goals at the beginning of the modification task. While working in groups over the course of a semester, study participants were recorded while they completed three activities using these incomplete programs. Analysis of the video data showed that study participants had little difficulty interpreting physics quantities, generating a prediction, or determining how to modify the incomplete program. Participants did not base their predictions solely on the information in the incomplete program. When participants tried to predict the motion of the objects in the simulation, many turned to their knowledge of how the system would evolve if it represented an analogous real-world physical system. For example, participants attributed the real-world behavior of springs to helix objects even though the program did not include calculations for the spring to exert a force when stretched. Participants rarely interpreted lines of code in the computational loop during the first computational activity, but this changed during later computational activities, with most participants using their physics knowledge to interpret the computational loop. Computational activities in the Matter & Interactions curriculum were revised in light of these findings to include an instructional sequence of tasks to build a comprehension of the example program. The modified activities also ask students to create an additional whiteboard prediction for the time evolution of the real-world phenomena which the example program will eventually model.
This thesis shows how comprehension tasks identified by Palincsar and Brown (1984) as effective in improving reading comprehension are also effective in helping students apply their physics knowledge to interpret a computer program that attempts to model a real-world phenomenon and to identify errors in their understanding of the use, or omission, of fundamental physics principles in a computational model.
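
    For readers unfamiliar with the course's computational style, the following plain-Python sketch shows the kind of iterative loop the activities use, with the spring-force line playing the role of the "missing" code students supply; the visualization is omitted and all values are illustrative.

        # 1-D mass hanging from a spring, Matter & Interactions style:
        # update momentum from the net force, then position from momentum.
        m, k, L0, g = 0.5, 8.0, 0.3, 9.8   # mass, spring constant, relaxed length, gravity
        y, p = -0.4, 0.0                   # position below the support; momentum
        dt = 0.01

        for _ in range(300):
            s = abs(y) - L0                # current stretch of the spring
            F_spring = k * s               # the "missing" force-law line
            F_net = F_spring - m * g       # up is positive; spring pulls up when stretched
            p = p + F_net * dt             # momentum principle
            y = y + (p / m) * dt           # position update

        print(f"final position: {y:.3f} m")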

  15. Care 3, phase 1, volume 2

    NASA Technical Reports Server (NTRS)

    Stiffler, J. J.; Bryant, L. A.; Guccione, L.

    1979-01-01

    A computer program was developed as a general-purpose reliability tool for fault-tolerant avionics systems. The computer program requirements, together with several appendices containing computer printouts, are presented.

  16. Rocket exhaust plume computer program improvement. Volume 1: Summary: Method of characteristics nozzle and plume programs

    NASA Technical Reports Server (NTRS)

    Ratliff, A. W.; Smith, S. D.; Penny, N. M.

    1972-01-01

    A summary is presented of the various documents that discuss and describe the computer programs and analysis techniques which are available for rocket nozzle and exhaust plume calculations. The basic method of characteristics program is discussed, along with such auxiliary programs as the plume impingement program, the plot program and the thermochemical properties program.

  17. GEODYN system support program, volume 4. [computer program for trajectory analysis of artificial satellites

    NASA Technical Reports Server (NTRS)

    Mullins, N. E.

    1972-01-01

    The GEODYN Orbit Determination and Geodetic Parameter Estimation System consists of a set of computer programs designed to determine and analyze definitive satellite orbits and their associated geodetic and measurement parameters. This manual describes the Support Programs used by the GEODYN System. The mathematics and programming descriptions are detailed. The operational procedures of each program are presented. GEODYN ancillary analysis programs may be grouped into three different categories: (1) orbit comparison - DELTA, (2) data analysis using reference orbits - GEORGE, and (3) pass geometry computations - GROUNDTRACK. All three programs use one or more tapes written by the GEODYN program in either a data reduction or orbit generator run.

  18. Near-term hybrid vehicle program, phase 1. Appendix B: Design trade-off studies report. Volume 3: Computer program listings

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A description and listing is presented of two computer programs: Hybrid Vehicle Design Program (HYVELD) and Hybrid Vehicle Simulation Program (HYVEC). Both of the programs are modifications and extensions of similar programs developed as part of the Electric and Hybrid Vehicle System Research and Development Project.

  19. Fluid dynamics computer programs for NERVA turbopump

    NASA Technical Reports Server (NTRS)

    Brunner, J. J.

    1972-01-01

    During the design of the NERVA turbopump, numerous computer programs were developed for the analyses of fluid dynamic problems within the machine. Program descriptions, example cases, user instructions, and listings for the majority of these programs are presented.

  20. Use of the Computer for Research on Instruction and Student Understanding in Physics.

    NASA Astrophysics Data System (ADS)

    Grayson, Diane Jeanette

    This dissertation describes an investigation of how the computer may be utilized to perform research on instruction and on student understanding in physics. The research was conducted within three content areas: kinematics, waves and dynamics. The main focus of the research on instruction was the determination of factors needed for a computer program to be instructionally effective. The emphasis in the research on student understanding was the identification of specific conceptual and reasoning difficulties students encounter with the subject matter. Most of the research was conducted using the computer-based interview, a technique developed during the early part of the work, conducted within the domain of kinematics. In a computer-based interview, a student makes a prediction about how a particular system will behave under given circumstances, observes a simulation of the event on a computer screen, and then is asked by an interviewer to explain any discrepancy between prediction and observation. In the course of the research, a model was developed for producing educational software. The model has three important components: (i) research on student difficulties in the content area to be addressed, (ii) observations of students using the computer program, and (iii) consequent program modification. This model was used to guide the development of an instructional computer program dealing with graphical representations of transverse pulses. Another facet of the research involved the design of a computer program explicitly for the purposes of research. A computer program was written that simulates a modified Atwood's machine. The program was then used in computer-based interviews and proved to be an effective means of probing student understanding of dynamics concepts. In order to ascertain whether or not the student difficulties identified were peculiar to the computer, laboratory-based interviews with real equipment were also conducted. The laboratory-based interviews were designed to parallel the computer-based interviews as closely as possible. The results of both types of interviews are discussed in detail. The dissertation concludes with a discussion of some of the benefits of using the computer in physics instruction and physics education research. Attention is also drawn to some of the limitations of the computer as a research instrument or instructional device.

  1. An Integrated Development Environment for Adiabatic Quantum Programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; McCaskey, Alex; Bennink, Ryan S

    2014-01-01

    Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.
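
    As a sense of what an adiabatic quantum program specifies, here is a minimal numerical sketch, assuming NumPy and toy 2x2 Hamiltonians (not JADE's API), that interpolates H(s) = (1-s)H0 + sH1 and tracks the spectral gap that governs how slowly the schedule must run.

        import numpy as np

        H0 = np.array([[0.0, 1.0], [1.0, 0.0]])   # initial (driver) Hamiltonian
        H1 = np.array([[1.0, 0.0], [0.0, -1.0]])  # final (problem) Hamiltonian

        gaps = []
        for s in np.linspace(0.0, 1.0, 101):
            H = (1 - s) * H0 + s * H1              # linear interpolation schedule
            evals = np.linalg.eigvalsh(H)          # eigenvalues in ascending order
            gaps.append(evals[1] - evals[0])       # ground to first-excited gap

        print(f"minimum spectral gap along the schedule: {min(gaps):.4f}")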

  2. The Road to Computer Literacy. Part V: Objectives and Activities for Grades 10-12.

    ERIC Educational Resources Information Center

    Bitter, Gary

    1983-01-01

    Presents computer-oriented activities in computer awareness and programing for students in grades 10-12. Intended for use by teachers of all disciplines, activities include such topics as prediction, interpretation and generalization of data, computer systems, PASCAL and PILOT programing, sampling techniques, computer survival, invasion of…

  3. 38 CFR 21.8074 - Computing the period for vocational training program participation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Computing the period for... Vocational Training § 21.8074 Computing the period for vocational training program participation. (a) Computing the participation period. To compute the number of months and days of an eligible child's...

  4. Benchmark Lisp And Ada Programs

    NASA Technical Reports Server (NTRS)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests the efficiency with which a computer executes routines in each language. Available for any computer equipped with a validated Ada compiler and/or Common Lisp system.

  5. Computer Languages: A Practical Guide to the Chief Programming Languages.

    ERIC Educational Resources Information Center

    Sanderson, Peter C.

    All the most commonly-used high-level computer languages are discussed in this book. An introductory discussion provides an overview of the basic components of a digital computer, the general planning of a computer programing problem, and the various types of computer languages. Each chapter is self-contained, emphasizes those features of a…

  6. Automated procedure for developing hybrid computer simulations of turbofan engines. Part 1: General description

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Krosel, S. M.; Bruton, W. M.

    1982-01-01

    A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all the calculations and data manipulations needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. Part I contains a general discussion of the methodology, describes a test case, and presents comparisons between hybrid simulation and specified engine performance data. Part II, a companion document, contains documentation, in the form of computer printouts, for the test case.

  7. A system for the input and storage of data in the Besm-6 digital computer

    NASA Technical Reports Server (NTRS)

    Schmidt, K.; Blenke, L.

    1975-01-01

    Computer programs used for the decoding and storage of large volumes of data on the BESM-6 computer are described. The following factors are discussed: the programming control language allows the programs to be run as part of a modular programming system used in data processing; data control is executed in a hierarchically built file on magnetic tape with sequential index storage; and the programs are not dependent on the structure of the data.

  8. 77 FR 38610 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-28

    ... DEPARTMENT OF EDUCATION Privacy Act of 1974; Computer Matching Program AGENCY: Department of Education. ACTION: Notice--Computer matching agreement between the Department of Education and the Department of Defense. SUMMARY: This document provides notice of the continuation of the computer matching...

  9. Computers in Engineering Teaching.

    ERIC Educational Resources Information Center

    Rushby, N. J.

    This bibliography cites 26 books, papers, and reports dealing with various uses of computers in engineering education, and describes several computer programs available for use in teaching aeronautical, chemical, civil, electrical and electronic, mechanical, and nuclear engineering. Each computer program entry is presented by name, author,…

  10. Influence of direct computer experience on older adults' attitudes toward computers.

    PubMed

    Jay, G M; Willis, S L

    1992-07-01

    This research examined whether older adults' attitudes toward computers became more positive as a function of computer experience. The sample comprised 101 community-dwelling older adults aged 57 to 87. The intervention involved a 2-week computer training program in which subjects learned to use a desktop publishing software program. A multidimensional computer attitude measure was used to assess differential attitude change and maintenance of change following training. The results indicated that older adults' computer attitudes are modifiable and that direct computer experience is an effective means of change. Attitude change as a function of training was found for the attitude dimensions targeted by the intervention program: computer comfort and efficacy. In addition, maintenance of attitude change was established for at least two weeks following training.

  11. Improved neutron activation prediction code system development

    NASA Technical Reports Server (NTRS)

    Saqui, R. M.

    1971-01-01

    Two integrated neutron activation prediction code systems have been developed by modifying and integrating existing computer programs to perform the necessary computations to determine neutron induced activation gamma ray doses and dose rates in complex geometries. Each of the two systems is comprised of three computational modules. The first program module computes the spatial and energy distribution of the neutron flux from an input source and prepares input data for the second program which performs the reaction rate, decay chain and activation gamma source calculations. A third module then accepts input prepared by the second program to compute the cumulative gamma doses and/or dose rates at specified detector locations in complex, three-dimensional geometries.

  12. RAWINPROC: Computer program for decommutating, interpreting, and interpolating Rawinsonde meteorological balloon sounding data

    NASA Technical Reports Server (NTRS)

    Staffanson, F. L.

    1981-01-01

    The FORTRAN computer program RAWINPROC accepts output from NASA Wallops computer program METPASS1 and produces input for NASA computer program 3.0.0700 (ECC-PRD). The three parts together form a software system for the completely automatic reduction of standard RAWINSONDE sounding data. RAWINPROC pre-edits the 0.1-second data, including time-of-day, azimuth, elevation, and sonde-modulated tone frequency; condenses the data according to successive dwells of the tone frequency; decommutates the condensed data into the proper channels (temperature, relative humidity, high and low references); determines the running baroswitch contact number and computes the associated pressure altitudes; and interpolates the data appropriate for input to ECC-PRD.
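
    The interpolation step is essentially resampling irregularly timed channel dwells onto a common time base; a tiny sketch follows, with invented sample values rather than real sounding data.

        import numpy as np

        # Times (s) at which the commutator dwelt on the temperature channel,
        # and the decommutated temperature readings (deg C) at those times.
        t_temp = np.array([0.0, 2.1, 4.0, 6.2])
        temp   = np.array([15.0, 14.6, 14.1, 13.5])

        # Resample onto a regular time base shared by all channels.
        t_common = np.arange(0.0, 6.0, 1.0)
        temp_on_grid = np.interp(t_common, t_temp, temp)
        print(temp_on_grid)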

  13. An adaptive maneuvering logic computer program for the simulation of one-on-one air-to-air combat. Volume 1: General description

    NASA Technical Reports Server (NTRS)

    Burgin, G. H.; Fogel, L. J.; Phelps, J. P.

    1975-01-01

    A technique for computer simulation of air combat is described. Volume 1 describes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers with performance models of specific fighter aircraft. In the batch processing version, the flight paths of two aircraft engaged in interactive aerial combat and controlled by the same logic are computed. The real-time version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in the Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.

  14. A tangible programming tool for children to cultivate computational thinking.

    PubMed

    Wang, Danli; Wang, Tingting; Liu, Zhen

    2014-01-01

    Game and creation are activities which have good potential for computational thinking skills. In this paper we present T-Maze, an economical tangible programming tool for children aged 5-9 to build computer programs in maze games by placing wooden blocks. Through the use of computer vision technology, T-Maze provides a live programming interface with real-time graphical and voice feedback. We conducted a user study with 7 children using T-Maze to play two levels of maze-escape games and create their own mazes. The results show that T-Maze is not only easy to use, but also has the potential to help children cultivate computational thinking like abstraction, problem decomposition, and creativity.

  15. An experimental and theoretical investigation of deposition patterns from an agricultural airplane

    NASA Technical Reports Server (NTRS)

    Morris, D. J.; Croom, C. C.; Vandam, C. P.; Holmes, B. J.

    1984-01-01

    A flight test program has been conducted with a representative agricultural airplane to provide data for validating a computer program model which predicts aerially applied particle deposition. Test procedures and the data from this test are presented and discussed. The computer program features are summarized, and comparisons of predicted and measured particle deposition are presented. Applications of the computer program for spray pattern improvement are illustrated.

  16. Increasing productivity of the McAuto CAD/CAE system by user-specific applications programming

    NASA Technical Reports Server (NTRS)

    Plotrowski, S. M.; Vu, T. H.

    1985-01-01

    Significant improvements in the productivity of the McAuto Computer-Aided Design/Computer-Aided Engineering (CAD/CAE) system were achieved by applications programming using the system's own Graphics Interactive Programming language (GRIP) and the interface capabilities with the main computer on which the system resides. The GRIP programs for creating springs, bar charts, finite element model representations and aiding management planning are presented as examples.

  17. Digital optical computers at the optoelectronic computing systems center

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  18. COSMIC: A catalog of selected computer programs

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Information is presented on various computer programs developed in the space program which are now available to the public. Many programs from the Department of Defense and selected software from other government agencies are also offered. Over 1500 programs in almost every technical or managerial discipline are available.

  19. Computer Literacy of Turkish Preservice Teachers in Different Teacher Training Programs

    ERIC Educational Resources Information Center

    Ozsevgec, Tuncay

    2011-01-01

    This paper reports on an investigation into sophomore and senior preservice teachers' computer literacy in different teacher training programs, and into the relationship between grade level and teacher training program in terms of computer literacy. The study used case study research methodology, and the sample consisted of 276…

  20. A Computer Course for Business Students: Teacher's Guide.

    ERIC Educational Resources Information Center

    Waterhouse, Ann

    This teacher's guide is for a course designed to teach business students the fundamentals of the BASIC language and computer programming using a series of business-oriented programs. Each lesson contains an introduction, flow charts, and computer programs. The six lesson topics are print-out and format control, count-average, withholding tax…

  1. Automated computer grading of hardwood lumber

    Treesearch

    P. Klinkhachorn; J.P. Franklin; Charles W. McMillin; R.W. Conners; H.A. Huber

    1988-01-01

    This paper describes an improved computer program to grade hardwood lumber. The program was created as part of a system to automate various aspects of the hardwood manufacturing industry. It enhances previous efforts by considering both faces of the board and provides easy application of species-dependent rules. The program can be readily interfaced with a computer...

  2. 77 FR 33547 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Centers for Medicare and Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-06

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2012-0015] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Centers for Medicare and Medicaid Services (CMS))--Match Number 1094 AGENCY: Social Security Administration (SSA). ACTION: Notice of a new computer matching program that will expire...

  3. Computer program user's manual for advanced general aviation propeller study

    NASA Technical Reports Server (NTRS)

    Worobel, R.

    1972-01-01

    A user's manual is presented for a computer program for predicting the performance (static, flight, and reverse), noise, weight and cost of propellers for advanced general aviation aircraft of the 1980 time period. Complete listings of this computer program with detailed instructions and samples of input and output are included.

  4. Analysis of rocket engine injection combustion processes

    NASA Technical Reports Server (NTRS)

    Salmon, J. W.; Saltzman, D. H.

    1977-01-01

    Mixing methodology improvements were made to the JANNAF DER and CICM injection/combustion analysis computer programs. The ZOM plane prediction model was further developed for installation into the new standardized DER computer program, and an approach for developing an intra-element mixing model for gas/liquid coaxial injection elements was recommended for possible future incorporation into the CICM computer program.

  5. Developing Computer Programming Concepts and Skills via Technology-Enriched Language-Art Projects: A Case Study

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2010-01-01

    Teaching computer programming to young children has been considered difficult because of its abstract and complex nature. The objectives of this study are (1) to investigate whether an innovative educational technology tool called Scratch could enable young children to learn abstract knowledge of computer programming while creating multimedia…

  6. Basic BASIC; An Introduction to Computer Programming in BASIC Language.

    ERIC Educational Resources Information Center

    Coan, James S.

    With the increasing availability of computer access through remote terminals and time sharing, more and more schools and colleges are able to introduce programing to substantial numbers of students. This book is an attempt to incorporate computer programming, using BASIC language, and the teaching of mathematics. The general approach of the book…

  7. 48 CFR 227.7104 - Contracts under the Small Business Innovation Research (SBIR) Program.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Data and Computer Software—Small Business Innovation Research (SBIR) Program, when technical data or computer software will be generated during performance of contracts under the SBIR program. (b) Under the clause at 252.227-7018, the Government obtains SBIR data rights in technical data and computer software...

  8. 48 CFR 227.7104 - Contracts under the Small Business Innovation Research (SBIR) Program.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Data and Computer Software—Small Business Innovation Research (SBIR) Program, when technical data or computer software will be generated during performance of contracts under the SBIR program. (b) Under the clause at 252.227-7018, the Government obtains SBIR data rights in technical data and computer software...

  9. A general program to compute the multivariable stability margin for systems with parametric uncertainty

    NASA Technical Reports Server (NTRS)

    Sanchez Pena, Ricardo S.; Sideris, Athanasios

    1988-01-01

    A computer program implementing an algorithm for computing the multivariable stability margin to check the robust stability of feedback systems with real parametric uncertainty is proposed. The authors present important aspects of the program in some detail. An example using a lateral-directional control system is presented.
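
    As a loose illustration of the underlying question (the paper's algorithm itself is not reproduced), the Python sketch below grid-samples a hypothetical closed-loop matrix A(p) = A0 + p1*A1 + p2*A2 over growing parameter boxes and reports where a sample first goes unstable. Sampling yields only an upper bound on the true margin; the matrices and ranges are invented.

      import numpy as np

      A0 = np.array([[0.0, 1.0], [-2.0, -1.0]])   # nominal stable dynamics (assumed)
      A1 = np.array([[0.0, 0.0], [1.0, 0.0]])     # effect of parameter p1 (assumed)
      A2 = np.array([[0.0, 0.0], [0.0, 0.5]])     # effect of parameter p2 (assumed)

      def stable(p1, p2):
          # Hurwitz test by direct eigenvalue computation.
          return np.linalg.eigvals(A0 + p1 * A1 + p2 * A2).real.max() < 0.0

      for k in np.linspace(0.1, 3.0, 30):         # box half-width to test
          pts = np.linspace(-k, k, 11)
          if not all(stable(p1, p2) for p1 in pts for p2 in pts):
              print(f"unstable sample found at box half-width k ~ {k:.2f}")
              break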

  10. 48 CFR 252.227-7018 - Rights in noncommercial technical data and computer software-Small Business Innovation Research...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... technical data and computer software-Small Business Innovation Research (SBIR) Program. 252.227-7018 Section... Innovation Research (SBIR) Program. As prescribed in 227.7104(a), use the following clause: Rights in Noncommercial Technical Data and Computer Software—Small Business Innovation Research (SBIR) Program (JUN 1995...

  11. 48 CFR 252.227-7018 - Rights in noncommercial technical data and computer software-Small Business Innovation Research...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... technical data and computer software-Small Business Innovation Research (SBIR) Program. 252.227-7018 Section... Innovation Research (SBIR) Program. As prescribed in 227.7104(a), use the following clause: Rights in Noncommercial Technical Data and Computer Software—Small Business Innovation Research (SBIR) Program (FEB 2014...

  12. 48 CFR 252.227-7018 - Rights in noncommercial technical data and computer software-Small Business Innovation Research...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... technical data and computer software-Small Business Innovation Research (SBIR) Program. 252.227-7018 Section... Innovation Research (SBIR) Program. As prescribed in 227.7104(a), use the following clause: Rights in Noncommercial Technical Data and Computer Software—Small Business Innovation Research (SBIR) Program (MAR 2011...

  13. 48 CFR 252.227-7018 - Rights in noncommercial technical data and computer software-Small Business Innovation Research...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... technical data and computer software-Small Business Innovation Research (SBIR) Program. 252.227-7018 Section... Innovation Research (SBIR) Program. As prescribed in 227.7104(a), use the following clause: Rights in Noncommercial Technical Data and Computer Software—Small Business Innovation Research (SBIR) Program (MAY 2013...

  14. 77 FR 34941 - Privacy Act of 1974; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-12

    ...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, DoD. ACTION: Notice of a... computer matching program are the Department of Veterans Affairs (VA) and the Defense Manpower Data Center... identified as DMDC 01, entitled ``Defense Manpower Data Center Data Base,'' last published in the Federal...

  15. 77 FR 35432 - Privacy Act of 1974, Computer Matching Program: United States Postal Service and the Defense...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-13

    ... the Defense Manpower Data Center, Department of Defense AGENCY: Postal Service TM . ACTION: Notice of Computer Matching Program--United States Postal Service and the Defense Manpower Data Center, Department of... as the recipient agency in a computer matching program with the Defense Manpower Data Center (DMDC...

  16. Adult Learning in a Computer-Based ESL Acquisition Program

    ERIC Educational Resources Information Center

    Sanchez, Karen Renee

    2013-01-01

    This study explores the self-efficacy of students learning English as a Second Language on the computer-based Rosetta Stone program. The research uses a qualitative approach to explore how a readily available computer-based learning program, Rosetta Stone, can help adult immigrant students gain some English competence and so acquire a greater…

  17. 22 CFR 1101.4 - Reports on new systems of records; computer matching programs.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 2 2012-04-01 2009-04-01 true Reports on new systems of records; computer matching programs. 1101.4 Section 1101.4 Foreign Relations INTERNATIONAL BOUNDARY AND WATER COMMISSION... records; computer matching programs. (a) Before establishing any new systems of records, or making any...

  18. 22 CFR 1101.4 - Reports on new systems of records; computer matching programs.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 2 2014-04-01 2014-04-01 false Reports on new systems of records; computer matching programs. 1101.4 Section 1101.4 Foreign Relations INTERNATIONAL BOUNDARY AND WATER COMMISSION... records; computer matching programs. (a) Before establishing any new systems of records, or making any...

  19. 22 CFR 1101.4 - Reports on new systems of records; computer matching programs.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 2 2013-04-01 2009-04-01 true Reports on new systems of records; computer matching programs. 1101.4 Section 1101.4 Foreign Relations INTERNATIONAL BOUNDARY AND WATER COMMISSION... records; computer matching programs. (a) Before establishing any new systems of records, or making any...

  20. 77 FR 32709 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Department of Homeland Security...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-01

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2011-0089] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Department of Homeland Security (DHS))--Match Number 1010 AGENCY: Social Security Administration (SSA). ACTION: Notice of a renewal of an existing computer matching program that...

  1. 78 FR 37647 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Railroad Retirement Board (RRB...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-21

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2013-0010] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Railroad Retirement Board (RRB))--Match Number 1006 AGENCY: Social Security Administration. ACTION: Notice of a renewal of an existing computer matching program that will expire on...

  2. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 15 2011-07-01 2011-07-01 false Computer Program C Appendix C to Part...) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer Program Note: For text of appendix C see appendix C to part 67. ...

  3. 22 CFR 1101.4 - Reports on new systems of records; computer matching programs.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 2 2011-04-01 2009-04-01 true Reports on new systems of records; computer matching programs. 1101.4 Section 1101.4 Foreign Relations INTERNATIONAL BOUNDARY AND WATER COMMISSION... records; computer matching programs. (a) Before establishing any new systems of records, or making any...

  4. The Shock and Vibration Digest. Volume 14, Number 11

    DTIC Science & Technology

    1982-11-01

    A computer program has been developed to predict the behavior of a high-temperature gas-cooled reactor (HTGR) core under seismic excitation (N82-18644, in French). Key words: computer programs, modal analysis, beams, undamped structures. Dale and Cohen [22] extended the computational method of McMunn and Plunkett [20] to continuous systems.

  5. Improvements in Routing for Packet-Switched Networks

    DTIC Science & Technology

    1975-02-18

    Appendix B describes the adaptive routing program for computer simulation (ARPSIM), including a flow diagram of the adaptive routine and an explanation of its variables. The computer simulation for adaptive routing was initially run on a DDP-24 small computer; at each time T the program routine transmits queued messages over available links and computes Ni, the number of arrivals at each node i.

  6. Comparison of Building Loads Analysis and System Thermodynamics (BLAST) Computer Program Simulations and Measured Energy Use for Army Buildings.

    DTIC Science & Technology

    1980-05-01

    Energy use measured in two Army buildings, a dental clinic and a battalion headquarters and classroom building, was compared with simulations produced by the Building Loads Analysis and System Thermodynamics (BLAST) computer program. The report covers building and HVAC system data, the computer simulation, the comparison of actual and simulated results, and analysis and findings.

  7. MPL-A program for computations with iterated integrals on moduli spaces of curves of genus zero

    NASA Astrophysics Data System (ADS)

    Bogner, Christian

    2016-06-01

    We introduce the Maple program MPL for computations with multiple polylogarithms. The program is based on homotopy invariant iterated integrals on moduli spaces M0,n of curves of genus 0 with n ordered marked points. It includes the symbol map and procedures for the analytic computation of period integrals on M0,n. It supports the automated computation of a certain class of Feynman integrals.
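
    For orientation, these functions admit the nested-sum representation Li_{s1,s2}(z1,z2) = sum over n1 > n2 >= 1 of z1^n1 z2^n2 / (n1^s1 n2^s2); MPL manipulates them symbolically, but a truncated numerical sum suffices to illustrate the object. The Python check below (the Maple package is not reproduced) verifies the depth-two identity Li_{1,1}(z,1) = (1/2) ln^2(1-z) at z = 0.5.

      import math

      def mpl2(s1, s2, z1, z2, nmax=400):
          """Truncated nested sum over n1 > n2 >= 1 (valid well inside convergence)."""
          total = 0.0
          for n2 in range(1, nmax):
              inner = z2 ** n2 / n2 ** s2
              for n1 in range(n2 + 1, nmax + 1):
                  total += (z1 ** n1 / n1 ** s1) * inner
          return total

      print(mpl2(1, 1, 0.5, 1.0))           # ~0.2402, truncated nested sum
      print(0.5 * math.log(0.5) ** 2)       # exact value of (1/2) ln^2(1 - 0.5)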

  8. Coal-seismic, desktop computer programs in BASIC; Part 6, Develop rms velocity functions and apply mute and normal moveout

    USGS Publications Warehouse

    Hasbrouck, W.P.

    1983-01-01

    Processing of data taken with the U.S. Geological Survey's coal-seismic system is done with a desktop, stand-alone computer. Programs for this computer are written in the extended BASIC language utilized by the Tektronix 4051 Graphic System. This report presents computer programs used to develop rms velocity functions and apply mute and normal moveout to a 12-trace seismogram.
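
    The programs themselves are in Tektronix 4051 extended BASIC and are not reproduced here; as a generic illustration of what the moveout step does, the Python sketch below maps each zero-offset time t0 to the hyperbolic reflection time t(x) = sqrt(t0^2 + x^2 / v_rms^2) and resamples a synthetic trace accordingly. Sample interval, offset, and velocity are assumed values; a mute would then zero the badly stretched shallow samples at far offsets.

      import numpy as np

      def nmo_correct(trace, dt, offset, v_rms):
          """Normal-moveout correction of one trace by resampling at t(x)."""
          n = len(trace)
          t0 = np.arange(n) * dt
          t_x = np.sqrt(t0**2 + (offset / v_rms) ** 2)   # hyperbolic moveout
          return np.interp(t_x / dt, np.arange(n), trace, right=0.0)

      trace = np.sin(2 * np.pi * 30 * np.arange(0, 0.5, 0.001))   # 30 Hz synthetic
      flat = nmo_correct(trace, dt=0.001, offset=100.0, v_rms=2000.0)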

  9. Cost-Benefit Analysis for ECIA Chapter 1 and State DPPF Programs Comparing Groups Receiving Regular Program Instruction and Groups Receiving Computer Assisted Instruction/Computer Management System (CAI/CMS). 1986-87.

    ERIC Educational Resources Information Center

    Chamberlain, Ed

    A cost benefit study was conducted to determine the effectiveness of a computer assisted instruction/computer management system (CAI/CMS) as an alternative to conventional methods of teaching reading within Chapter 1 and DPPF funded programs of the Columbus (Ohio) Public Schools. The Chapter 1 funded Compensatory Language Experiences and Reading…

  10. Upper Grades Ideas.

    ERIC Educational Resources Information Center

    Thornburg, David; Beane, Pam

    1983-01-01

    Presents programming ideas using LOGO, an activity for converting a flowchart into a computer program, and a Pascal program for generating music using paddles. Includes the article "Helping Computers Adapt to Kids" by Philip Nothnagle and a program for estimating the length of lines. (JN)

  11. Developing a computer security training program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1990-01-01

    We all know that training can empower the computer protection program. However, pushing computer security information outside the computer security organization into the rest of the company is often labeled as an easy project or a dungeon full of dragons. Used in part or whole, the strategy offered in this paper may help the developer of a computer security training program ward off dragons and create products and services. The strategy includes GOALS (what the result of training will be), POINTERS (tips to ensure survival), and STEPS (products and services as a means to accomplish the goals).

  12. Refinement Of Hexahedral Cells In Euler Flow Computations

    NASA Technical Reports Server (NTRS)

    Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.

    1996-01-01

    Topologically Independent Grid, Euler Refinement (TIGER) computer program solves Euler equations of three-dimensional, unsteady flow of inviscid, compressible fluid by numerical integration on unstructured hexahedral coordinate grid refined where necessary to resolve shocks and other details. Hexahedral cells subdivided, each into eight smaller cells, as needed to refine computational grid in regions of high flow gradients. Grid Interactive Refinement and Flow-Field Examination (GIRAFFE) computer program written in conjunction with TIGER program to display computed flow-field data and to assist researcher in verifying specified boundary conditions and refining grid.
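
    A minimal sketch of that gradient-driven subdivision (each flagged hexahedral cell split into eight children by bisecting all three axes) follows; TIGER's cell bookkeeping and shock sensing operate on the computed Euler solution, so the indicator function and names here are placeholders.

      from dataclasses import dataclass, field

      @dataclass
      class HexCell:
          lo: tuple                      # (x, y, z) of the minimum corner
          size: float                    # edge length (cubes, for simplicity)
          children: list = field(default_factory=list)

      def refine(cell, indicator, threshold, max_depth):
          """Recursively split cells whose flow-gradient indicator is high."""
          if max_depth == 0 or indicator(cell) < threshold:
              return
          h = cell.size / 2.0
          for i in (0, 1):
              for j in (0, 1):
                  for k in (0, 1):
                      child = HexCell((cell.lo[0] + i * h, cell.lo[1] + j * h,
                                       cell.lo[2] + k * h), h)
                      cell.children.append(child)
                      refine(child, indicator, threshold, max_depth - 1)

      def count(cell):
          return 1 + sum(count(c) for c in cell.children)

      # Refine toward a mock "shock" at the plane x = 0.3.
      root = HexCell((0.0, 0.0, 0.0), 1.0)
      shock = lambda c: 1.0 if abs(c.lo[0] + c.size / 2 - 0.3) < c.size else 0.0
      refine(root, shock, 0.5, 3)
      print(count(root), "cells after refinement")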

  13. Calculation of inviscid surface streamlines and heat transfer on shuttle type configurations. Part 2: Description of computer program

    NASA Technical Reports Server (NTRS)

    Dejarnette, F. R.; Jones, M. H.

    1971-01-01

    A description of the computer program used for heating rate calculation for blunt bodies in hypersonic flow is given. The main program and each subprogram are described by defining the pertinent symbols involved and presenting a detailed flow diagram and complete computer program listing. Input and output parameters are discussed in detail. Listings are given for the computation of heating rates on (1) a blunted 15 deg half-angle cone at 20 deg incidence and Mach 10.6, (2) a blunted 70 deg slab delta wing at 10 deg incidence and Mach 8, and (3) the HL-10 lifting body at 20 deg incidence and Mach 10. In addition, the computer program output for two streamlines on the blunted 15 deg half-angle cone is listed. For Part 1, see N71-36186.

  14. New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program

    NASA Technical Reports Server (NTRS)

    Strain, D.; Levy, R.

    1986-01-01

    The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.
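
    The half-model approach rests on a standard decomposition: any load on a structure that is geometrically symmetric about a plane splits into a symmetric and an antisymmetric part, each solvable on the half model with the corresponding boundary conditions, and the results superpose. In the notation below (not taken from the report), R is the reflection operator across the symmetry plane:

      % symmetric/antisymmetric load split and superposition of half-model solutions
      f = f_s + f_a, \qquad
      f_s = \tfrac{1}{2}(f + Rf), \qquad
      f_a = \tfrac{1}{2}(f - Rf), \qquad
      u = u_s + u_a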

  15. Protection of Computer Programs--A Dilemma.

    ERIC Educational Resources Information Center

    Carnahan, William H.

    Computer programs, as legitimate original inventions or creative written expressions, are entitled to patent or copyright protection. Understanding the legal implications of this concept is crucial to both computer programmers and their employers in our increasingly computer-oriented way of life. Basically the copyright or patent procedure…

  16. Coal-seismic, desktop computer programs in BASIC; Part 7, Display and compute shear-pair seismograms

    USGS Publications Warehouse

    Hasbrouck, W.P.

    1983-01-01

    Processing of geophysical data taken with the U.S. Geological Survey's coal-seismic system is done with a desktop, stand-alone computer. Programs for this computer are written in the extended BASIC language utilized by the Tektronix 4051 Graphic System. This report discusses and presents five computer programs used to display and compute shear-pair seismograms.

  17. Tools for Administration of a UNIX-Based Network

    NASA Technical Reports Server (NTRS)

    LeClaire, Stephen; Farrar, Edward

    2004-01-01

    Several computer programs have been developed to enable efficient administration of a large, heterogeneous, UNIX-based computing and communication network that includes a variety of computers connected to a variety of subnetworks. One program provides secure software tools for administrators to create, modify, lock, and delete accounts of specific users. This program also provides tools for users to change their UNIX passwords and log-in shells. These tools check for errors. Another program comprises a client and a server component that, together, provide a secure mechanism to create, modify, and query quota levels on a network file system (NFS) mounted by use of the VERITAS File System software. The client software resides on an internal secure computer with a secure Web interface; one can gain access to the client software from any authorized computer capable of running web-browser software. The server software resides on a UNIX computer configured with the VERITAS software system. Directories where VERITAS quotas are applied are NFS-mounted. Another program is a Web-based, client/server Internet Protocol (IP) address tool that facilitates maintenance of, and lookup of information about, IP addresses for a network of computers.

  18. Computer Program for the Design and Off-Design Performance of Turbojet and Turbofan Engine Cycles

    NASA Technical Reports Server (NTRS)

    Morris, S. J.

    1978-01-01

    The rapid computer program is designed to be run in a stand-alone mode or operated within a larger program. The computation is based on a simplified one-dimensional gas turbine cycle. Each component in the engine is modeled thermodynamically. The component efficiencies used in the thermodynamic modeling are scaled for the off-design conditions from input design-point values using empirical trends which are included in the computer code. The engine cycle program is capable of producing reasonable engine performance predictions with a minimum of computer execution time. The current execution time on the IBM 360/67 for one Mach number, one altitude, and one power setting is about 0.1 seconds. The principal assumption used in the calculation is that the compressor is operated along a line of maximum adiabatic efficiency on the compressor map. The fluid properties are computed for the combustion mixture, but dissociation is not included. The procedure included in the program covers only the combustion of JP-4, methane, or hydrogen.
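
    A bare-bones version of that cycle bookkeeping, for orientation only: ideal-gas Brayton relations with fixed component efficiencies at a design point. The NASA program additionally scales efficiencies off design and uses real combustion-gas properties; the constant cp/gamma and every number below are simplifying assumptions.

      cp, gamma = 1005.0, 1.4                  # J/(kg K) and specific-heat ratio (air)
      T1, PR_C = 288.0, 12.0                   # compressor inlet temp (K), pressure ratio
      ETA_C, ETA_T, T3 = 0.85, 0.90, 1400.0    # adiabatic efficiencies, turbine inlet (K)

      ex = (gamma - 1.0) / gamma
      T2 = T1 * (1.0 + (PR_C**ex - 1.0) / ETA_C)      # compressor exit temperature
      w_comp = cp * (T2 - T1)                         # specific compressor work, J/kg

      T4 = T3 - (T2 - T1)                             # turbine work balances compressor
      pr_t = (1.0 - (T3 - T4) / (ETA_T * T3)) ** (1.0 / ex)   # required expansion ratio

      print(f"T2 = {T2:.0f} K, T4 = {T4:.0f} K, turbine p_out/p_in = {pr_t:.2f}")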

  19. Program manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 1 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System (SEPS) computer program is considered in terms of the program manual, programmer guide, and program utilization. The main objective is to provide the information necessary to interpret and use the routines comprising the SEPS program. Subroutine descriptions including the name, purpose, method, variable definitions, and logic flow are presented.

  20. Development and evaluation of a computer program to grade student performance on peripheral blood smears

    NASA Astrophysics Data System (ADS)

    Lehman, Donald Clifford

    Today's medical laboratories are dealing with cost containment health care policies and unfilled laboratory positions. Because there may be fewer experienced clinical laboratory scientists, students graduating from clinical laboratory science (CLS) programs are expected by their employers to perform accurately in entry-level positions with minimal training. Information in the CLS field is increasing at a dramatic rate, and instructors are expected to teach more content in the same amount of time with the same resources. With this increase in teaching obligations, instructors could use a tool to facilitate grading. The research question was, "Can computer-assisted assessment evaluate students in an accurate and time efficient way?" A computer program was developed to assess CLS students' ability to evaluate peripheral blood smears. Automated grading permits students to get results quicker and allows the laboratory instructor to devote less time to grading. This computer program could improve instruction by providing more time to students and instructors for other activities. To be valuable, the program should provide the same quality of grading as the instructor. These benefits must outweigh potential problems such as the time necessary to develop and maintain the program, monitoring of student progress by the instructor, and the financial cost of the computer software and hardware. In this study, surveys of students and an interview with the laboratory instructor were performed to provide a formative evaluation of the computer program. In addition, the grading accuracy of the computer program was examined. These results will be used to improve the program for use in future courses.

  1. Structured Design Language for Computer Programs

    NASA Technical Reports Server (NTRS)

    Pace, Walter H., Jr.

    1986-01-01

    Box language used at all stages of program development. Developed to provide improved productivity in designing, coding, and maintaining computer programs. BOX system written in FORTRAN 77 for batch execution.

  2. Computer Literacy and the Library: A New Connection.

    ERIC Educational Resources Information Center

    Fenske, Rachel F.

    1998-01-01

    Describes a program at Eastern Washington University that integrates library skills as a component of the English composition program, and is part of a computer-literacy program stemming from a general-education curriculum reform. Discusses program development and design, assessment of student learning, and effectiveness of the program. (LRW)

  3. Conceptions of Programming: A Study into Learning To Program.

    ERIC Educational Resources Information Center

    Booth, Shirley

    This paper reports the results of a phenomenographic study which focused on identifying and describing the conceptions of programming and related phenomena of about 120 computer science and computer engineering students learning to program. The report begins by tracing developments in the students' conceptions of programming and its parts, and…

  4. Video and Computer Technologies for Extended-Campus Programming.

    ERIC Educational Resources Information Center

    Sagan, Edgar L.; And Others

    This paper discusses video and computer technologies for extended-campus programming (courses and programs at off-campus sites). The first section provides an overview of the distance education program at the University of Kentucky (UK), and highlights the improved access to graduate and professional programs, advances in technology, funding,…

  5. Development of Integrated Programs for Aerospace-vehicle design (IPAD): Integrated information processing requirements

    NASA Technical Reports Server (NTRS)

    Southall, J. W.

    1979-01-01

    The engineering-specified requirements for integrated information processing by means of the Integrated Programs for Aerospace-Vehicle Design (IPAD) system are presented. A data model is described and is based on the design process of a typical aerospace vehicle. General data management requirements are specified for data storage, retrieval, generation, communication, and maintenance. Information management requirements are specified for a two-component data model. In the general portion, data sets are managed as entities, and in the specific portion, data elements and the relationships between elements are managed by the system, allowing user access to individual elements for the purpose of query. Computer program management requirements are specified for support of a computer program library, control of computer programs, and installation of computer programs into IPAD.

  6. THERMTRAJ: A FORTRAN program to compute the trajectory and gas film temperatures of zero pressure balloons

    NASA Technical Reports Server (NTRS)

    Horn, W. J.; Carlson, L. A.

    1983-01-01

    A FORTRAN computer program called THERMTRAJ is presented which can be used to compute the trajectory of high altitude scientific zero pressure balloons from launch through all subsequent phases of the balloon flight. In addition, balloon gas and film temperatures can be computed at every point of the flight. The program has the ability to account for ballasting, changes in cloud cover, variable atmospheric temperature profiles, and both unconditional valving and scheduled valving of the balloon gas. The program was verified for an extensive range of balloon sizes (from 0.5 to 41.47 million cubic feet). Instructions on program usage, listing of the program source deck, input data and printed and plotted output for a verification case are included.
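
    A toy vertical-ascent integrator in the spirit of the program (THERMTRAJ itself is FORTRAN and also models gas and film temperatures, valving, ballast, and cloud cover) is sketched below; the isothermal atmosphere and every balloon parameter are assumptions chosen only to make the zero-pressure mechanics visible. Once the gas volume caps at full inflation, buoyancy falls off with ambient density and the balloon settles near its float altitude.

      import math

      g, R_AIR, R_HE, T = 9.81, 287.0, 2077.0, 240.0   # SI; isothermal atmosphere assumed
      M_SYS, M_GAS = 500.0, 100.0                      # system and helium mass, kg (assumed)
      VOL_MAX = 80000.0                                # fully inflated volume, m^3 (assumed)
      CD_A, P0, H = 900.0, 101325.0, 7000.0            # drag area, sea-level p, scale height

      z, v, dt = 0.0, 0.0, 1.0
      for step in range(20000):
          p = P0 * math.exp(-z / H)                    # crude ambient pressure profile
          rho = p / (R_AIR * T)                        # ambient air density
          vol = min(M_GAS * R_HE * T / p, VOL_MAX)     # zero-pressure: gas at ambient p, T
          lift = rho * vol * g - (M_SYS + M_GAS) * g   # buoyancy minus total weight
          drag = -0.5 * rho * CD_A * v * abs(v)
          v += (lift + drag) / (M_SYS + M_GAS) * dt
          z = max(0.0, z + v * dt)
      print(f"altitude after {(step + 1) * dt:.0f} s: {z / 1000.0:.1f} km")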

  7. Computer program for a four-cylinder-Stirling-engine controls simulation

    NASA Technical Reports Server (NTRS)

    Daniels, C. J.; Lorenzo, C. F.

    1982-01-01

    A four cylinder Stirling engine, transient engine simulation computer program is presented. The program is intended for controls analysis. The associated engine model was simplified to shorten computer calculation time. The model includes engine mechanical drive dynamics and vehicle load effects. The computer program also includes subroutines that allow: (1) acceleration of the engine by addition of hydrogen to the system, and (2) braking of the engine by short circuiting of the working spaces. Subroutines to calculate degraded engine performance (e.g., due to piston ring and piston rod leakage) are provided. Input data required to run the program are described and flow charts are provided. The program is modular to allow easy modification of individual routines. Examples of steady state and transient results are presented.

  8. Computer program for calculating supersonic flow about circular, elliptic, and bielliptic cones by the method of lines

    NASA Technical Reports Server (NTRS)

    Klunker, E. B.; South, J. C., Jr.; Davis, R. M.

    1972-01-01

    A user's manual for a computer program which calculates the supersonic flow about circular, elliptic, and bielliptic cones at incidence, and elliptic cones at yaw, by the method of lines is presented. The program is automated to compute a case from a known or easily calculated solution by changing the parameters through a sequence of steps. It provides information including the shock shape, flow field, isentropic surface properties, entropy layer, and force coefficients. A description of the program operation, sample computations, and a FORTRAN 4 listing are presented.

  9. Generalized environmental control and life support system computer program (G189A) configuration control. [computer subroutine libraries for shuttle orbiter analyses

    NASA Technical Reports Server (NTRS)

    Blakely, R. L.

    1973-01-01

    A G189A simulation of the shuttle orbiter EC/LSS was prepared and used to study payload support capabilities. Two master program libraries of the G189A computer program were prepared for the NASA/JSC computer system. Several new component subroutines were added to the G189A program library, and many existing subroutines were revised to improve their capabilities. A number of special analyses were performed in support of a NASA/JSC shuttle orbiter EC/LSS payload support capability study.

  10. GYC: A program to compute the turbulent boundary layer on a rotating cone

    NASA Technical Reports Server (NTRS)

    Sullivan, R. D.

    1976-01-01

    GYC, a computer program capable of computing the properties of a compressible turbulent boundary layer on a rotating axisymmetric cone-cylinder body according to the principles of invariant modeling, is described. The program was extended to include calculation of the turbulence scale by a differential equation. GYC is in operation on the CDC-7600 computer and has undergone several corrections and improvements as a result of the experience gained. The theoretical basis for the program and the method of implementation, as well as information on its operation, are given.

  11. Transpiration and film cooling boundary layer computer program. Volume 2: Computer program and user's manual

    NASA Technical Reports Server (NTRS)

    Gloss, R. J.

    1971-01-01

    A finite difference turbulent boundary layer computer program which allows for mass transfer, wall cooling, and equilibrium chemistry effects is presented. The program is capable of calculating laminar or turbulent boundary layer solutions for an arbitrary ideal gas or an equilibrium hydrogen-oxygen system. Either two-dimensional or axisymmetric geometric configurations may be considered. The equations are solved, in nondimensionalized physical coordinates, using the implicit Crank-Nicolson technique. The finite difference forms of the conservation equations for mass, momentum, total enthalpy, and elements are linearized and uncoupled, thereby generating easily solvable tridiagonal sets of algebraic equations. A detailed description of the computer program, as well as a program user's manual, is provided. Detailed descriptions of all boundary layer subroutines are included, as well as a section defining all program symbols of principal importance. Instructions are then given for preparing card input to the program and for interpreting the printed output. Finally, two sample cases are included to illustrate the use of the program.
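
    Those tridiagonal systems are exactly what the classic Thomas algorithm (forward elimination plus back substitution, O(n) work) solves; the routine below is the textbook method, not code from the report.

      def thomas(a, b, c, d):
          """Solve a tridiagonal system: a sub-, b main, c super-diagonal, d RHS."""
          n = len(b)
          cp, dp = [0.0] * n, [0.0] * n
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]              # eliminate the sub-diagonal
              cp[i] = c[i] / m if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = [0.0] * n
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):               # back substitution
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      # 1-D implicit diffusion-type step; the exact solution is [1, 1, 1, 1].
      print(thomas([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 0, 0, 1]))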

  12. Computer Recreations.

    ERIC Educational Resources Information Center

    Dewdney, A. K.

    1989-01-01

    Reviews the performance of computer programs for writing poetry and prose, including MARK V. SHANEY, MELL, POETRY GENERATOR, THUNDER THOUGHT, and ORPHEUS. Discusses the writing principles of the programs. Provides additional information on computer magnification techniques. (YP)

  13. My Computer Is Learning.

    ERIC Educational Resources Information Center

    Good, Ron

    1986-01-01

    Describes instructional uses of computer programs found in David Heiserman's book "Projects in Machine Intelligence for Your Home Computer." The programs feature "creatures" of various colors that move around within a rectangular white border. (JN)

  14. Protecting Your Computer from Viruses

    ERIC Educational Resources Information Center

    Descy, Don E.

    2006-01-01

    A computer virus is defined as a software program capable of reproducing itself and usually capable of causing great harm to files or other programs on the same computer. The existence of computer viruses--or the necessity of avoiding viruses--is part of using a computer. With the advent of the Internet, the door was opened wide for these…

  15. The Computer as a Teaching Aid for Eleventh Grade Mathematics: A Comparison Study.

    ERIC Educational Resources Information Center

    Kieren, Thomas Ervin

    To determine the effect of learning computer programming and the use of a computer on mathematical achievement of eleventh grade students, for each of two years, average and above average students were randomly assigned to an experimental and control group. The experimental group wrote computer programs and used the output from the computer in…

  16. Computer Technology for Industry

    NASA Technical Reports Server (NTRS)

    1979-01-01

    In this age of the computer, more and more business firms are automating their operations for increased efficiency in a great variety of jobs, from simple accounting to managing inventories, from precise machining to analyzing complex structures. In the interest of national productivity, NASA is providing assistance both to longtime computer users and newcomers to automated operations. Through a special technology utilization service, NASA saves industry time and money by making available already developed computer programs which have secondary utility. A computer program is essentially a set of instructions which tells the computer how to produce desired information or effect by drawing upon its stored input. Developing a new program from scratch can be costly and time-consuming. Very often, however, a program developed for one purpose can readily be adapted to a totally different application. To help industry take advantage of existing computer technology, NASA operates the Computer Software Management and Information Center (COSMIC) (registered trademark), located at the University of Georgia. COSMIC maintains a large library of computer programs developed for NASA, the Department of Defense, the Department of Energy and other technology-generating agencies of the government. The Center gets a continual flow of software packages, screens them for adaptability to private sector usage, stores them and informs potential customers of their availability.

  17. Programs for Fundamentals of Chemistry.

    ERIC Educational Resources Information Center

    Gallardo, Julio; Delgado, Steven

    This document provides computer programs, written in BASIC PLUS, for presenting fundamental or remedial college chemistry students with chemical problems in a computer assisted instructional program. Programs include instructions, a sample run, and 14 separate practice sessions covering: mathematical operations, using decimals, solving…

  18. Changing the Paradigm: Preparing Students for the Computing Profession in the 21st Century

    NASA Technical Reports Server (NTRS)

    Robbins, Kay A.

    2003-01-01

    The dramatic technological developments of the past decade have led to a tremendous growth in the demand for computer science professionals well-versed in advanced technology and techniques. NASA, traditionally a haven for cutting-edge innovators, is now competing with every industrial and government sector for computer science talent. The computer science program at University of Texas at San Antonio (UTSA) faces challenges beyond those intrinsically presented by rapid technological change, because a significant number of UTSA students come from low-income families with no Internet or computer access at home. An examination of enrollment statistics for the computer science program at UTSA showed that very few students who entered as freshmen successfully graduated. The upper division courses appeared to be populated by graduate students removing deficiencies and by transfer students. The faculty was also concerned that the students who did graduate from the program did not have the strong technical and programming skills that the CS program had been noted for in the community during the 1980's.

  19. VASCOMP 2. The V/STOL aircraft sizing and performance computer program. Volume 6: User's manual, revision 3

    NASA Technical Reports Server (NTRS)

    Schoen, A. H.; Rosenstein, H.; Stanzione, K.; Wisniewski, J. S.

    1980-01-01

    This report describes the use of the V/STOL Aircraft Sizing and Performance Computer Program (VASCOMP II). The program is useful in performing aircraft parametric studies in a quick and cost efficient manner. Problem formulation and data development were performed by the Boeing Vertol Company and reflects the present preliminary design technology. The computer program, written in FORTRAN IV, has a broad range of input parameters, to enable investigation of a wide variety of aircraft. User oriented features of the program include minimized input requirements, diagnostic capabilities, and various options for program flexibility.

  20. Automatic system for computer program documentation

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.; Elliott, R. W.; Arseven, S.; Colunga, D.

    1972-01-01

    Work was done on a project to design an automatic system for computer program documentation aids and to determine what existing programs could be used effectively to document computer programs. Results of the study are included in the form of an extensive bibliography and working papers on appropriate operating systems, text editors, program editors, data structures, standards, decision tables, flowchart systems, and proprietary documentation aids. The preliminary design for an automated documentation system is also included. An actual program has been documented in detail to demonstrate the types of output that can be produced by the proposed system.
