Science.gov

Sample records for advanced computing scidac

  1. SciDAC Advances and Applications in Computational Beam Dynamics

    SciTech Connect

    Ryne, R.; Abell, D.; Adelmann, A.; Amundson, J.; Bohn, C.; Cary, J.; Colella, P.; Dechow, D.; Decyk, V.; Dragt, A.; Gerber, R.; Habib, S.; Higdon, D.; Katsouleas, T.; Ma, K.-L.; McCorquodale, P.; Mihalcea, D.; Mitchell, C.; Mori, W.; Mottershead, C.T.; Neri, F.; Pogorelov, I.; Qiang, J.; Samulyak, R.; Serafini, D.; Shalf, J.; Siegerist, C.; Spentzouris, P.; Stoltz, P.; Terzic, B.; Venturini, M.; Walstrom, P.

    2005-06-26

    SciDAC has had a major impact on computational beam dynamics and the design of particle accelerators. Particle accelerators--which account for half of the facilities in the DOE Office of Science Facilities for the Future of Science 20 Year Outlook--are crucial for US scientific, industrial, and economic competitiveness. Thanks to SciDAC, accelerator design calculations that were once thought impossible are now carried out routinely, and new challenging and important calculations are within reach. SciDAC accelerator modeling codes are being used to get the most science out of existing facilities, to produce optimal designs for future facilities, and to explore advanced accelerator concepts that may hold the key to qualitatively new ways of accelerating charged particle beams. In this poster we present highlights from the SciDAC Accelerator Science and Technology (AST) project Beam Dynamics focus area with regard to algorithm development, software development, and applications.

  2. OPENING REMARKS: SciDAC: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2005-01-01

    Good morning. Welcome to SciDAC 2005 and San Francisco. SciDAC is all about computational science and scientific discovery. In a large sense, computational science characterizes SciDAC and its intent is change. It transforms both our approach and our understanding of science. It opens new doors and crosses traditional boundaries while seeking discovery. In terms of twentieth century methodologies, computational science may be said to be transformational. There are a number of examples that illustrate this point. First are the sciences that encompass climate modeling. The application of computational science has in essence created the field of climate modeling. This community is now international in scope and has provided precision results that are challenging our understanding of our environment. A second example is that of lattice quantum chromodynamics. Lattice QCD, while adding precision and insight to our fundamental understanding of strong interaction dynamics, has transformed our approach to particle and nuclear science. The individual investigator approach has evolved to teams of scientists from different disciplines working side-by-side towards a common goal. SciDAC is also undergoing a transformation. This meeting is a prime example. Last year it was a small programmatic meeting tracking progress in SciDAC. This year, we have a major computational science meeting with a variety of disciplines and enabling technologies represented. SciDAC 2005 should position itself as a new cornerstone for Computational Science and its impact on science. As we look to the immediate future, FY2006 will bring a new cycle to SciDAC. Most of the program elements of SciDAC will be re-competed in FY2006. The re-competition will involve new instruments for computational science, new approaches for collaboration, as well as new disciplines. There will be new opportunities for virtual experiments in carbon sequestration, fusion, and nuclear power and nuclear waste, as well as collaborations

  3. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report

    SciTech Connect

    Hoffman, Forest M.; Bochev, Pavel B.; Cameron-Smith, Philip J.; Easter, Richard C.; Elliott, Scott M.; Ghan, Steven J.; Liu, Xiaohong; Lowrie, Robert B.; Lucas, Donald D.; Ma, Po-lun; Sacks, William J.; Shrivastava, Manish; Singh, Balwinder; Tautges, Timothy J.; Taylor, Mark A.; Vertenstein, Mariana; Worley, Patrick H.

    2014-01-15

    The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) Project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.
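
    To make the efficiency point concrete, here is a minimal sketch (in Python/NumPy, not the project's code) of the kind of tracer advection such schemes batch: a first-order upwind step in which all tracer species are advanced in one vectorized operation, so the per-species cost of the transport step is amortized. The grid, wind speed, and tracer count below are illustrative assumptions.

      import numpy as np

      def upwind_advect(q, u, dx, dt):
          """Advance all tracers one step with first-order upwind advection.

          q : (n_tracers, n_cells) array of mixing ratios on a periodic ring
          u : wind speed (assumed positive), dx: cell width, dt: time step
          """
          c = u * dt / dx                         # Courant number; need c <= 1
          # one vectorized update advances every tracer species at once
          return q - c * (q - np.roll(q, 1, axis=1))

      # toy setup: 100 tracer species on a 360-cell periodic ring
      q = np.random.rand(100, 360)
      for _ in range(1000):
          q = upwind_advect(q, u=10.0, dx=1.0e5, dt=600.0)   # c = 0.06, stable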

  4. National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report Indiana University Component

    SciTech Connect

    Gottlieb, Steven Arthur; DeTar, Carleton; Toussaint, Doug

    2014-07-24

    This is the closeout report for the Indiana University portion of the National Computational Infrastructure for Lattice Gauge Theory project supported by the United States Department of Energy under the SciDAC program. It includes information about activities at Indiana University, the University of Arizona, and the University of Utah, as those three universities coordinated their activities.

  5. SciDAC advances in beam dynamics simulation: from light sources to colliders

    SciTech Connect

    Qiang, J.; Borland, M.; Kabel, A.; Li, Rui; Ryne, Robert; Stern, E.; Wang, Y.; Wasserman, H.; Zhang, Y.

    2008-08-01

    In this paper, we report on progress that has been made in beam dynamics simulation, from light sources to colliders, during the first year of the SciDAC-2 accelerator project 'Community Petascale Project for Accelerator Science and Simulation (ComPASS).' Several parallel computational tools for beam dynamics simulation are described. Also presented are a number of applications in current and future accelerator facilities (e.g., LCLS, RHIC, Tevatron, LHC, and ELIC).

  6. Preface: SciDAC 2009

    NASA Astrophysics Data System (ADS)

    Simon, Horst

    2009-07-01

    By almost any measure, the SciDAC community has come a long way since DOE launched the SciDAC program back in 2001. At the time, we were grappling with how to efficiently run applications on terascale systems (the November 2001 TOP500 list was led by DOE's ASCI White IBM system at Lawrence Livermore achieving 7.2 teraflop/s). And the results stemming from the first round of SciDAC projects were summed up in two-page reports. The scientific results were presented at annual meetings, which were by invitation only and typically were attended by about 75 researchers. Fast forward to 2009 and we now have SciDAC Review, a quarterly magazine showcasing the scientific computing contributions of SciDAC projects and related programs, all focused on presenting a comprehensive look at Scientific Discovery through Advanced Computing. That is also the motivation behind the annual SciDAC conference that in 2009 was held from June 14-18 in San Diego. The annual conference, which can also be described as a celebration of all things SciDAC, grew out of those meetings organized in the early days of the program. In 2005, the meeting was held in San Francisco and attendance was opened up to all members of the SciDAC community. The schedule was also expanded to include a keynote address, plenary speakers and other features found in a conference format. This year marks the fifth such SciDAC conference, which now comprises four days of computational science presentations, multiple poster sessions and, since last year, an evening event showcasing simulations and modeling runs resulting from SciDAC projects. The fifth annual SciDAC conference was remarkable on several levels. The primary purpose, of course, is to showcase the research accomplishments resulting from SciDAC programs in particular and computational science in general. It is these accomplishments, represented in 38 papers and 52 posters, that comprise this set of conference proceedings. These proceedings can stand alone as

  7. Preface: SciDAC 2008

    NASA Astrophysics Data System (ADS)

    Stevens, Rick

    2008-07-01

    The fourth annual Scientific Discovery through Advanced Computing (SciDAC) Conference was held June 13-18, 2008, in Seattle, Washington. The SciDAC conference series is the premier communitywide venue for presentation of results from the DOE Office of Science's interdisciplinary computational science program. Started in 2001 and renewed in 2006, the DOE SciDAC program is the country's - and arguably the world's - most significant interdisciplinary research program supporting the development of advanced scientific computing methods and their application to fundamental and applied areas of science. SciDAC supports computational science across many disciplines, including astrophysics, biology, chemistry, fusion sciences, and nuclear physics. Moreover, the program actively encourages the creation of long-term partnerships among scientists focused on challenging problems and computer scientists and applied mathematicians developing the technology and tools needed to address those problems. The SciDAC program has played an increasingly important role in scientific research by allowing scientists to create more accurate models of complex processes, simulate problems once thought to be impossible, and analyze the growing amount of data generated by experiments. To help further the research community's ability to tap into the capabilities of current and future supercomputers, Under Secretary for Science, Raymond Orbach, launched the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program in 2003. The INCITE program was conceived specifically to seek out computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. The program encourages proposals from universities, other research institutions, and industry. During the first two years of the INCITE program, 10 percent of the resources at NERSC were allocated to INCITE awardees. However, demand for supercomputing resources

  8. Preface: SciDAC 2006

    NASA Astrophysics Data System (ADS)

    Tang, William M., Dr.

    2006-01-01

    The second annual Scientific Discovery through Advanced Computing (SciDAC) Conference was held from June 25-29, 2006 at the new Hyatt Regency Hotel in Denver, Colorado. This conference showcased outstanding SciDAC-sponsored computational science results achieved during the past year across many scientific domains, with an emphasis on science at scale. Exciting computational science that has been accomplished outside of the SciDAC program both nationally and internationally was also featured to help foster communication between SciDAC computational scientists and those funded by other agencies. This was illustrated by many compelling examples of how domain scientists collaborated productively with applied mathematicians and computer scientists to effectively take advantage of terascale computers (capable of performing trillions of calculations per second) not only to accelerate progress in scientific discovery in a variety of fields but also to show great promise for being able to utilize the exciting petascale capabilities in the near future. The SciDAC program was originally conceived as an interdisciplinary computational science program based on the guiding principle that strong collaborative alliances between domain scientists, applied mathematicians, and computer scientists are vital to accelerated progress and associated discovery on the world's most challenging scientific problems. Associated verification and validation are essential in this successful program, which was funded by the US Department of Energy Office of Science (DOE OS) five years ago. As is made clear in many of the papers in these proceedings, SciDAC has fundamentally changed the way that computational science is now carried out in response to the exciting challenge of making the best use of the rapid progress in the emergence of more and more powerful computational platforms. In this regard, Dr. Raymond Orbach, Energy Undersecretary for Science at the DOE and Director of the OS has stated

  9. SciDAC Advances in Beam Dynamics Simulation: From Light Sources to Colliders

    SciTech Connect

    Qiang, J.; Borland, M.; Kabel, A.; Li, R.; Ryne, R.; Stern, E.; Wang, Y.; Wasserman, H.; Zhang, Y.

    2011-11-14

    In this paper, we report on progress that has been made in beam dynamics simulation, from light sources to colliders, during the first year of the SciDAC-2 accelerator project 'Community Petascale Project for Accelerator Science and Simulation (ComPASS).' Several parallel computational tools for beam dynamics simulation are described. Also presented are a number of applications in current and future accelerator facilities (e.g., LCLS, RHIC, Tevatron, LHC, and ELIC). Particle accelerators are some of the most important tools of scientific discovery. They are widely used in high-energy physics, nuclear physics, and other basic and applied sciences to study the interaction of elementary particles, to probe the internal structure of matter, and to generate high-brightness radiation for research in materials science, chemistry, biology, and other fields. Modern accelerators are complex and expensive devices that may be several kilometers long and may consist of thousands of beamline elements. An accelerator may transport trillions of charged particles that interact electromagnetically among themselves, that interact with fields produced by the accelerator components, and that interact with beam-induced fields. Large-scale beam dynamics simulations on massively parallel computers can help provide understanding of these complex physical phenomena, help minimize design cost, and help optimize machine operation. In this paper, we report on beam dynamics simulations in a variety of accelerators ranging from next generation light sources to high-energy ring colliders that have been studied during the first year of the SciDAC-2 accelerator project.
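
    As a toy illustration of the kind of computation such beam dynamics codes perform (a sketch under simplifying assumptions, not the ComPASS implementation), the following Python/NumPy fragment tracks an ensemble of particles through a periodic focusing lattice using linear transfer matrices; the space-charge and beam-induced fields mentioned above are omitted.

      import numpy as np

      def drift(L):
          """Field-free drift of length L in one transverse plane (x, x')."""
          return np.array([[1.0, L], [0.0, 1.0]])

      def thin_quad(f):
          """Thin-lens quadrupole of focal length f (positive = focusing)."""
          return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

      # one FODO cell: focusing quad, drift, defocusing quad, drift
      cell = drift(1.0) @ thin_quad(-2.0) @ drift(1.0) @ thin_quad(2.0)

      # 100,000 particles; rows are x [m] and x' [rad]
      rng = np.random.default_rng(0)
      beam = rng.normal(scale=[1e-3, 1e-4], size=(100_000, 2)).T

      for _ in range(1000):          # track through 1000 lattice cells
          beam = cell @ beam         # all particles advanced in one product

      print("rms x after tracking:", beam[0].std())

    A production code replaces these linear maps with nonlinear element models and adds a parallel field solve for the beam's self-fields at every step, which is where the massively parallel computing comes in.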

  10. Preface: SciDAC 2007

    NASA Astrophysics Data System (ADS)

    Keyes, David E.

    2007-09-01

    It takes a village to perform a petascale computation—domain scientists, applied mathematicians, computer scientists, computer system vendors, program managers, and support staff—and the village was assembled during 24-28 June 2007 in Boston's Westin Copley Place for the third annual Scientific Discovery through Advanced Computing (SciDAC) 2007 Conference. Over 300 registered participants networked around 76 posters, focused on achievements and challenges in 36 plenary talks, and brainstormed in two panels. In addition, with an eye to spreading the vision for simulation at the petascale and to growing the workforce, 115 participants—mostly doctoral students and post-docs complementary to the conferees—were gathered on 29 June 2007 in classrooms of the Massachusetts Institute of Technology for a full day of tutorials on the use of SciDAC software. Eleven SciDAC-sponsored research groups presented their software at an introductory level, in both lecture and hands-on formats that included live runs on a local BlueGene/L. Computation has always been about garnering insight into the behavior of systems too complex to explore satisfactorily by theoretical means alone. Today, however, computation is about much more: scientists and decision makers expect quantitatively reliable predictions from simulations ranging in scale from that of the Earth's climate, down to quarks, and out to colliding black holes. Predictive simulation lies at the heart of policy choices in energy and environment affecting billions of lives and expenditures of trillions of dollars. It is also at the heart of scientific debates on the nature of matter and the origin of the universe. The petascale is barely adequate for such demands and we are barely established at the levels of resolution and throughput that this new scale of computation affords. However, no scientific agenda worldwide is pushing the petascale frontier on all its fronts as vigorously as SciDAC. The breadth of this conference

  11. Preface: SciDAC 2005

    NASA Astrophysics Data System (ADS)

    Mezzacappa, Anthony

    2005-01-01

    On 26-30 June 2005 at the Grand Hyatt on Union Square in San Francisco several hundred computational scientists from around the world came together for what can certainly be described as a celebration of computational science. Scientists from the SciDAC Program and scientists from other agencies and nations were joined by applied mathematicians and computer scientists to highlight the many successes in the past year where computation has led to scientific discovery in a variety of fields: lattice quantum chromodynamics, accelerator modeling, chemistry, biology, materials science, Earth and climate science, astrophysics, and combustion and fusion energy science. Also highlighted were the advances in numerical methods and computer science, and the multidisciplinary collaboration cutting across science, mathematics, and computer science that enabled these discoveries. The SciDAC Program was conceived and funded by the US Department of Energy Office of Science. It is the Office of Science's premier computational science program founded on what is arguably the perfect formula: the priority and focus is science and scientific discovery, with the understanding that the full arsenal of `enabling technologies' in applied mathematics and computer science must be brought to bear if we are to have any hope of attacking and ultimately solving today's computational Grand Challenge problems. The SciDAC Program has been in existence for four years, and many of the computational scientists funded by this program will tell you that the program has given them the hope of addressing their scientific problems in full realism for the very first time. Many of these scientists will also tell you that SciDAC has also fundamentally changed the way they do computational science. We begin this volume with one of DOE's great traditions, and core missions: energy research. As we will see, computation has been seminal to the critical advances that have been made in this arena. Of course, to

  12. Stellar Evolution/Supernova Research Data Archives from the SciDAC Computational Astrophysics Consortium

    DOE Data Explorer

    Woosley, Stan [University of California, Santa Cruz]

    Theoretical high-energy astrophysics studies the most violent explosions in the universe - supernovae (the massive explosions of dying stars) and gamma ray bursts (mysterious blasts of intense radiation). The evolution of massive stars and their explosion as supernovae and/or gamma ray bursts describes how the "heavy" elements needed for life, such as oxygen and iron, are forged (nucleosynthesis) and ejected to later form new stars and planets. The Computational Astrophysics Consortium's project includes a Science Application Partnership on Adaptive Algorithms that develops the software involved. The principal science topics are - in order of priority - 1) models for Type Ia supernovae; 2) radiation transport, spectrum formation, and nucleosynthesis in model supernovae of all types; 3) the observational implications of these results for experiments in which DOE has an interest, especially the Joint Dark Energy Mission, Supernova/Acceleration Probe (SNAP) satellite observatory, the Large Synoptic Survey Telescope (LSST), and ground based supernova searches; 4) core collapse supernovae; 5) gamma-ray bursts; 6) hypernovae from Population III stars; and 7) x-ray bursts. Models of these phenomena share a common need for nuclear reactions and radiation transport coupled to multi-dimensional fluid flow. The team has developed and used supernova simulation codes to study Type Ia and core-collapse supernovae. (Taken from http://www.scidac.gov/physics/grb.html) The Stellar Evolution Data Archives contain more than 225 pre-SN models that can be freely accessed.

  13. Advanced computing

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Advanced concepts in hardware, software and algorithms are being pursued for application in next generation space computers and for ground based analysis of space data. The research program focuses on massively parallel computation and neural networks, as well as optical processing and optical networking which are discussed under photonics. Also included are theoretical programs in neural and nonlinear science, and device development for magnetic and ferroelectric memories.

  14. Opening Remarks: SciDAC 2007

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2007-09-01

    Good morning. Welcome to Boston, the home of the Red Sox, Celtics and Bruins, baked beans, tea parties, Robert Parker, and SciDAC 2007. A year ago I stood before you to share the legacy of the first SciDAC program and identify the challenges that we must address on the road to petascale computing—a road E E Cummings described as `. . . never traveled, gladly beyond any experience.' Today, I want to explore the preparations for the rapidly approaching extreme scale (X-scale) generation. These preparations are the first step propelling us along the road of burgeoning scientific discovery enabled by the application of X-scale computing. We look to petascale computing and beyond to open up a world of discovery that cuts across scientific fields and leads us to a greater understanding of not only our world, but our universe. As part of the President's American Competitiveness Initiative, the ASCR Office has been preparing a ten-year vision for computing. As part of this planning, LBNL together with ORNL and ANL hosted three town hall meetings on Simulation and Modeling at the Exascale for Energy, Ecological Sustainability and Global Security (E3). The proposed E3 initiative is organized around four programmatic themes: engaging our top scientists, engineers, computer scientists and applied mathematicians; investing in pioneering large-scale science; developing scalable analysis algorithms and storage architectures to accelerate discovery; and accelerating the build-out and future development of the DOE open computing facilities. It is clear that we have only just started down the path to extreme scale computing. Plan to attend Thursday's session on the out-briefing and discussion of these meetings. The road to the petascale has been at best rocky. In FY07, the continuing resolution provided 12% less money for Advanced Scientific Computing than either the President, the Senate, or the House. As a consequence, many of you had to absorb a no-cost extension for your

  15. UNEDF: Advanced Scientific Computing Transforms the Low-Energy Nuclear Many-Body Problem

    SciTech Connect

    Stoitsov, Mario; Nam, Hai Ah; Nazarewicz, Witold; Bulgac, Aurel; Hagen, Gaute; Kortelainen, E. M.; Pei, Junchen; Roche, K. J.; Schunck, N.; Thompson, I.; Vary, J. P.; Wild, S.

    2011-01-01

    The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper illustrates significant milestones accomplished by UNEDF through integration of the theoretical approaches, advanced numerical algorithms, and leadership-class computational resources.

  16. Opening Comments: SciDAC 2008

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2008-07-01

    for and win INCITE program allocation awards. If you have a successful SciDAC proposal, we believe it will make you successful in the INCITE review. We have the expectation that you will be among those most prepared and most ready to use the machines and to compute at scale. Over the past 18 months, we have assembled a team to look across our computational science portfolio and to judge what are the 10 most significant science accomplishments. The ASCR office, as it goes forward with OMB, the new administration, and Congress, will be judged by the science we have accomplished. All of our proposals - such as for increasing SciDAC, increasing applied mathematics, and so on - are tied to what we have accomplished in science. And so these 10 big accomplishments are key to establishing credibility for new budget requests. Tony Mezzacappa, who chaired the committee, will also give a presentation on the ranking of these top 10, how they got there, and what the science is all about. Here is the list - numbers 2, 5, 6, 7, 9, and 10 are all SciDAC projects.
    1. Modeling the Molecular Basis of Parkinson's Disease (Tsigelny)
    2. Discovery of the Standing Accretion Shock Instability and Pulsar Birth Mechanism in a Core-Collapse Supernova Evolution and Explosion (Blondin)
    3. Prediction and Design of Macromolecular Structures and Functions (Baker)
    4. Understanding How a Lifted Flame Is Stabilized in a Hot Coflow (Yoo)
    5. New Insights from LCF-enabled Advanced Kinetic Simulations of Global Turbulence in Fusion Systems (Tang)
    6. High Transition Temperature Superconductivity: A High-Temperature Superconductive State and a Pairing Mechanism in the 2-D Hubbard Model (Scalapino)
    7. PETSc: Providing the Solvers for DOE High-Performance Simulations (Smith)
    8. Via Lactea II, a Billion-Particle Simulation of the Dark Matter Halo of the Milky Way (Madau)
    9. Probing the Properties of Water through Advanced Computing (Galli)
    10. First Provably Scalable Maxwell Solver Enables Scalable Electromagnetic Simulations

  17. OPENING REMARKS: Scientific Discovery through Advanced Computing

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2006-01-01

    Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about $70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such

  18. SciDAC advances in beam dynamics simulation: from light sources to colliders

    SciTech Connect

    Qiang, J.; Borland, M.; Kabel, A.; Li, R.; Ryne, R.; Stern, E.; Wang, Y.; Wasserman, H.; Zhang, Y.

    2008-06-16

    In this paper, we report on progress that has been made in beam dynamics simulation, from light sources to colliders, during the first year of the SciDAC-2 accelerator project, "Community Petascale Project for Accelerator Science and Simulation (ComPASS)." Several parallel computational tools for beam dynamics simulation are described. A number of applications in current and future accelerator facilities (e.g., LCLS, RHIC, Tevatron, LHC, and ELIC) are presented.

  1. SciDAC Institute for Ultrascale Visualization

    SciTech Connect

    Humphreys, Grigori R.

    2008-09-30

    The Institute for Ultrascale Visualization aims to address visualization needs of SciDAC science domains, including research topics in advanced scientific visualization architectures, algorithms, and interfaces for understanding large, complex datasets. During the current project period, the focus of the team at the University of Virginia has been interactive remote rendering for scientific visualization. With high-performance computing resources enabling increasingly complex simulations, scientists may desire to interactively visualize huge 3D datasets. Traditional large-scale 3D visualization systems are often located very close to the processing clusters, and are linked to them with specialized connections for high-speed rendering. However, this tight coupling of processing and display limits possibilities for remote collaboration, and prohibits scientists from using their desktop workstations for data exploration. In this project, we are developing a client/server system for interactive remote 3D visualization on desktop computers.
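
    The client/server split described above can be sketched schematically (a minimal sketch with a hypothetical protocol and port, and a trivial procedural "renderer" standing in for the real GPU pipeline): the server receives a view parameter, renders a frame close to the data, compresses it, and ships only the image to the thin desktop client.

      import socket, struct, zlib
      import numpy as np

      def render(view_angle):
          """Stand-in renderer: returns a 512x512 8-bit grayscale frame."""
          x = np.linspace(0.0, 2.0 * np.pi, 512)
          img = 127.0 * (1.0 + np.sin(x[None, :] + view_angle * x[:, None]))
          return img.astype(np.uint8)

      srv = socket.socket()
      srv.bind(("", 9999))                     # illustrative port
      srv.listen(1)
      conn, _ = srv.accept()
      while True:
          hdr = conn.recv(8)                   # assumes the 8-byte request
          if len(hdr) < 8:                     # arrives in one read (a sketch)
              break
          (angle,) = struct.unpack("!d", hdr)  # client-requested view angle
          payload = zlib.compress(render(angle).tobytes())
          conn.sendall(struct.pack("!I", len(payload)) + payload)
      conn.close()
      srv.close()

    Only compressed frames cross the wire, which is what lets the heavy rendering stay near the simulation data while the remote desktop client merely decodes and displays.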

  2. SciDAC Fusiongrid Project--A National Collaboratory to Advance the Science of High Temperature Plasma Physics for Magnetic Fusion

    SciTech Connect

    Schissel, D.P.; Abla, G.; Burruss, J.R.; Feibush, E.; Fredian, T.W.; Goode, M.M.; Greenwald, M.J.; Keahey, K.; Leggett, T.; Li, K.; McCune, D.C.; Papka, M.E.; Randerson, L.; Sanderson, A.; Stillerman, J.; Thompson, M.R.; Uram, T.; Wallace, G.

    2006-08-31

    This report summarizes the work of the National Fusion Collaboratory (NFC) Project funded by the United States Department of Energy (DOE) under the Scientific Discovery through Advanced Computing Program (SciDAC) to develop a persistent infrastructure to enable scientific collaboration for magnetic fusion research. A five-year project initiated in 2001, it built on the past collaborative work performed within the U.S. fusion community and added the component of computer science research done with the USDOE Office of Science, Office of Advanced Scientific Computing Research. The project was a collaboration itself, uniting fusion scientists from General Atomics, MIT, and PPPL and computer scientists from ANL, LBNL, Princeton University, and the University of Utah to form a coordinated team. The group leveraged existing computer science technology where possible and extended or created new capabilities where required. Developing a reliable energy system that is economically and environmentally sustainable is the long-term goal of Fusion Energy Science (FES) research. In the U.S., FES experimental research is centered at three large facilities with a replacement value of over $1B. As these experiments have increased in size and complexity, there has been a concurrent growth in the number and importance of collaborations among large groups at the experimental sites and smaller groups located nationwide. Teaming with the experimental community is a theoretical and simulation community whose efforts range from applied analysis of experimental data to fundamental theory (e.g., realistic nonlinear 3D plasma models) that run on massively parallel computers. Looking toward the future, the large-scale experiments needed for FES research are staffed by correspondingly large, globally dispersed teams. The fusion program will be increasingly oriented toward the International Thermonuclear Experimental Reactor (ITER) where even now, a decade before operation begins, a large

  3. National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report

    SciTech Connect

    Sun, Xian-He

    2013-08-01

    As part of this project work, researchers from Vanderbilt University, Fermi National Laboratory and Illinois Institute of Technology developed a real-time, fault-tolerant cluster monitoring framework. This framework is open source and is available for download upon request. This work has also been used at Fermi Laboratory, Vanderbilt University and Mississippi State University across projects other than LQCD. The goal of the scientific workflow project is to investigate and develop domain-specific workflow tools for LQCD to help effectively orchestrate, in parallel, computational campaigns consisting of many loosely coupled batch processing jobs. Major requirements for an LQCD workflow system include: a system to manage input metadata (e.g., physics parameters such as masses); a system to manage and permit the reuse of templates describing workflows; a system to capture data provenance information; a system to manage produced data; a means of monitoring workflow progress and status; a means of resuming or extending a stopped workflow; and fault-tolerance features to enhance the reliability of running workflows. Requirements for an LQCD workflow system are available in the documentation.

  4. National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report

    SciTech Connect

    Bapty, Theodore; Dubey, Abhishek

    2013-07-18

    As part of the reliability project work, researchers from Vanderbilt University, Fermi National Laboratory and Illinois Institute of Technology developed a real-time, fault-tolerant cluster monitoring framework. The goal of the scientific workflow project is to investigate and develop domain-specific workflow tools for LQCD to help effectively orchestrate, in parallel, computational campaigns consisting of many loosely coupled batch processing jobs. Major requirements for an LQCD workflow system include: a system to manage input metadata (e.g., physics parameters such as masses); a system to manage and permit the reuse of templates describing workflows; a system to capture data provenance information; a system to manage produced data; a means of monitoring workflow progress and status; a means of resuming or extending a stopped workflow; and fault-tolerance features to enhance the reliability of running workflows. In summary, these achievements are reported:
    • Implemented a software system to manage parameters. This includes a parameter set language based on a superset of the JSON data-interchange format, parsers in multiple languages (C++, Python, Ruby), and a web-based interface tool. It also includes a templating system that can produce input text for LQCD applications like MILC.
    • Implemented a monitoring sensor framework in software that is in production on the Fermilab USQCD facility. This includes equipment health, process accounting, MPI/QMP process tracking, and batch system (Torque) job monitoring. All sensor data are available from databases, and various query tools can be used to extract common data patterns and perform ad hoc searches. Common batch system queries such as job status are available in command line tools and are used in actual workflow-based production by a subset of Fermilab users.
    • Developed a formal state machine model for scientific workflow and reliability systems. This includes the use of Vanderbilt’s Generic Modeling
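
    As a sketch of how such a parameter management and templating layer might fit together (plain JSON keeps the example self-contained, and the input keys are invented for illustration, not actual MILC syntax):

      import json
      from string import Template

      # a parameter set; the project's language is a superset of JSON,
      # so plain JSON parses in it as well
      params = json.loads("""{
          "ensemble": "example_ensemble",
          "beta": 7.47,
          "masses": {"light": 0.0012, "strange": 0.0363},
          "trajectories": 6000
      }""")

      # a reusable template describing one batch job's input text
      job_template = Template(
          "beta $beta\nmass_l $m_light\nmass_s $m_strange\ntrajecs $n_traj\n")

      input_text = job_template.substitute(
          beta=params["beta"],
          m_light=params["masses"]["light"],
          m_strange=params["masses"]["strange"],
          n_traj=params["trajectories"],
      )
      print(input_text)   # text that would be handed to a lattice application

    Keeping the physics parameters in one declarative document and expanding job inputs from templates gives the workflow system a single place to capture provenance for every job it launches.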

  5. Opening Comments: SciDAC 2009

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2009-07-01

    Ed Oliver died last year. He did a number of things in science. He had this personality where he knew exactly what to do, but he preferred to stay out of the limelight so that others could enjoy the fruits of his vision. We in the SciDAC program and ASCR Facilities are still enjoying the benefits of his vision. We will miss him. Twenty years after Ken Wilson, Ray Orbach laid out the fundamental premise for SciDAC in an interview that appeared in SciDAC Review: 'SciDAC is unique in the world. There isn't any other program like it anywhere else, and it has the remarkable ability to do science by bringing together physical scientists, mathematicians, applied mathematicians, and computer scientists who recognize that computation is not something you do at the end, but rather it needs to be built into the solution of the very problem that one is addressing.' As you look at the Lax report from 1982, it talks about how 'Future significant improvements may have to come from architectures embodying parallel processing elements—perhaps several thousands of processors.' And it continues, 'Research in languages, algorithms and numerical analysis will be crucial in learning to exploit these new architectures fully.' In the early '90s, Sterling, Messina and Smith developed a workshop report on petascale computing and concluded, 'A petaflops computer system will be feasible in two decades, or less, and rely in part on the continual advancement of the semiconductor industry both in speed enhancement and cost reduction through improved fabrication processes.' So they were not wrong, and today we are embarking on a forward look that is at a different scale, the exascale, going to 10^18 flops. In 2007, Stevens, Simon and Zacharia chaired a series of town hall meetings looking at exascale computing, and in their report wrote, 'Exascale computer systems are expected to be technologically feasible within the next 15 years, or perhaps sooner. These systems will push the envelope in a number of

  6. Opening Comments: SciDAC 2008

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2008-07-01

    Welcome to Seattle and the 2008 SciDAC Conference. This conference, the fourth in the series, is a continuation of the PI meetings we first began under SciDAC-1. I would like to start by thanking the organizing committee, and Rick Stevens in particular, for organizing this year's meeting. This morning I would like to look briefly at SciDAC, to give you a brief history of SciDAC and also look ahead to see where we plan to go over the next few years. I think the best description of SciDAC, at least the simulation part, comes from a quote from Dr Ray Orbach, DOE's Under Secretary for Science and Director of the Office of Science. In an interview that appeared in the SciDAC Review magazine, Dr Orbach said, `SciDAC is unique in the world. There isn't any other program like it anywhere else, and it has the remarkable ability to do science by bringing together physical scientists, mathematicians, applied mathematicians, and computer scientists who recognize that computation is not something you do at the end, but rather it needs to be built into the solution of the very problem that one is addressing'. Of course, that is extended not just to physical scientists, but also to biological scientists. This is a theme of computational science, this partnership among disciplines, which goes all the way back to the early 1980s and Ken Wilson. It's a unique thread within the Department of Energy. SciDAC-1, launched around the turn of the millennium, created a new generation of scientific simulation codes. It advocated building out mathematical and computing system software in support of science and a new collaboratory software environment for data. The original concept for SciDAC-1 had topical centers for the execution of the various science codes, but several corrections and adjustments were needed. The ASCR scientific computing infrastructure was also upgraded, providing the hardware facilities for the program. The computing facility that we had at that time was the big 3

  7. SciDAC Center for Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas

    SciTech Connect

    Lin, Zhihong

    2013-12-18

    During the first year of the SciDAC gyrokinetic particle simulation (GPS) project, the GPS team (Zhihong Lin, Liu Chen, Yasutaro Nishimura, and Igor Holod) at the University of California, Irvine (UCI) studied tokamak electron transport driven by electron temperature gradient (ETG) turbulence, and by trapped electron mode (TEM) turbulence and ion temperature gradient (ITG) turbulence with kinetic electron effects, and extended earlier studies of ITG turbulence spreading to core-edge coupling. We have developed and optimized an elliptic solver using the finite element method (FEM), which enables the implementation of advanced kinetic electron models (split-weight scheme and hybrid model) in the SciDAC GPS production code GTC. The GTC code has been ported and optimized on both scalar and vector parallel computer architectures, and is being transformed into object-oriented style to facilitate collaborative code development. During this period, the UCI team members presented 11 invited talks at major national and international conferences, and published 22 papers in peer-reviewed journals and 10 papers in conference proceedings. UCI hosted the annual SciDAC Workshop on Plasma Turbulence sponsored by the GPS Center, 2005-2007. The workshop was attended by about fifty US and foreign researchers and financially sponsored several graduate students from MIT, Princeton University, Germany, Switzerland, and Finland. A new SciDAC postdoc, Igor Holod, has arrived at UCI to initiate global particle simulation of magnetohydrodynamic turbulence driven by energetic particle modes. The PI, Z. Lin, has been promoted to Associate Professor with tenure at UCI.
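
    As a toy structural analogue of an elliptic field solve with finite elements (a 1-D Poisson problem in Python/NumPy; the GTC solver itself is a parallel gyrokinetic field solver, so this is only a sketch of the method, not that code):

      import numpy as np

      def fem_poisson_1d(f, n, length=1.0):
          """Solve -u'' = f on (0, length) with u = 0 at both ends,
          using linear (hat-function) finite elements on a uniform mesh."""
          h = length / (n + 1)
          x = np.linspace(h, length - h, n)    # interior nodes
          # stiffness matrix for hat functions: (1/h) * tridiag(-1, 2, -1)
          K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
          b = h * f(x)                         # lumped load vector
          return x, np.linalg.solve(K, b)

      # verify against -u'' = pi^2 sin(pi x), whose solution is u = sin(pi x)
      x, u = fem_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x), n=99)
      print("max error:", np.abs(u - np.sin(np.pi * x)).max())

    The production solver assembles the same kind of sparse stiffness system on the gyrokinetic mesh and solves it in parallel at every time step.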

  8. Interfaces for Advanced Computing.

    ERIC Educational Resources Information Center

    Foley, James D.

    1987-01-01

    Discusses the coming generation of supercomputers that will have the power to make elaborate "artificial realities" that facilitate user-computer communication. Illustrates these technological advancements with examples of the use of head-mounted monitors which are connected to position and orientation sensors, and gloves that track finger and…

  9. UNEDF: Advanced Scientific Computing Collaboration Transforms the Low-Energy Nuclear Many-Body Problem

    NASA Astrophysics Data System (ADS)

    Nam, H.; Stoitsov, M.; Nazarewicz, W.; Bulgac, A.; Hagen, G.; Kortelainen, M.; Maris, P.; Pei, J. C.; Roche, K. J.; Schunck, N.; Thompson, I.; Vary, J. P.; Wild, S. M.

    2012-12-01

    The demands of cutting-edge science are driving the need for larger and faster computing resources. With the rapidly growing scale of computing systems and the prospect of technologically disruptive architectures to meet these needs, scientists face the challenge of effectively using complex computational resources to advance scientific discovery. Multi-disciplinary collaborating networks of researchers with diverse scientific backgrounds are needed to address these complex challenges. The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper describes UNEDF and identifies attributes that classify it as a successful computational collaboration. We illustrate significant milestones accomplished by UNEDF through integrative solutions using the most reliable theoretical approaches, most advanced algorithms, and leadership-class computational resources.

  10. UNEDF: Advanced Scientific Computing Collaboration Transforms the Low-Energy Nuclear Many-Body Problem

    SciTech Connect

    Nam, H.; Stoitsov, M.; Nazarewicz, W.; Bulgac, A.; Hagen, G.; Kortelainen, M.; Maris, P.; Pei, J. C.; Roche, K. J.; Schunck, N.; Thompson, I.; Vary, J. P.; Wild, S. M.

    2012-12-20

    The demands of cutting-edge science are driving the need for larger and faster computing resources. With the rapidly growing scale of computing systems and the prospect of technologically disruptive architectures to meet these needs, scientists face the challenge of effectively using complex computational resources to advance scientific discovery. Multi-disciplinary collaborating networks of researchers with diverse scientific backgrounds are needed to address these complex challenges. The UNEDF SciDAC collaboration of nuclear theorists, applied mathematicians, and computer scientists is developing a comprehensive description of nuclei and their reactions that delivers maximum predictive power with quantified uncertainties. This paper describes UNEDF and identifies attributes that classify it as a successful computational collaboration. Finally, we illustrate significant milestones accomplished by UNEDF through integrative solutions using the most reliable theoretical approaches, most advanced algorithms, and leadership-class computational resources.

  11. Advanced Computing for Medicine.

    ERIC Educational Resources Information Center

    Rennels, Glenn D.; Shortliffe, Edward H.

    1987-01-01

    Discusses contributions that computers and computer networks are making to the field of medicine. Emphasizes the computer's speed in storing and retrieving data. Suggests that doctors may soon be able to use computers to advise on diagnosis and treatment. (TW)

  12. Building a Universal Nuclear Energy Density Functional (UNEDF). SciDAC-2 Project

    SciTech Connect

    Vary, James P.; Carlson, Joe; Furnstahl, Dick; Horoi, Mihai; Lusk, Rusty; Nazarewicz, Witek; Ng, Esmond; Thompson, Ian

    2012-09-29

    An understanding of the properties of atomic nuclei is crucial for a complete nuclear theory, for element formation, for properties of stars, and for present and future energy and defense applications. During the period of Dec. 1 2006 – Jun. 30, 2012, the UNEDF collaboration carried out a comprehensive study of all nuclei, based on the most accurate knowledge of the strong nuclear interaction, the most reliable theoretical approaches, the most advanced algorithms, and extensive computational resources, with a view towards scaling to the petaflop platforms and beyond. Until recently such an undertaking was hard to imagine, and even at the present time such an ambitious endeavor would be far beyond what a single researcher or a traditional research group could carry out. The UNEDF SciDAC project has developed several key computational codes and algorithms for reaching the goal of solving the nuclear quantum many-body problem throughout the chart of nuclei. Without such developments, scientific progress would not be possible. In addition the UNEDF SciDAC successfully applied these developments to solve many forefront research problems.

  13. Opening Comments: SciDAC 2009

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2009-07-01

    Welcome to San Diego and the 2009 SciDAC conference. Over the next four days, I would like to present an assessment of the SciDAC program. We will look at where we've been, how we got to where we are and where we are going in the future. Our vision is to be first in computational science, to be best in class in modeling and simulation. When Ray Orbach asked me what I would do, in my job interview for the SciDAC Director position, I said we would achieve that vision. And with our collective dedicated efforts, we have managed to achieve this vision. In the last year, we have now the most powerful supercomputer for open science, Jaguar, the Cray XT system at the Oak Ridge Leadership Computing Facility (OLCF). We also have NERSC, probably the best-in-the-world program for productivity in science that the Office of Science so depends on. And the Argonne Leadership Computing Facility offers architectural diversity with its IBM Blue Gene/P system as a counterbalance to Oak Ridge. There is also ESnet, which is often understated—the 40 gigabit per second dual backbone ring that connects all the labs and many DOE sites. In the President's Recovery Act funding, there is exciting news that ESnet is going to build out to a 100 gigabit per second network using new optical technologies. This is very exciting news for simulations and large-scale scientific facilities. But as one noted SciDAC luminary said, it's not all about the computers—it's also about the science—and we are also achieving our vision in this area. Together with having the fastest supercomputer for science, at the SC08 conference, SciDAC researchers won two ACM Gordon Bell Prizes for the outstanding performance of their applications. The DCA++ code, which solves some very interesting problems in materials, achieved a sustained performance of 1.3 petaflops, an astounding result and a mark I suspect will last for some time. The LS3DF application for studying nanomaterials also required the development of a

  14. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. They present Synergia's design principles and its performance on HPC platforms.

  15. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. They present Synergia's design principles and its performance on HPC platforms.

  17. Advanced Computing for Science.

    ERIC Educational Resources Information Center

    Hut, Piet; Sussman, Gerald Jay

    1987-01-01

    Discusses some of the contributions that high-speed computing is making to the study of science. Emphasizes the use of computers in exploring complicated systems without the simplification required in traditional methods of observation and experimentation. Provides examples of computer assisted investigations in astronomy and physics. (TW)

  18. Advanced computer languages

    SciTech Connect

    Bryce, H.

    1984-05-03

    If software is to become an equal partner in the so-called fifth generation of computers, which of course it must, programming languages and the human interface will need to clear some high hurdles. Again, the solutions being sought turn to cerebral emulation, in this case the way that human beings understand language. The result would be natural, English-like languages that would allow a person to communicate with a computer much as he or she does with another person. In the discussion the authors look at fourth-level and fifth-level languages used in meeting the goals of AI; these higher-level languages aim to be nonprocedural. Applications of LISP and Forth to natural-language interfaces are described, as well as programs such as the Natural Link technology package, written in C.

  19. KEYNOTE: Simulation, computation, and the Global Nuclear Energy Partnership

    NASA Astrophysics Data System (ADS)

    Reis, Victor, Dr.

    2006-01-01

    Dr. Victor Reis delivered the keynote talk at the closing session of the conference. The talk was forward looking and focused on the importance of advanced computing for large-scale nuclear energy goals such as the Global Nuclear Energy Partnership (GNEP). Dr. Reis discussed the important connections of GNEP to the Scientific Discovery through Advanced Computing (SciDAC) program and the SciDAC research portfolio. In the context of GNEP, Dr. Reis talked about possible fuel leasing configurations, strategies for their implementation, and typical fuel cycle flow sheets. A major portion of the talk addressed lessons learned from ‘Science Based Stockpile Stewardship’ and the Accelerated Strategic Computing Initiative (ASCI), and how they can provide guidance for advancing GNEP and SciDAC goals. Dr. Reis’s colorful and informative presentation included international proverbs, quotes, and comments, in tune with the international flavor that is part of the GNEP philosophy and plan. He concluded with a positive and motivating outlook for peaceful nuclear energy and its potential to solve global problems. An interview with Dr. Reis, addressing some of the above issues, is the cover story of Issue 2 of the SciDAC Review, available at http://www.scidacreview.org. This summary of Dr. Reis’s PowerPoint presentation was prepared by Institute of Physics Publishing; the complete PowerPoint version of Dr. Reis’s talk at SciDAC 2006 is given as a multimedia attachment to this summary.

  20. SciDAC-2 software infrastructure for lattice QCD.

    SciTech Connect

    Balint Joo

    2007-06-01

    We present work carried out by the USQCD Collaboration on software infrastructure funded under SciDAC-2. We describe successes of the software from the original SciDAC-1 project as well as ongoing and future work, and we outline the various scientific collaborations that SciDAC-2 has created.

  1. Center for Advanced Computational Technology

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    2000-01-01

    The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence in the modeling, analysis, sensitivity studies, optimization, design, and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as a pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state of the art in applications of advanced computational technology to the analysis, design, prototyping, and operation of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; and writing state-of-the-art monographs and NASA special publications on timely topics.

  2. Recent advances in computational aerodynamics

    NASA Astrophysics Data System (ADS)

    Agarwal, Ramesh K.; Desse, Jerry E.

    1991-04-01

    The current state of the art in computational aerodynamics is described. Recent advances in the discretization of surface geometry, grid generation, and flow simulation algorithms have led to flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics is emerging as a crucial enabling technology for the development and design of flight vehicles. Examples illustrating the current capability for the prediction of aircraft, launch vehicle, and helicopter flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology, and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.

  3. Advanced flight computer. Special study

    NASA Technical Reports Server (NTRS)

    Coo, Dennis

    1995-01-01

    This report documents a special study to define a 32-bit, radiation-hardened, SEU-tolerant flight computer architecture, and to investigate current or near-term technologies and development efforts that contribute to the Advanced Flight Computer (AFC) design and development. An AFC processing node architecture is defined. Each node may consist of a multi-chip processor as needed. The modular, building-block approach uses VLSI technology and packaging methods that demonstrate a feasible AFC module in 1998 that meets the AFC goals. The defined architecture and approach demonstrate a clear low-risk, low-cost path to the 1998 production goal, with intermediate prototypes in 1996.

  4. Computational Science Research in Support of Petascale Electromagnetic Modeling

    SciTech Connect

    Lee, L.-Q.; Akcelik, V; Ge, L; Chen, S; Schussman, G; Candel, A; Li, Z; Xiao, L; Kabel, A; Uplenchwar, R; Ng, C; Ko, K; /SLAC

    2008-06-20

    Computational science research components were vital parts of the SciDAC-1 accelerator project and are continuing to play a critical role in the newly funded SciDAC-2 accelerator project, the Community Petascale Project for Accelerator Science and Simulation (ComPASS). Recent advances and achievements in the area of computational science research in support of petascale electromagnetic modeling for accelerator design analysis are presented. These include shape determination of superconducting RF cavities, a mesh-based multilevel preconditioner for solving highly indefinite linear systems, a moving window using h- or p-refinement for time-domain short-range wakefield calculations, and improved scalable application I/O.

  5. Performance Engineering Research Institute SciDAC-2 Enabling Technologies Institute Final Report

    SciTech Connect

    Lucas, Robert

    2013-04-20

    Enhancing the performance of SciDAC applications on petascale systems had high priority within DOE SC at the start of the second phase of the SciDAC program, SciDAC-2, as it continues to do today. Achieving expected levels of performance on high-end computing (HEC) systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges, the University of Southern California's Information Sciences Institute organized the Performance Engineering Research Institute (PERI). PERI implemented a unified, tripartite research plan encompassing: (1) performance modeling and prediction; (2) automatic performance tuning; and (3) performance engineering of high-profile applications. Within PERI, USC's primary research activity was automatic tuning (autotuning) of scientific software. This activity was spurred by the strong user preference for automatic tools and was based on previous successful activities such as ATLAS, which automatically tuned components of the LAPACK linear algebra library, and other recent work on autotuning domain-specific libraries. Our other major component was application engagement, to which we devoted approximately 30% of our effort, working directly with SciDAC-2 applications. This report is a summary of the overall results of the USC PERI effort.

  6. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, P.; Cary, J.; McInnes, L.C.; Mori, W.; Ng, C.; Ng, E.; Ryne, R.; /LBL, Berkeley

    2011-11-14

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization

  7. Advances in computational solvation thermodynamics

    NASA Astrophysics Data System (ADS)

    Wyczalkowski, Matthew A.

    The aim of this thesis is to develop improved methods for calculating the free energy, entropy and enthalpy of solvation from molecular simulations. Solvation thermodynamics of model compounds provides quantitative measurements used to analyze the stability of protein conformations in aqueous milieus. Solvation free energies govern the favorability of the solvation process, while entropy and enthalpy decompositions give insight into the molecular mechanisms by which the process occurs. Computationally, a coupling parameter lambda modulates solute-solvent interactions to simulate an insertion process, and multiple lengthy simulations at a fixed lambda value are typically required for free energy calculations to converge; entropy and enthalpy decompositions generally take 10-100 times longer. This thesis presents three advances which accelerate the convergence of such calculations: (1) Development of entropy and enthalpy estimators which combine data from multiple simulations; (2) Optimization of lambda schedules, or the set of parameter values associated with each simulation; (3) Validation of Hamiltonian replica exchange, a technique which swaps lambda values between two otherwise independent simulations. Taken together, these techniques promise to increase the accuracy and precision of free energy, entropy and enthalpy calculations. Improved estimates, in turn, can be used to investigate the validity and limits of existing solvation models and refine force field parameters, with the goal of understanding better the collapse transition and aggregation behavior of polypeptides.
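
    To make the coupling-parameter approach concrete, here is a minimal sketch of the standard thermodynamic-integration estimator this line of work builds on: the free energy difference is the integral over lambda of the mean of dU/dlambda. The function name and inputs are illustrative, not taken from the thesis.

    ```python
    import numpy as np

    def ti_free_energy(lambdas, dudl_samples):
        """Thermodynamic-integration estimate of a solvation free energy:
        Delta F = integral_0^1 <dU/dlambda>_lambda dlambda, evaluated
        with the trapezoidal rule over a discrete lambda schedule.

        lambdas      : increasing coupling-parameter values in [0, 1]
        dudl_samples : one 1-D array of dU/dlambda samples per lambda,
                       drawn from an equilibrated simulation at that lambda
        """
        lambdas = np.asarray(lambdas, dtype=float)
        means = np.array([s.mean() for s in dudl_samples])
        # standard error of each mean, assuming decorrelated samples
        sems = np.array([s.std(ddof=1) / np.sqrt(s.size) for s in dudl_samples])
        # trapezoidal quadrature weights for a possibly non-uniform schedule
        w = np.zeros_like(means)
        w[:-1] += np.diff(lambdas) / 2.0
        w[1:] += np.diff(lambdas) / 2.0
        return w @ means, np.sqrt(np.sum((w * sems) ** 2))
    ```

    The quadrature weights are where an optimized lambda schedule pays off: concentrating points where the mean of dU/dlambda varies rapidly reduces the integration error for the same simulation budget, which is the point of advance (2) above.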

  8. Advancing manufacturing through computational chemistry

    SciTech Connect

    Noid, D.W.; Sumpter, B.G.; Tuzun, R.E.

    1995-12-31

    The capabilities of nanotechnology and computational chemistry are reaching a point of convergence. New computer hardware and novel computational methods have created opportunities to test proposed nanometer-scale devices, investigate molecular manufacturing, and model and predict properties of new materials. Experimental methods are also beginning to provide new capabilities that make the possibility of manufacturing various devices with atomic precision tangible. In this paper, we will discuss some of the novel computational methods we have used in molecular dynamics simulations of polymer processes, neural network predictions of new materials, and simulations of proposed nano-bearings and fluid dynamics in nano-sized devices.

  9. Quantum chromodynamics with advanced computing

    SciTech Connect

    Kronfeld, Andreas S.; /Fermilab

    2008-07-01

    We survey results in lattice quantum chromodynamics from groups in the USQCD Collaboration. The main focus is on physics, but many aspects of the discussion are aimed at an audience of computational physicists.

  10. Aerodynamic Analyses Requiring Advanced Computers, Part 1

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Papers are presented which deal with results of theoretical research on aerodynamic flow problems requiring the use of advanced computers. Topics discussed include: viscous flows, boundary layer equations, turbulence modeling and Navier-Stokes equations, and internal flows.

  11. Aerodynamic Analyses Requiring Advanced Computers, part 2

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Papers given at the conference present the results of theoretical research on aerodynamic flow problems requiring the use of advanced computers. Topics discussed include two-dimensional configurations, three-dimensional configurations, transonic aircraft, and the space shuttle.

  12. Bringing Advanced Computational Techniques to Energy Research

    SciTech Connect

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  13. Advanced algorithm for orbit computation

    NASA Technical Reports Server (NTRS)

    Szebehely, V.

    1983-01-01

    Computational and analytical techniques which simplify the solution of complex problems in orbit mechanics, astrodynamics, and celestial mechanics were developed. The major tool of the simplification is the substitution of transformations in place of numerical or analytical integrations. In this way the rather complicated equations of orbit mechanics can sometimes be reduced to linear equations representing harmonic oscillators with constant coefficients.
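
    The kind of reduction meant here can be made concrete with a classical example, the Levi-Civita regularization of the planar Kepler problem; this is shown for illustration and is not necessarily the specific transformation used in the report.

    ```latex
    % Planar Kepler motion in a complex coordinate z, with energy E:
    \ddot{z} = -\,\mu\,\frac{z}{|z|^{3}}, \qquad
    E = \tfrac{1}{2}\,|\dot{z}|^{2} - \frac{\mu}{|z|}
    % Substituting z = w^{2} and a fictitious time s with dt = |w|^{2}\,ds
    % turns the equation of motion into
    w'' - \tfrac{E}{2}\,w = 0
    % so for bound orbits (E < 0) the motion is a harmonic oscillator
    % with constant frequency \omega = \sqrt{-E/2}.
    ```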

  14. 77 FR 62231 - DOE/Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-12

    .../Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of Open Meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing...: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building;...

  15. 76 FR 31945 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-02

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... the Advanced Scientific Computing Advisory Committee (ASCAC). The Federal Advisory Committee Act (Pub... INFORMATION CONTACT: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown...

  16. Advanced Biomedical Computing Center (ABCC) | DSITP

    Cancer.gov

    The Advanced Biomedical Computing Center (ABCC), located in Frederick Maryland (MD), provides HPC resources for both NIH/NCI intramural scientists and the extramural biomedical research community. Its mission is to provide HPC support, to provide collaborative research, and to conduct in-house research in various areas of computational biology and biomedical research.

  17. Advances and trends in computational structural mechanics

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1986-01-01

    Recent developments in computational structural mechanics are reviewed with reference to computational needs for future structures technology, advances in computational models for material behavior, discrete element technology, assessment and control of numerical simulations of structural response, hybrid analysis, and techniques for large-scale optimization. Research areas in computational structural mechanics which have high potential for meeting future technological needs are identified. These include prediction and analysis of the failure of structural components made of new materials, development of computational strategies and solution methodologies for large-scale structural calculations, and assessment of reliability and adaptive improvement of response predictions.

  18. Opportunities in computational mechanics: Advances in parallel computing

    SciTech Connect

    Lesar, R.A.

    1999-02-01

    In this paper, the authors will discuss recent advances in computing power and the prospects for using these new capabilities for studying plasticity and failure. They will first review the new capabilities made available with parallel computing. They will discuss how these machines perform and how well their architecture might work on materials issues. Finally, they will give some estimates on the size of problems possible using these computers.

  19. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing and Modernization Program (HPCMP) and the NASA Advanced Supercomputing Division (NAS), a study is conducted to assess the role of supercomputers in computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web-based questionnaire that was designed to capture the nuances of high-performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  20. Advances and trends in computational structures technology

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Venneri, S. L.

    1990-01-01

    The major goals of computational structures technology (CST) are outlined, and recent advances in CST are examined. These include computational material modeling, stochastic-based modeling, computational methods for articulated structural dynamics, strategies and numerical algorithms for new computing systems, and multidisciplinary analysis and optimization. The role of CST in the future development of structures technology and the multidisciplinary design of future flight vehicles is addressed, and the future directions of CST research in the prediction of failures of structural components, the solution of large-scale structural problems, and quality assessment and control of numerical simulations are discussed.

  1. America's most computer advanced healthcare facilities.

    PubMed

    1993-02-01

    Healthcare Informatics polled industry experts for nominations for this listing of America's Most Computer-Advanced Healthcare Facilities. Nominations were reviewed for extent of departmental automation, leading-edge applications, advanced point-of-care technologies, and networking and communications capabilities. Additional consideration was given to smaller facilities automated beyond "normal expectations." Facility representatives who believe their organizations should be included in our next listing should contact Healthcare Informatics for a nomination form.

  2. Advances and Challenges in Computational Plasma Science

    SciTech Connect

    W.M. Tang; V.S. Chan

    2005-01-03

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behavior. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples chosen from associated research areas such as microturbulence and magnetohydrodynamics, among other topics. Progress has been stimulated in particular by the exponential growth of computer speed along with significant improvements in computer technology.

  3. The impact of SciDAC on US climate change research and the IPCC AR4

    NASA Astrophysics Data System (ADS)

    Wehner, Michael

    2005-01-01

    SciDAC has invested heavily in climate change research. We offer a candid opinion as to the impact of the DOE laboratories' SciDAC projects on the upcoming Fourth Assessment Report of the Intergovernmental Panel on Climate Change.

  4. Computational Biology, Advanced Scientific Computing, and Emerging Computational Architectures

    SciTech Connect

    2007-06-27

    This CRADA was established at the start of FY02 with $200 K from IBM and matching funds from DOE to support post-doctoral fellows in collaborative research between International Business Machines and Oak Ridge National Laboratory to explore effective use of emerging petascale computational architectures for the solution of computational biology problems. 'No cost' extensions of the CRADA were negotiated with IBM for FY03 and FY04.

  5. Advanced networks and computing in healthcare

    PubMed Central

    Ackerman, Michael

    2011-01-01

    As computing and network capabilities continue to rise, it becomes increasingly important to understand the varied applications for using them to provide healthcare. The objective of this review is to identify key characteristics and attributes of healthcare applications involving the use of advanced computing and communication technologies, drawing upon 45 research and development projects in telemedicine and other aspects of healthcare funded by the National Library of Medicine over the past 12 years. Only projects publishing in the professional literature were included in the review. Four projects did not publish beyond their final reports. In addition, the authors drew on their first-hand experience as project officers, reviewers and monitors of the work. Major themes in the corpus of work were identified, characterizing key attributes of advanced computing and network applications in healthcare. Advanced computing and network applications are relevant to a range of healthcare settings and specialties, but they are most appropriate for solving a narrower range of problems in each. Healthcare projects undertaken primarily to explore potential have also demonstrated effectiveness and depend on the quality of network service as much as bandwidth. Many applications are enabling, making it possible to provide service or conduct research that previously was not possible or to achieve outcomes in addition to those for which projects were undertaken. Most notable are advances in imaging and visualization, collaboration and sense of presence, and mobility in communication and information-resource use. PMID:21486877

  6. Predictive Dynamic Security Assessment through Advanced Computing

    SciTech Connect

    Huang, Zhenyu; Diao, Ruisheng; Jin, Shuangshuang; Chen, Yousu

    2014-11-30

    Traditional dynamic security assessment is limited by several factors and thus falls short in providing real-time information that is predictive for power system operation. These factors include the steady-state assumption of current operating points, static transfer limits, and low computational speed. This paper addresses these factors and frames predictive dynamic security assessment. The primary objective of predictive dynamic security assessment is to enhance the functionality and computational process of dynamic security assessment through the use of high-speed phasor measurements and the application of advanced computing technologies for faster-than-real-time simulation. This paper presents algorithms, computing platforms, and simulation frameworks that constitute the predictive dynamic security assessment capability. Examples of phasor application and fast computation for dynamic security assessment are included to demonstrate the feasibility and speed enhancement for real-time applications.
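
    As a toy illustration of the kind of transient simulation that must run faster than real time, consider the classical single-machine swing equation. This sketch, including every parameter value, is illustrative only and is not the paper's algorithm.

    ```python
    import numpy as np

    def swing_trajectory(delta0, omega0, t_end, dt=1e-3,
                         M=0.1, D=0.05, Pm=0.8, Pmax=1.2):
        """Integrate the classical swing equation
            M * delta'' = Pm - Pmax*sin(delta) - D*delta'
        with fixed-step RK4 and return the rotor-angle trajectory.
        A simple stability screen checks whether delta stays bounded."""
        def f(state):
            delta, omega = state
            return np.array([omega,
                             (Pm - Pmax * np.sin(delta) - D * omega) / M])

        n = int(t_end / dt)
        traj = np.empty((n + 1, 2))
        traj[0] = (delta0, omega0)
        for i in range(n):
            s = traj[i]
            k1 = f(s)
            k2 = f(s + 0.5 * dt * k1)
            k3 = f(s + 0.5 * dt * k2)
            k4 = f(s + dt * k3)
            traj[i + 1] = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        return traj
    ```

    Predictive assessment in the paper's sense would seed models like this with the latest phasor-measured state and run many contingencies in parallel, faster than wall-clock time.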

  7. Recent advances in computer image generation simulation.

    PubMed

    Geltmacher, H E

    1988-11-01

    An explosion in flight simulator technology over the past 10 years is revolutionizing U.S. Air Force (USAF) operational training. The single, most important development has been in computer image generation. However, other significant advances are being made in simulator handling qualities, real-time computation systems, and electro-optical displays. These developments hold great promise for achieving high fidelity combat mission simulation. This article reviews the progress to date and predicts its impact, along with that of new computer science advances such as very high speed integrated circuits (VHSIC), on future USAF aircrew simulator training. Some exciting possibilities are multiship, full-mission simulators at replacement training units, miniaturized unit level mission rehearsal training simulators, onboard embedded training capability, and national scale simulator networking.

  8. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.
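
    For readers unfamiliar with the cluster algorithms the report generalizes, here is a minimal Swendsen-Wang update for the standard 2D Ising model; this is the textbook algorithm, not the report's generalized framework.

    ```python
    import numpy as np

    def swendsen_wang_step(spins, beta, J=1.0, rng=None):
        """One Swendsen-Wang cluster update for the 2D Ising model
        H = -J * sum_<ij> s_i * s_j with periodic boundaries.
        spins: L x L array of +/-1, updated in place."""
        rng = rng or np.random.default_rng()
        L = spins.shape[0]
        n = L * L
        parent = np.arange(n)  # union-find forest over lattice sites

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path halving
                i = parent[i]
            return i

        p = 1.0 - np.exp(-2.0 * beta * J)  # bond activation probability
        for x in range(L):
            for y in range(L):
                i = x * L + y
                for dx, dy in ((1, 0), (0, 1)):  # right and down bonds
                    xn, yn = (x + dx) % L, (y + dy) % L
                    if spins[x, y] == spins[xn, yn] and rng.random() < p:
                        ri, rj = find(i), find(xn * L + yn)
                        if ri != rj:
                            parent[ri] = rj

        # flip every cluster independently with probability 1/2
        flip = rng.random(n) < 0.5
        roots = np.array([find(i) for i in range(n)])
        spins *= np.where(flip[roots], -1, 1).reshape(L, L)
        return spins
    ```

    Because whole correlated clusters flip at once, critical slowing down is greatly reduced relative to single-spin Metropolis updates, which is what made this construction worth generalizing to other models.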

  9. Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a

  10. Airborne Advanced Reconfigurable Computer System (ARCS)

    NASA Technical Reports Server (NTRS)

    Bjurman, B. E.; Jenkins, G. M.; Masreliez, C. J.; Mcclellan, K. L.; Templeman, J. E.

    1976-01-01

    A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.

  11. Performance Engineering Research Institute SciDAC-2 Enabling Technologies Institute Final Report

    SciTech Connect

    Hall, Mary

    2014-09-19

    Enhancing the performance of SciDAC applications on petascale systems has high priority within DOE SC. As we look to the future, achieving expected levels of performance on high-end computing (HEC) systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges, PERI has implemented a unified, tripartite research plan encompassing: (1) performance modeling and prediction; (2) automatic performance tuning; and (3) performance engineering of high-profile applications. The PERI performance modeling and prediction activity is developing and refining performance models, significantly reducing the cost of collecting the data upon which the models are based, and increasing model fidelity, speed, and generality. Our primary research activity is automatic tuning (autotuning) of scientific software. This activity is spurred by the strong user preference for automatic tools and is based on previous successful activities such as ATLAS, which has automatically tuned components of the LAPACK linear algebra library, and other recent work on autotuning domain-specific libraries. Our third major component is application engagement, to which we are devoting approximately 30% of our effort to work directly with SciDAC-2 applications. This last activity not only helps DOE scientists meet their near-term performance goals, but also helps keep PERI research focused on the real challenges facing DOE computational scientists as they enter the Petascale Era.

  12. 78 FR 6087 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-29

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing..., Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of...

  13. 75 FR 9887 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-04

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U.S. Department...

  14. 76 FR 9765 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-22

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing..., Office of Advanced Scientific Computing Research, SC-21/Germantown Building, U.S. Department of...

  15. 78 FR 41046 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-09

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... hereby given that the Advanced Scientific Computing Advisory Committee will be renewed for a two-year... (DOE), on the Advanced Scientific Computing Research Program managed by the Office of...

  16. 75 FR 64720 - DOE/Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-20

    .../Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department...

  17. 76 FR 41234 - Advanced Scientific Computing Advisory Committee Charter Renewal

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ... Advanced Scientific Computing Advisory Committee Charter Renewal AGENCY: Department of Energy, Office of... Administration, notice is hereby given that the Advanced Scientific Computing Advisory Committee will be renewed... concerning the Advanced Scientific Computing program in response only to charges from the Director of...

  18. 78 FR 56871 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-16

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U.S. Department...

  19. 77 FR 45345 - DOE/Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-31

    .../Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing... Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U.S. Department...

  20. 75 FR 43518 - Advanced Scientific Computing Advisory Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-26

    ... Advanced Scientific Computing Advisory Committee; Meeting AGENCY: Office of Science, DOE. ACTION: Notice of open meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing Advisory..., Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of...

  1. 77 FR 12823 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION: Notice of Open Meeting. SUMMARY: This notice announces a meeting of the Advanced Scientific Computing..., Office of Advanced Scientific Computing Research, SC-21/Germantown Building, U.S. Department of...

  2. Computational Design of Advanced Nuclear Fuels

    SciTech Connect

    Savrasov, Sergey; Kotliar, Gabriel; Haule, Kristjan

    2014-06-03

    The objective of the project was to develop a method for theoretical understanding of nuclear fuel materials whose physical and thermophysical properties can be predicted from first principles using a novel dynamical mean field method for electronic structure calculations. We concentrated our study on uranium, plutonium, their oxides, nitrides, and carbides, as well as some rare earth materials whose 4f electrons provide a simplified framework for understanding the complex behavior of the f electrons. We addressed issues connected to the electronic structure, lattice instabilities, phonon and magnon dynamics, as well as thermal conductivity. This allowed us to evaluate characteristics of advanced nuclear fuel systems using computer-based simulations and avoid costly experiments.
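
    As background on the method named here, dynamical mean field theory replaces the full lattice problem by a self-consistently determined quantum impurity problem. A schematic single-band form of the self-consistency loop follows; it is shown for orientation only, since this record does not describe the project's actual implementation.

    ```latex
    % Local Green's function from the current self-energy \Sigma:
    G_{\mathrm{loc}}(i\omega_n) = \sum_{\mathbf{k}}
      \left[\, i\omega_n + \mu - \varepsilon_{\mathbf{k}} - \Sigma(i\omega_n) \,\right]^{-1}
    % Weiss field defining the auxiliary impurity problem:
    \mathcal{G}_{0}^{-1}(i\omega_n) = G_{\mathrm{loc}}^{-1}(i\omega_n) + \Sigma(i\omega_n)
    % An impurity solver maps \mathcal{G}_0 to a new \Sigma; iterate to convergence.
    ```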

  3. Advances in computed tomography imaging technology.

    PubMed

    Ginat, Daniel Thomas; Gupta, Rajiv

    2014-07-11

    Computed tomography (CT) is an essential tool in diagnostic imaging for evaluating many clinical conditions. In recent years, there have been several notable advances in CT technology that already have had or are expected to have a significant clinical impact, including extreme multidetector CT, iterative reconstruction algorithms, dual-energy CT, cone-beam CT, portable CT, and phase-contrast CT. These techniques and their clinical applications are reviewed and illustrated in this article. In addition, emerging technologies that address deficiencies in these modalities are discussed.

  4. ATCA for Machines: Advanced Telecommunications Computing Architecture

    SciTech Connect

    Larsen, R.S.; /SLAC

    2008-04-22

    The Advanced Telecommunications Computing Architecture is a new industry open standard for electronics instrument modules and shelves being evaluated for the International Linear Collider (ILC). It is the first industrial standard designed for High Availability (HA). ILC availability simulations have shown clearly that the capabilities of ATCA are needed in order to achieve acceptable integrated luminosity. The ATCA architecture looks attractive for beam instruments and detector applications as well. This paper provides an overview of ongoing R&D including application of HA principles to power electronics systems.

  5. An introduction to NASA's advanced computing program: Integrated computing systems in advanced multichip modules

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Alkalai, Leon

    1996-01-01

    Recent changes within NASA's space exploration program favor the design, implementation, and operation of low cost, lightweight, small and micro spacecraft with multiple launches per year. In order to meet the future needs of these missions with regard to the use of spacecraft microelectronics, NASA's advanced flight computing (AFC) program is currently considering industrial cooperation and advanced packaging architectures. In relation to this, the AFC program is reviewed, considering the design and implementation of NASA's AFC multichip module.

  6. Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Sussman, Alan

    2014-10-21

    This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.

  7. SciDAC - The Scientific Data Management Center (http://sdmcenter.lbl.gov)

    SciTech Connect

    Ling Liu Calton Pu

    2005-06-20

    In the SciDAC SDM project, the main assignment of the Georgia Institute of Technology team (according to the proposed work) was to develop advanced information extraction and information integration technologies on top of the XWRAP technology originated at Georgia Tech [LPH01]. We have developed the XWRAPComposer technology to enable the XWRAP code generator to generate Java information wrappers that are capable of extracting data from multiple linked pages. These information wrappers are used as gateways or adaptors that allow scientific information mediators to access and fuse interesting data and answer complex queries over a large collection of heterogeneous scientific information sources. Our accomplishments over the SciDAC-sponsored years (July 2001 to July 2004) can be summarized along two dimensions. Technically, we have produced a number of major software releases and published over 30 research papers in international conferences and international journals. The software releases include: (1) five Java wrappers and five WSDL-enabled wrappers for SDM pilot scenarios, released in early 2003; (2) the XWRAPComposer toolkit (command line version), first released in late 2003 and again in summer 2004; (3) five Ptolemy wrapper actors, first released in summer 2003 and again in fall 2005; and (4) the decomposable XWRAPComposer actor in Ptolemy, which we made available as open source at the end of 2004 and tested in early 2005.
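
    To illustrate the multi-page extraction pattern that XWRAPComposer automates (the generated wrappers themselves are Java), here is a toy Python sketch; the function names and the extraction callback are hypothetical, not part of XWRAP.

    ```python
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collects the href targets of all <a> tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def extract_from_linked_pages(start_url, extract_record, max_pages=5):
        """Fetch start_url, follow its links, and apply extract_record
        (a callable mapping raw HTML to a record dict) to each page."""
        collector = LinkCollector()
        collector.feed(urlopen(start_url).read().decode("utf-8", "replace"))
        records = []
        for href in collector.links[:max_pages]:
            page = urlopen(urljoin(start_url, href)).read()
            records.append(extract_record(page.decode("utf-8", "replace")))
        return records
    ```

    A production wrapper would add what this sketch omits: robust pattern induction for the extraction rules, error recovery, and emission of results in a mediator-friendly schema (e.g., via the WSDL interfaces mentioned above).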

  8. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, Panagiotis; Cary, John; Mcinnes, Lois Curfman; Mori, Warren; Ng, Cho; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2008-07-01

    The design and performance optimization of particle accelerators is essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC1 Accelerator Science and Technology project, the SciDAC2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multi-physics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  9. Community petascale project for accelerator science and simulation : Advancing computational science for future accelerators and accelerator technologies.

    SciTech Connect

    Spentzouris, P.; Cary, J.; McInnes, L. C.; Mori, W.; Ng, C.; Ng, E.; Ryne, R.

    2008-01-01

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  10. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    SciTech Connect

    Spentzouris, Panagiotis; Cary, John; Mcinnes, Lois Curfman; Mori, Warren; Ng, Cho; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2011-10-21

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  11. Advanced Scientific Computing Research Network Requirements

    SciTech Connect

    Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  12. VACET: Proposed SciDAC2 Visualization and Analytics Center forEnabling Technologies

    SciTech Connect

    Bethel, W.; Johnson, Chris; Hansen, Charles; Parker, Steve; Sanderson, Allen; Silva, Claudio; Tricoche, Xavier; Pascucci, Valerio; Childs, Hank; Cohen, Jonathon; Duchaineau, Mark; Laney, Dan; Lindstrom,Peter; Ahern, Sean; Meredith, Jeremy; Ostrouchov, George; Joy, Ken; Hamann, Bernd

    2006-06-19

    This paper accompanies a poster that is being presented at the SciDAC 2006 meeting in Denver, CO. This project focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an "information big bang," which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision for our Center is to respond directly to that challenge by adapting, extending, creating when necessary, and deploying visualization and data understanding technologies for our science stakeholders. Using an organizational model as a Visualization and Analytics Center for Enabling Technologies (VACET), we are well positioned to be responsive to the needs of a diverse set of scientific stakeholders in a coordinated fashion, using a range of visualization, mathematics, statistics, computer and computational science, and data management technologies.

  13. 76 FR 45786 - Advanced Scientific Computing Advisory Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-01

    ... Advanced Scientific Computing Advisory Committee; Meeting AGENCY: Office of Science, Department of Energy... Computing Advisory Committee (ASCAC). Federal Advisory Committee Act (Pub. L. 92-463, 86 Stat. 770) requires... INFORMATION CONTACT: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown...

  14. 75 FR 57742 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-22

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... Scientific Computing Advisory Committee (ASCAC). Federal Advisory Committee Act (Pub. L. 92-463, 86 Stat. 770...: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown Building;...

  15. 78 FR 64931 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-30

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ACTION... Computing Advisory Committee (ASCAC). This meeting replaces the cancelled ASCAC meeting that was to be held... Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of Energy;...

  16. 76 FR 64330 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-18

    ... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... Computing Advisory Committee (ASCAC). The Federal Advisory Committee Act (Pub. L. 92-463, 86 Stat. 770..., Office of Advanced Scientific Computing Research; SC-21/Germantown Building; U. S. Department of...

  17. 78 FR 50404 - Advanced Scientific Computing Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-19

    ... Advanced Scientific Computing Advisory Committee AGENCY: Office of Science, Department of Energy. ] ACTION... Scientific Computing Advisory Committee (ASCAC). The Federal Advisory Committee Act (Pub. L. 92-463, 86 Stat... INFORMATION CONTACT: Melea Baker, Office of Advanced Scientific Computing Research; SC-21/Germantown...

  18. Making Advanced Computer Science Topics More Accessible through Interactive Technologies

    ERIC Educational Resources Information Center

    Shao, Kun; Maher, Peter

    2012-01-01

    Purpose: Teaching advanced technical concepts in a computer science program to students of different technical backgrounds presents many challenges. The purpose of this paper is to present a detailed experimental pedagogy in teaching advanced computer science topics, such as computer networking, telecommunications and data structures using…

  19. Developing an Advanced Environment for Collaborative Computing

    NASA Technical Reports Server (NTRS)

    Becerra-Fernandez, Irma; Stewart, Helen; DelAlto, Martha; DelAlto, Martha; Knight, Chris

    1999-01-01

    Knowledge management in general tries to organize and make available important know-how, whenever and wherever it is needed. Today, organizations rely on decision-makers to produce "mission critical" decisions that are based on inputs from multiple domains. The ideal decision-maker has a profound understanding of specific domains that influence the decision-making process, coupled with the experience that allows him or her to act quickly and decisively on the information. In addition, learning companies benefit by not repeating costly mistakes and by reducing time-to-market in research and development projects. Group decision-making tools can help companies make better decisions by capturing the knowledge from groups of experts. Furthermore, companies that capture their customers' preferences can improve their customer service, which translates to larger profits. Collaborative computing therefore provides a common communication space, improves sharing of knowledge, provides a mechanism for real-time feedback on the tasks being performed, helps to optimize processes, and results in a centralized knowledge warehouse. This paper presents the research directions of a project which seeks to augment an advanced collaborative web-based environment called Postdoc with workflow capabilities. Postdoc is a "government-off-the-shelf" document management software package developed at NASA Ames Research Center (ARC).

  20. SciDAC's Earth System Grid Center for Enabling Technologies Semiannual Progress Report October 1, 2010 through March 31, 2011

    SciTech Connect

    Williams, Dean N.

    2011-04-02

    This report summarizes work carried out by the Earth System Grid Center for Enabling Technologies (ESG-CET) from October 1, 2010 through March 31, 2011. It discusses ESG-CET highlights for the reporting period, overall progress, period goals, and collaborations, and lists papers and presentations. To learn more about our project and to find previous reports, please visit the ESG-CET Web sites: http://esg-pcmdi.llnl.gov/ and/or https://wiki.ucar.edu/display/esgcet/Home. This report will be forwarded to managers in the Department of Energy (DOE) Scientific Discovery through Advanced Computing (SciDAC) program and the Office of Biological and Environmental Research (OBER), as well as national and international collaborators and stakeholders (e.g., those involved in the Coupled Model Intercomparison Project, phase 5 (CMIP5) for the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5); the Community Earth System Model (CESM); the Climate Science Computational End Station (CCES); SciDAC II: A Scalable and Extensible Earth System Model for Climate Change Science; the North American Regional Climate Change Assessment Program (NARCCAP); the Atmospheric Radiation Measurement (ARM) program; the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA)), and also to researchers working on a variety of other climate model and observation evaluation activities. The ESG-CET executive committee consists of Dean N. Williams, Lawrence Livermore National Laboratory (LLNL); Ian Foster, Argonne National Laboratory (ANL); and Don Middleton, National Center for Atmospheric Research (NCAR). The ESG-CET team is a group of researchers and scientists with diverse domain knowledge, whose home institutions include eight laboratories and two universities: ANL, Los Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL), LLNL, NASA/Jet Propulsion Laboratory (JPL), NCAR, Oak Ridge National

  1. Advanced flight computers for planetary exploration

    NASA Technical Reports Server (NTRS)

    Stephenson, R. Rhoads

    1988-01-01

    Research concerning flight computers for use on interplanetary probes is reviewed. The history of these computers from the Viking mission to the present is outlined. The differences between ground commercial computers and computers for planetary exploration are listed. The development of a computer for the Mariner Mark II comet rendezvous asteroid flyby mission is described. Various aspects of recently developed computer systems are examined, including the Max real-time embedded computer, a hypercube distributed supercomputer, a SAR data processor, a processor for the High Resolution IR Imaging Spectrometer, and a robotic vision multiresolution pyramid machine for processing images obtained by a Mars Rover.

  2. DOE's SciDAC Visualization and Analytics Center for Enabling Technologies -- Strategy for Petascale Visual Data Analysis Success

    SciTech Connect

    Bethel, E Wes; Johnson, Chris; Aragon, Cecilia; Rubel, Oliver; Weber, Gunther; Pascucci, Valerio; Childs, Hank; Bremer, Peer-Timo; Whitlock, Brad; Ahern, Sean; Meredith, Jeremey; Ostrouchov, George; Joy, Ken; Hamann, Bernd; Garth, Christoph; Cole, Martin; Hansen, Charles; Parker, Steven; Sanderson, Allen; Silva, Claudio; Tricoche, Xavier

    2007-10-01

    The focus of this article is on how one group of researchers, the DOE SciDAC Visualization and Analytics Center for Enabling Technologies (VACET), is tackling the daunting task of enabling knowledge discovery through visualization and analytics on some of the world's largest and most complex datasets and on some of the world's largest computational platforms. As a Center for Enabling Technology, VACET's mission is the creation of usable, production-quality visualization and knowledge discovery software infrastructure that runs on large, parallel computer systems at DOE's Open Computing facilities and that provides solutions to challenging visual data exploration and knowledge discovery needs of modern science, particularly the DOE science community.

  3. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  4. Computing Advances in the Teaching of Chemistry.

    ERIC Educational Resources Information Center

    Baskett, W. P.; Matthews, G. P.

    1984-01-01

    Discusses three trends in computer-oriented chemistry instruction: (1) availability of interfaces to integrate computers with experiments; (2) impact of the development of higher resolution graphics and greater memory capacity; and (3) role of videodisc technology on computer assisted instruction. Includes program listings for auto-titration and…

  5. Nuclear Physics in the SciDAC Era

    SciTech Connect

    Robert Edwards

    2009-08-01

    Lattice QCD currently provides our only means of solving QCD (Quantum Chromodynamics) -- the theory of the strong nuclear force -- in the low-energy regime, and is thus of crucial importance for theoretical and experimental research programs in High Energy and Nuclear Physics. Under the SciDAC program, a software infrastructure has been developed for lattice QCD that effectively utilizes the capabilities of the INCITE facilities. These developments have enabled a new generation of Nuclear Physics calculations investigating the spectrum and structure of matter, such as the origin of mass and spin. This software infrastructure is described and recent results are reviewed.
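
    The report itself gives no formulas; for orientation, the computational core of lattice QCD is Monte Carlo sampling of gauge fields on a four-dimensional spacetime lattice. A standard discretization (a textbook sketch, not drawn from this report) is the Wilson plaquette action:

        S_W[U] = \beta \sum_p \left( 1 - \frac{1}{N_c} \,\mathrm{Re}\,\mathrm{Tr}\, U_p \right),
        \qquad \beta = \frac{2 N_c}{g^2},

    where U_p is the ordered product of SU(N_c) link matrices around an elementary plaquette p and g is the bare coupling; configurations are sampled with weight exp(-S_W).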

  6. Advancing crime scene computer forensics techniques

    NASA Astrophysics Data System (ADS)

    Hosmer, Chet; Feldman, John; Giordano, Joe

    1999-02-01

    Computers and network technology have become inexpensive and powerful tools that can be applied to a wide range of criminal activity. Computers have changed the world's view of evidence because computers are used more and more as tools in committing 'traditional crimes' such as embezzlement, theft, extortion and murder. This paper will focus on reviewing the current state-of-the-art of the data recovery and evidence construction tools used in both the field and laboratory for prosecution purposes.

  7. SciDAC's Earth System Grid Center for Enabling Technologies Semi-Annual Progress Report for the Period October 1, 2009 through March 31, 2010

    SciTech Connect

    Williams, Dean N.; Foster, I. T.; Middleton, D. E.; Ananthakrishnan, R.; Siebenlist, F.; Shoshani, A.; Sim, A.; Bell, G.; Drach, R.; Ahrens, J.; Jones, P.; Brown, D.; Chastang, J.; Cinquini, L.; Fox, P.; Harper, D.; Hook, N.; Nienhouse, E.; Strand, G.; West, P.; Wilcox, H.; Wilhelmi, N.; Zednik, S.; Hankin, S.; Schweitzer, R.; Bernholdt, D.; Chen, M.; Miller, R.; Shipman, G.; Wang, F.; Bharathi, S.; Chervenak, A.; Schuler, R.; Su, M.

    2010-04-21

    This report summarizes work carried out by the ESG-CET during the period October 1, 2009 through March 31, 2010. It includes discussion of highlights, overall progress, period goals, collaborations, papers, and presentations. To learn more about our project, and to find previous reports, please visit the Earth System Grid Center for Enabling Technologies (ESG-CET) website. This report will be forwarded to the DOE SciDAC program management, the Office of Biological and Environmental Research (OBER) program management, national and international collaborators and stakeholders (e.g., the Community Climate System Model (CCSM), the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5), the Climate Science Computational End Station (CCES), the SciDAC II: A Scalable and Extensible Earth System Model for Climate Change Science, the North American Regional Climate Change Assessment Program (NARCCAP), and other wide-ranging climate model evaluation activities).

  8. Component-based software for high-performance scientific computing

    NASA Astrophysics Data System (ADS)

    Alexeev, Yuri; Allan, Benjamin A.; Armstrong, Robert C.; Bernholdt, David E.; Dahlgren, Tamara L.; Gannon, Dennis; Janssen, Curtis L.; Kenny, Joseph P.; Krishnan, Manojkumar; Kohl, James A.; Kumfert, Gary; Curfman McInnes, Lois; Nieplocha, Jarek; Parker, Steven G.; Rasmussen, Craig; Windus, Theresa L.

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grassroots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.
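
    To make the component idea concrete, the sketch below mimics the provides/uses port pattern of CCA-style frameworks in Python. The names (IntegratorPort, the wiring step) are hypothetical stand-ins, not the actual CCA or SciDAC ISIC API.

        # Minimal sketch of CCA-style provides/uses ports (hypothetical names).

        class IntegratorPort:
            """Abstract port: a provider must implement integrate()."""
            def integrate(self, f, a, b, n):
                raise NotImplementedError

        class MidpointIntegrator(IntegratorPort):
            """A component that *provides* an IntegratorPort."""
            def integrate(self, f, a, b, n):
                h = (b - a) / n
                return h * sum(f(a + (i + 0.5) * h) for i in range(n))

        class ChemistryDriver:
            """A component that *uses* an IntegratorPort; it never names the provider."""
            def __init__(self):
                self.integrator = None                # wired up by the framework

            def run(self):
                return self.integrator.integrate(lambda x: x * x, 0.0, 1.0, 1000)

        # The 'framework' connects components through ports, not direct imports,
        # so either side can be swapped without touching the other.
        driver = ChemistryDriver()
        driver.integrator = MidpointIntegrator()      # connect uses-port to provides-port
        print(driver.run())                           # ~0.3333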

  9. Aerodynamic optimization studies on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana

    1995-01-01

    The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to equations characterizing a discipline, and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single-discipline aerodynamic optimization is included.
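
    The coupling described, an optimizer repeatedly driving a flow solver, can be sketched as below. The solver call is a placeholder standing in for an expensive CFD solve such as OVERFLOW; the parameters and objective are illustrative only.

        # Sketch of a 'direct' optimization loop around a black-box flow solver.
        import numpy as np

        def solve_flow(shape_params):
            """Placeholder for a CFD solve returning a drag-like objective."""
            target = np.array([0.3, -0.1, 0.05])      # hypothetical optimal shape
            return float(np.sum((shape_params - target) ** 2)) + 0.01

        def fd_gradient(f, x, eps=1e-4):
            """Finite-difference gradient: two solver calls per design variable."""
            g = np.zeros_like(x)
            for i in range(x.size):
                dx = np.zeros_like(x)
                dx[i] = eps
                g[i] = (f(x + dx) - f(x - dx)) / (2.0 * eps)
            return g

        x = np.zeros(3)                               # initial surface-shape parameters
        for _ in range(50):
            x -= 0.4 * fd_gradient(solve_flow, x)     # steepest-descent update
        print("shape:", x, " objective:", solve_flow(x))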

  10. Advances in Monte Carlo computer simulation

    NASA Astrophysics Data System (ADS)

    Swendsen, Robert H.

    2011-03-01

    Since the invention of the Metropolis method in 1953, Monte Carlo methods have been shown to provide an efficient, practical approach to the calculation of physical properties in a wide variety of systems. In this talk, I will discuss some of the advances in the MC simulation of thermodynamic systems, with an emphasis on optimization to obtain a maximum of useful information.
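
    As a reminder of the baseline algorithm the talk builds on, here is a minimal Metropolis sweep for a 2D Ising model. The acceptance rule, accept a move with probability min(1, exp(-dE/kT)), is the 1953 method itself; the lattice size and temperature are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        L, T = 32, 2.3                        # lattice size; temperature (J = kB = 1)
        spins = rng.choice([-1, 1], size=(L, L))

        def metropolis_sweep(s):
            for _ in range(s.size):           # one attempted flip per site, on average
                i, j = rng.integers(L), rng.integers(L)
                nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
                dE = 2.0 * s[i, j] * nb       # energy change if s[i, j] flips
                if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                    s[i, j] = -s[i, j]        # accept the flip

        for _ in range(200):
            metropolis_sweep(spins)
        print("magnetization per spin:", spins.mean())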

  11. Evaluation of Advanced Computing Techniques and Technologies: Reconfigurable Computing

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl

    2003-01-01

    The focus of this project was to survey the technology of reconfigurable computing to determine its level of maturity and suitability for NASA applications; to better understand and assess the effectiveness of the reconfigurable design paradigm utilized within the HAL-15 reconfigurable computer system, which was made available to NASA MSFC for this purpose by Star Bridge Systems, Inc.; and to implement at least one application that would benefit from the performance levels possible with reconfigurable hardware. It was originally proposed that experiments in fault tolerance and dynamic reconfigurability would be performed, but time constraints mandated that these be pursued as future research.

  12. Space data systems: Advanced flight computers

    NASA Technical Reports Server (NTRS)

    Benz, Harry F.

    1991-01-01

    The technical objectives are to develop high-performance, space-qualifiable, onboard computing, storage, and networking technologies. The topics are presented in viewgraph form and include the following: technology challenges; state-of-the-art assessment; program description; relationship to external programs; and cooperation and coordination effort.

  13. Advances in Computer-Supported Learning

    ERIC Educational Resources Information Center

    Neto, Francisco; Brasileiro, Francisco

    2007-01-01

    The Internet and growth of computer networks have eliminated geographic barriers, creating an environment where education can be brought to a student no matter where that student may be. The success of distance learning programs and the availability of many Web-supported applications and multimedia resources have increased the effectiveness of…

  14. Advanced Computing Tools and Models for Accelerator Physics

    SciTech Connect

    Ryne, Robert; Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction, I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large-scale computing in accelerator physics.

  15. Advances in Computationally Modeling Human Oral Bioavailability

    PubMed Central

    Wang, Junmei; Hou, Tingjun

    2015-01-01

    Although significant progress has been made in experimental high throughput screening (HTS) of ADME (absorption, distribution, metabolism, excretion) and pharmacokinetic properties, the ADME and Toxicity (ADME-Tox) in silico modeling is still indispensable in drug discovery as it can guide us to wisely select drug candidates prior to expensive ADME screenings and clinical trials. Compared to other ADME-Tox properties, human oral bioavailability (HOBA) is particularly important but extremely difficult to predict. In this paper, the advances in human oral bioavailability modeling will be reviewed. Moreover, our insights into how to construct more accurate and reliable HOBA QSAR and classification models will also be discussed. PMID:25582307

  16. Interoperable geometry and mesh components for SciDAC applications

    NASA Astrophysics Data System (ADS)

    Tautges, T. J.; Knupp, P.; Kraftcheck, J. A.; Kim, H. J.

    2005-01-01

    Software components for representing and evaluating geometry (TSTTG/CGM) and finite element mesh (TSTTM/MOAB), and a higher-level component for relations between the two (TSTTR/LASSO), have been combined with electromagnetic modelling and optimization techniques, to form a SciDAC shape optimization application. The TSTT data model described in this paper allows components involved in the shape optimization application to be coupled at a variety of levels, from coarse black-box coupling (e.g. to generate a model accelerator cavity using TSTTG) to very fine-grained coupling (e.g. smoothing individual mesh elements based in part on geometric surface normals at mesh vertices). Despite this flexibility, the TSTT data model uses only four fundamental data types (entities, sets, tags, and the interface object itself). We elaborate on the design and implementation of effective components in the context of this application, and show how our simple but flexible data model facilitates these efforts.
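
    To make the four fundamental data types concrete, the toy Python sketch below models entities, sets, tags, and the interface object itself. The class and method names are illustrative only and do not reproduce the actual TSTT (later ITAPS) interfaces.

        # Toy rendition of the TSTT-style data model.

        class MeshInterface:
            """The interface object: owns entities, sets, and tags."""
            def __init__(self):
                self._next = 0
                self.entities = []                  # opaque entity handles (here: ints)
                self.sets = {}                      # set name -> list of entity handles
                self.tags = {}                      # (tag name, handle) -> value

            def create_entity(self):
                handle = self._next
                self._next += 1
                self.entities.append(handle)
                return handle

            def add_to_set(self, name, handle):
                self.sets.setdefault(name, []).append(handle)

            def set_tag(self, name, handle, value):
                self.tags[(name, handle)] = value

            def get_tag(self, name, handle):
                return self.tags[(name, handle)]

        mesh = MeshInterface()
        v = mesh.create_entity()                     # a vertex entity
        mesh.add_to_set("boundary", v)               # sets group entities
        mesh.set_tag("coords", v, (0.0, 1.0, 0.0))   # tags attach data to entities
        print(mesh.get_tag("coords", v), mesh.sets["boundary"])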

  17. Advances in computational studies of energy materials.

    PubMed

    Catlow, C R A; Guo, Z X; Miskufova, M; Shevlin, S A; Smith, A G H; Sokol, A A; Walsh, A; Wilson, D J; Woodley, S M

    2010-07-28

    We review recent developments and applications of computational modelling techniques in the field of materials for energy technologies including hydrogen production and storage, energy storage and conversion, and light absorption and emission. In addition, we present new work on an Sn2TiO4 photocatalyst containing an Sn(II) lone pair, new interatomic potential models for SrTiO3 and GaN, an exploration of defects in the kesterite/stannite-structured solar cell absorber Cu2ZnSnS4, and report details of the incorporation of hydrogen into Ag2O and Cu2O. Special attention is paid to the modelling of nanostructured systems, including ceria (CeO2, mixed Ce(x)O(y) and Ce2O3) and group 13 sesquioxides. We consider applications based on both interatomic potential and electronic structure methodologies; and we illustrate the increasingly quantitative and predictive nature of modelling in this field. PMID:20566517

  18. Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Damevski, Kostadin

    2009-03-30

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  19. State of the Art in EM Field Computation

    SciTech Connect

    Ng, C.; Akcelik, V.; Candel, A.; Chen, S.; Folwell, N.; Ge, L.; Guetz, A.; Jiang, H.; Kabel, A.; Lee, L.-Q.; Li, Z.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Xiao, L.; Ko, K.; /SLAC

    2006-09-25

    This paper presents the advances in electromagnetic (EM) field computation that have been enabled by the US DOE SciDAC Accelerator Science and Technology project which supports the development and application of a suite of electromagnetic codes based on the higher-order finite element method. Implemented on distributed memory supercomputers, this state of the art simulation capability has produced results which are of great interest to accelerator designers and with realism previously not possible with standard codes. Examples from work on the International Linear Collider (ILC) project are described.

  20. Use of advanced computers for aerodynamic flow simulation

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F.

    1980-01-01

    The current and projected use of advanced computers for large-scale aerodynamic flow simulation applied to engineering design and research is discussed. The design use of mature codes run on conventional, serial computers is compared with the fluid research use of new codes run on parallel and vector computers. The role of flow simulations in design is illustrated by the application of a three dimensional, inviscid, transonic code to the Sabreliner 60 wing redesign. Research computations that include a more complete description of the fluid physics by use of Reynolds averaged Navier-Stokes and large-eddy simulation formulations are also presented. Results of studies for a numerical aerodynamic simulation facility are used to project the feasibility of design applications employing these more advanced three dimensional viscous flow simulations.
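
    For reference, the Reynolds-averaged formulation mentioned above rests on decomposing each flow variable into a mean and a fluctuating part (a textbook sketch, not taken from the paper):

        u_i = \bar{u}_i + u_i', \qquad \overline{u_i'} = 0,

    and averaging the momentum equation introduces the Reynolds stresses -\rho \overline{u_i' u_j'}, which must be supplied by a turbulence closure; large-eddy simulation instead filters the equations and models only the unresolved subgrid stresses.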

  1. Computer-Assisted Foreign Language Teaching and Learning: Technological Advances

    ERIC Educational Resources Information Center

    Zou, Bin; Xing, Minjie; Wang, Yuping; Sun, Mingyu; Xiang, Catherine H.

    2013-01-01

    Computer-Assisted Foreign Language Teaching and Learning: Technological Advances highlights new research and an original framework that brings together foreign language teaching, experiments and testing practices that utilize the most recent and widely used e-learning resources. This comprehensive collection of research will offer linguistic…

  2. Advanced Placement Computer Science with Pascal. Volume 2. Experimental Edition.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY.

    This curriculum guide presents 100 lessons for an advanced placement course on programming in Pascal. Some of the topics covered include arrays, sorting, strings, sets, records, computers in society, files, stacks, queues, linked lists, binary trees, searching, hashing, and chaining. Performance objectives, vocabulary, motivation, aim,…

  3. Advanced computational tools for 3-D seismic analysis

    SciTech Connect

    Barhen, J.; Glover, C.W.; Protopopescu, V.A.

    1996-06-01

    The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis, and test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY 1993-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the computational innovations that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities, and in close coordination with other national laboratories and oil industry partners.

  4. Final Technical Report - SciDAC Cooperative Agreement: Center for Wave Interactions with Magnetohydrodynamics

    SciTech Connect

    Schnack, Dalton D.

    2012-07-01

    Final technical report for research performed by Dr. Thomas G. Jenkins in collaboration with Professor Dalton D. Schnack on the SciDAC Cooperative Agreement: Center for Wave Interactions with Magnetohydrodynamics, DE-FC02-06ER54899, for the period 8/15/06 - 8/14/11. This report centers on the Slow MHD physics campaign work performed by Dr. Jenkins while at UW-Madison and then at Tech-X Corporation. To make progress on the problem of how RF-induced currents affect magnetic island evolution in toroidal plasmas, a set of research approaches is outlined. Three approaches can be addressed in parallel: (1) analytically prescribe an additional term in Ohm's law to model the effect of localized ECCD current drive; (2) introduce an additional evolution equation for the Ohm's law source term, establishing an RF source 'box' where information from the RF code couples to the fluid evolution; and (3) carry out a more rigorous analytic calculation treating the additional RF terms in a closure problem. These approaches rely on reinvigorating the computational modeling of resistive and neoclassical tearing modes with present-day versions of the numerical tools. For the RF community, the relevant action item is that RF ray-tracing codes need to be modified so that general three-dimensional spatial information can be obtained. Further, interface efforts between the two codes require work, as does an assessment of the numerical stability properties of the procedures to be used.
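
    Approach (1) can be written schematically. In resistive MHD the Ohm's law is E + v x B = eta J; a localized ECCD source is commonly modeled by subtracting a prescribed driven current from the resistive term (a sketch of the general idea, not the report's exact expression):

        \mathbf{E} + \mathbf{v} \times \mathbf{B} = \eta \left( \mathbf{J} - \mathbf{J}_{\mathrm{RF}} \right),

    where J_RF is an analytically prescribed, spatially localized current density representing the driven current deposited near the island.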

  5. [Activities of Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems: techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. (2) Human-Centered Computing: many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. (3) High Performance Computing and Networking: advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  6. SciDAC Visualization and Analytics Center for Enabling Technologies

    SciTech Connect

    Joy, Kenneth I.

    2014-09-14

    This project focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an "information big bang," which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision for our Center is to respond directly to that challenge by adapting, extending, creating when necessary and deploying visualization and data understanding technologies for our science stakeholders. Using an organizational model as a Visualization and Analytics Center for Enabling Technologies (VACET), we are well positioned to be responsive to the needs of a diverse set of scientific stakeholders in a coordinated fashion using a range of visualization, mathematics, statistics, computer and computational science and data management technologies.

  7. Advances in Numerical Boundary Conditions for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1997-01-01

    Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, minimally dissipative computation algorithms, as well as high-quality numerical boundary treatments. This paper focuses on the recent developments of numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much-needed research in numerical boundary conditions for CAA.

  8. DOE SciDAC's Earth System Grid Center for Enabling Technologies Final Report

    SciTech Connect

    Williams, Dean N.

    2011-09-27

    report at the end of project funding. To continue to serve the climate-science community, we are currently seeking additional funding. Such funding would allow us to maintain and enhance ESGF production and operation of this vital endeavor of cataloging, serving, and analyzing ultra-scale climate science data. At this time, the entire ESG-CET team would like to take this opportunity to sincerely thank our funding agencies in the DOE Scientific Discovery through Advanced Computing (SciDAC) program and the Office of Biological and Environmental Research (OBER) - as well as our national and international collaborators, stakeholders, and partners - for allowing us to work with you and serve the community these past several years.

  9. The Design and Implementation of NASA's Advanced Flight Computing Module

    NASA Technical Reports Server (NTRS)

    Alkakaj, Leon; Straedy, Richard; Jarvis, Bruce

    1995-01-01

    This paper describes a working flight computer Multichip Module developed jointly by JPL and TRW under their respective research programs in a collaborative fashion. The MCM is fabricated by nCHIP and is packaged within a 2 by 4 inch Al package from Coors. This flight computer module is one of three modules under development by NASA's Advanced Flight Computer (AFC) program. Further development of the Mass Memory and the programmable I/O MCM modules will follow. The three building-block modules will then be stacked into a 3D MCM configuration. The mass and volume achieved for the flight computer MCM, 89 grams and 1.5 cubic inches respectively, represent a major enabling technology for future deep space as well as commercial remote sensing applications.

  10. Activities of the Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph

    1994-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. Research at RIACS is currently being done in the following areas: (1) parallel computing; (2) advanced methods for scientific computing; (3) high performance networks; and (4) learning systems. RIACS technical reports are usually preprints of manuscripts that have been submitted to research journals or conference proceedings. A list of these reports for the period January 1, 1994 through December 31, 1994 is in the Reports and Abstracts section of this report.

  11. Advanced computer architecture specification for automated weld systems

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. According to the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed based on a proven multiple-processor architecture with an expandable, distributed-memory, single global bus architecture, containing individual processors which are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, be upgradable and allow on-site modifications.

  12. Advanced computer graphic techniques for laser range finder (LRF) simulation

    NASA Astrophysics Data System (ADS)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

    This paper shows advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots and security applications. The cost of the measurement system is extremely high, therefore a simulation tool is designed. The simulation gives an opportunity to execute algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. An Axis-Aligned Bounding Box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
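
    The AABB test at the heart of such a simulator is commonly implemented with the slab method; a minimal Python version is sketched below (the paper's CUDA variant parallelizes this kind of test across rays). This is a generic rendition, not the authors' code.

        import numpy as np

        def ray_aabb(origin, direction, bmin, bmax):
            """Slab method: distance along the ray to an axis-aligned box, or None.
            Assumes no exactly-zero direction components (handle those separately)."""
            inv = 1.0 / direction
            t1 = (bmin - origin) * inv
            t2 = (bmax - origin) * inv
            t_near = np.max(np.minimum(t1, t2))   # latest entry over the three slabs
            t_far = np.min(np.maximum(t1, t2))    # earliest exit
            if t_near <= t_far and t_far >= 0.0:
                return max(t_near, 0.0)           # hit distance (0 if inside the box)
            return None

        o = np.array([0.0, 0.0, 0.0])
        d = np.array([1.0, 0.2, 0.1])
        d /= np.linalg.norm(d)                    # simulated LRF beam direction
        print(ray_aabb(o, d, np.array([2.0, -1.0, -1.0]), np.array([3.0, 1.0, 1.0])))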

  13. Advanced computer modeling techniques expand belt conveyor technology

    SciTech Connect

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  14. Soft computing in design and manufacturing of advanced materials

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Baaklini, George Y.; Vary, Alex

    1993-01-01

    The potential of fuzzy sets and neural networks, often referred to as soft computing, for aiding in all aspects of manufacturing of advanced materials like ceramics is addressed. In design and manufacturing of advanced materials, it is desirable to find which of the many processing variables contribute most to the desired properties of the material. There is also interest in real time quality control of parameters that govern material properties during processing stages. The concepts of fuzzy sets and neural networks are briefly introduced and it is shown how they can be used in the design and manufacturing processes. These two computational methods are alternatives to other methods such as the Taguchi method. The two methods are demonstrated by using data collected at NASA Lewis Research Center. Future research directions are also discussed.

  15. Final Report for DOE Project: Portal Web Services: Support of DOE SciDAC Collaboratories

    SciTech Connect

    Mary Thomas, PI; Geoffrey Fox, Co-PI; Gannon, D; Pierce, M; Moore, R; Schissel, D; Boisseau, J

    2007-10-01

    Grid portals provide the scientific community with familiar and simplified interfaces to the Grid and Grid services, and it is important to deploy grid portals onto the SciDAC grids and collaboratories. The goal of this project is the research, development and deployment of interoperable portal and web services that can be used on SciDAC National Collaboratory grids. This project has four primary task areas: development of portal systems; management of data collections; DOE science application integration; and development of web and grid services in support of the above activities.

  16. Center for Technology for Advanced Scientific Component Software (TASCS) Consolidated Progress Report July 2006 - March 2009

    SciTech Connect

    Bernholdt, D E; McInnes, L C; Govindaraju, M; Bramley, R; Epperly, T; Kohl, J A; Nieplocha, J; Armstrong, R; Shasharina, S; Sussman, A L; Sottile, M; Damevski, K

    2009-04-14

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  17. Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Kostadin, Damevski

    2015-01-25

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  18. The advanced computational testing and simulation toolkit (ACTS)

    SciTech Connect

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  19. High-Performance Computing for Advanced Smart Grid Applications

    SciTech Connect

    Huang, Zhenyu; Chen, Yousu

    2012-07-06

    The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed and the information infrastructure is fundamentally improved with a large number of smart meters and sensors that produce several orders of magnitude larger amounts of data. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high-performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in the smart grid and the need for high-performance computing, and presents examples of how high-performance computing might be used for future smart grid operation and planning.

  20. Whole-genome CNV analysis: advances in computational approaches

    PubMed Central

    Pirooznia, Mehdi; Goes, Fernando S.; Zandi, Peter P.

    2015-01-01

    Accumulating evidence indicates that DNA copy number variation (CNV) is likely to make a significant contribution to human diversity and also play an important role in disease susceptibility. Recent advances in genome sequencing technologies have enabled the characterization of a variety of genomic features, including CNVs. This has led to the development of several bioinformatics approaches to detect CNVs from next-generation sequencing data. Here, we review recent advances in CNV detection from whole genome sequencing. We discuss the informatics approaches and current computational tools that have been developed as well as their strengths and limitations. This review will assist researchers and analysts in choosing the most suitable tools for CNV analysis as well as provide suggestions for new directions in future development. PMID:25918519

  1. Software for the ACP (Advanced Computer Program) multiprocessor system

    SciTech Connect

    Biel, J.; Areti, H.; Atac, R.; Cook, A.; Fischler, M.; Gaines, I.; Kaliher, C.; Hance, R.; Husby, D.; Nash, T.

    1987-02-02

    Software has been developed for use with the Fermilab Advanced Computer Program (ACP) multiprocessor system. The software was designed to make a system of a hundred independent node processors as easy to use as a single, powerful CPU. Subroutines have been developed by which a user's host program can send data to and get results from the program running in each of his ACP node processors. Utility programs make it easy to compile and link host and node programs, to debug a node program on an ACP development system, and to submit a debugged program to an ACP production system.
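
    The host/node pattern described, send event data out to many independent nodes and collect the results, maps directly onto present-day tools. The sketch below uses a Python process pool as a stand-in; the actual ACP subroutine names are not reproduced in the abstract and are not shown here.

        # Sketch of the ACP-style host/node pattern with a process pool.
        from multiprocessing import Pool

        def node_program(event):
            """Stand-in for the per-event computation a node processor would run."""
            return sum(event) / len(event)

        if __name__ == "__main__":
            events = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]   # hypothetical event data
            with Pool(processes=4) as nodes:                # the 'node processors'
                results = nodes.map(node_program, events)   # host sends data, gets results
            print(results)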

  2. Advances in Computational Stability Analysis of Composite Aerospace Structures

    SciTech Connect

    Degenhardt, R.; Araujo, F. C. de

    2010-09-30

    The European aircraft industry demands reduced development and operating costs. Structural weight reduction by exploitation of structural reserves in composite aerospace structures contributes to this aim; however, it requires accurate and experimentally validated stability analysis of real structures under realistic loading conditions. This paper presents different advances from the area of computational stability analysis of composite aerospace structures which contribute to that field. For stringer-stiffened panels, main results of the finished EU project COCOMAT are given. It investigated the exploitation of reserves in primary fibre composite fuselage structures through an accurate and reliable simulation of postbuckling and collapse. For unstiffened cylindrical composite shells, a proposal for a new design method is presented.

  3. Computer modeling for advanced life support system analysis.

    PubMed

    Drysdale, A

    1997-01-01

    This article discusses the equivalent mass approach to advanced life support system analysis, describes a computer model developed to use this approach, and presents early results from modeling the NASA JSC BioPlex. The model is built using an object-oriented approach and G2, a commercially available modeling package. Cost factor equivalencies are given for the Volosin scenarios. Plant data from NASA KSC and Utah State University (USU) are used, together with configuration data from the BioPlex design effort. Initial results focus on the importance of obtaining high plant productivity with a flight-like configuration. PMID:11540448
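
    The equivalent mass approach folds each non-mass resource cost into a single mass-like figure of merit. A commonly cited form of the equivalent system mass (ESM) metric is sketched below for orientation; the specific cost-factor values used by the paper are the Volosin-scenario equivalencies it mentions.

        \mathrm{ESM} = M + V \cdot V_{\mathrm{eq}} + P \cdot P_{\mathrm{eq}} + C \cdot C_{\mathrm{eq}} + CT \cdot D \cdot CT_{\mathrm{eq}},

    where M is hardware mass, V volume, P power, C cooling load, CT crew time per unit mission time, D mission duration, and each eq-subscripted factor converts its resource into kilograms.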

  4. Advances in parallel computer technology for desktop atmospheric dispersion models

    SciTech Connect

    Bian, X.; Ionescu-Niscov, S.; Fast, J.D.; Allwine, K.J.

    1996-12-31

    Desktop models are those used by analysts with varied backgrounds to perform, for example, air quality assessment and emergency response activities. These models must be robust, well documented, have minimal and well-controlled user inputs, and have clear outputs. Existing coarse-grained parallel computers can provide significant increases in computation speed in desktop atmospheric dispersion modeling without considerable increases in hardware cost. This increased speed will allow for significant improvements to be made in the scientific foundations of these applied models, in the form of more advanced diffusion schemes and better representation of the wind and turbulence fields. This is especially attractive for emergency response applications where speed and accuracy are of utmost importance. This paper describes one particular application of coarse-grained parallel computer technology to a desktop complex terrain atmospheric dispersion modeling system. By comparing performance characteristics of the coarse-grained parallel version of the model with the single-processor version, we will demonstrate that applying coarse-grained parallel computer technology to desktop atmospheric dispersion modeling systems will allow us to address critical issues facing future requirements of this class of dispersion models.

  5. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Report: Exascale Computing Initiative Review

    SciTech Connect

    Reed, Daniel; Berzins, Martin; Pennington, Robert; Sarkar, Vivek; Taylor, Valerie

    2015-08-01

    On November 19, 2014, the Advanced Scientific Computing Advisory Committee (ASCAC) was charged with reviewing the Department of Energy’s conceptual design for the Exascale Computing Initiative (ECI). In particular, this included assessing whether there are significant gaps in the ECI plan or areas that need to be given priority or extra management attention. Given the breadth and depth of previous reviews of the technical challenges inherent in exascale system design and deployment, the subcommittee focused its assessment on organizational and management issues, considering technical issues only as they informed organizational or management priorities and structures. This report presents the observations and recommendations of the subcommittee.

  6. Advances in Parallel Electromagnetic Codes for Accelerator Science and Development

    SciTech Connect

    Ko, Kwok; Candel, Arno; Ge, Lixin; Kabel, Andreas; Lee, Rich; Li, Zenghai; Ng, Cho; Rawat, Vineet; Schussman, Greg; Xiao, Liling; /SLAC

    2011-02-07

    Over a decade of concerted effort in code development for accelerator applications has resulted in a new set of electromagnetic codes which are based on higher-order finite elements for superior geometry fidelity and better solution accuracy. SLAC's ACE3P code suite is designed to harness the power of massively parallel computers to tackle large complex problems with the increased memory and solve them at greater speed. The US DOE supports the computational science R&D under the SciDAC project to improve the scalability of ACE3P, and provides the high performance computing resources needed for the applications. This paper summarizes the advances in the ACE3P set of codes, explains the capabilities of the modules, and presents results from selected applications covering a range of problems in accelerator science and development important to the Office of Science.

  7. Recent Advances in Computational Mechanics of the Human Knee Joint

    PubMed Central

    Kazemi, M.; Dabiri, Y.; Li, L. P.

    2013-01-01

    Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling. PMID:23509602

  8. NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Lee-Rausch, E. M.

    2012-01-01

    Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.

  9. Final Report for "Tech-X Corporation work for the SciDAC Center for Simulation of RF Wave Interactions with Magnetohydrodynamics (SWIM)"

    SciTech Connect

    Jenkins, Thomas G.; Kruger, Scott E.

    2013-03-25

    Work carried out by Tech-X Corporation for the DoE SciDAC Center for Simulation of RF Wave Interactions with Magnetohydrodynamics (SWIM; U.S. DoE Office of Science Award Number DE-FC02-06ER54899) is summarized and is shown to fulfil the project objectives. The Tech-X portion of the SWIM work focused on the development of analytic and computational approaches to study neoclassical tearing modes and their interaction with injected electron cyclotron current drive. Using formalism developed by Hegna, Callen, and Ramos [Phys. Plasmas 16, 112501 (2009); Phys. Plasmas 17, 082502 (2010); Phys. Plasmas 18, 102506 (2011)], analytic approximations for the RF interaction were derived and the numerical methods needed to implement these interactions in the NIMROD extended MHD code were developed. Using the SWIM IPS framework, NIMROD has successfully coupled to GENRAY, an RF ray tracing code; additionally, a numerical control system to trigger the RF injection, adjustment, and shutdown in response to tearing mode activity has been developed. We discuss these accomplishments, as well as prospects for ongoing future research that this work has enabled (which continue in a limited fashion under the SciDAC Center for Extended Magnetohydrodynamic Modeling (CEMM) project and under a baseline theory grant). Associated conference presentations, published articles, and publications in progress are also listed.

  10. Advanced information processing system: Inter-computer communication services

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura; Masotto, Tom; Sims, J. Terry; Whittredge, Roy; Alger, Linda S.

    1991-01-01

    The purpose is to document the functional requirements and detailed specifications for the Inter-Computer Communications Services (ICCS) of the Advanced Information Processing System (AIPS). An introductory section is provided to outline the overall architecture and functional requirements of the AIPS and to present an overview of the ICCS. An overview of the AIPS architecture as well as a brief description of the AIPS software is given. The guarantees of the ICCS are provided, and the ICCS is described as a seven-layered International Standards Organization (ISO) Model. The ICCS functional requirements, functional design, and detailed specifications as well as each layer of the ICCS are also described. A summary of results and suggestions for future work are presented.

  11. 5 CFR 550.404 - Computation of advance payments and evacuation payments; time periods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Computation of advance payments and... Computation of advance payments and evacuation payments; time periods. (a) Payments shall be based on the rate... others, when applicable, shall be made before advance payments or evacuation payments are made....

  12. 5 CFR 550.404 - Computation of advance payments and evacuation payments; time periods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Computation of advance payments and... Computation of advance payments and evacuation payments; time periods. (a) Payments shall be based on the rate... others, when applicable, shall be made before advance payments or evacuation payments are made....

  13. Reliability of an interactive computer program for advance care planning.

    PubMed

    Schubart, Jane R; Levi, Benjamin H; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-06-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson formula 20 [KR-20]=0.83-0.95, and 0.86-0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time. PMID:22512830
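
    The KR-20 statistic reported here can be computed directly from its definition for dichotomous (0/1) items; a short sketch with synthetic data follows. The data-generating step is illustrative, not the study's data.

        import numpy as np

        def kr20(x):
            """Kuder-Richardson formula 20 for a subjects-by-items 0/1 matrix.
            Uses the sample variance of total scores; some texts use the
            population form (ddof=0)."""
            x = np.asarray(x, dtype=float)
            k = x.shape[1]                          # number of items
            p = x.mean(axis=0)                      # per-item proportion of 1s
            total_var = x.sum(axis=1).var(ddof=1)   # variance of total scores
            return (k / (k - 1.0)) * (1.0 - np.sum(p * (1.0 - p)) / total_var)

        rng = np.random.default_rng(0)
        ability = rng.normal(size=50)               # synthetic respondents
        items = (ability[:, None] + rng.normal(size=(50, 10)) > 0).astype(int)
        print(f"KR-20 = {kr20(items):.2f}")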

  14. Reliability of an Interactive Computer Program for Advance Care Planning

    PubMed Central

    Levi, Benjamin H.; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-01-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson formula 20 [KR-20]=0.83–0.95, and 0.86–0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time. PMID:22512830

  15. Optical design and characterization of an advanced computational imaging system

    NASA Astrophysics Data System (ADS)

    Shepard, R. Hamilton; Fernandez-Cull, Christy; Raskar, Ramesh; Shi, Boxin; Barsi, Christopher; Zhao, Hang

    2014-09-01

    We describe an advanced computational imaging system with an optical architecture that enables simultaneous and dynamic pupil-plane and image-plane coding, accommodating several task-specific applications. We assess the optical requirement trades associated with custom and commercial-off-the-shelf (COTS) optics and converge on the development of two low-cost and robust COTS testbeds. The first is a coded-aperture programmable pixel imager employing a digital micromirror device (DMD) for image plane per-pixel oversampling and spatial super-resolution experiments. The second is a simultaneous pupil-encoded and time-encoded imager employing a DMD for pupil apodization or a deformable mirror for wavefront coding experiments. These two testbeds are built to leverage two MIT Lincoln Laboratory focal plane arrays - an orthogonal transfer CCD with non-uniform pixel sampling and on-chip dithering and a digital readout integrated circuit (DROIC) with advanced on-chip per-pixel processing capabilities. This paper discusses the derivation of optical component requirements, optical design metrics, and performance analyses for the two testbeds.

  16. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
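
    To make the probabilistic notion concrete, here is a minimal Monte Carlo sketch of a single-mode reliability calculation with the limit state g = R - S; all distributions are hypothetical, and the advanced methods surveyed in the report are designed to be far more efficient than this brute-force sampling.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 1_000_000

      # Hypothetical limit state g = R - S: random capacity R vs. random load S.
      R = rng.normal(300.0, 30.0, n)   # member capacity, kN
      S = rng.normal(200.0, 40.0, n)   # applied load, kN
      g = R - S

      pf = np.mean(g < 0.0)            # probability of failure, P(g < 0)
      beta = g.mean() / g.std(ddof=1)  # reliability index (exact here, g is normal)
      print(f"P_f ~ {pf:.4f}   beta ~ {beta:.2f}")

      # A deterministic safety-factor check would only report mean(R)/mean(S) = 1.5
      # and say nothing about this failure probability.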

  17. A NATIONAL COLLABORATORY TO ADVANCE THE SCIENCE OF HIGH TEMPERATURE PLASMA PHYSICS FOR MAGNETIC FUSION

    SciTech Connect

    Allen R. Sanderson; Christopher R. Johnson

    2006-08-01

    This report summarizes the work of the University of Utah, which was a member of the National Fusion Collaboratory (NFC) Project funded by the United States Department of Energy (DOE) under the Scientific Discovery through Advanced Computing (SciDAC) program to develop a persistent infrastructure to enable scientific collaboration for magnetic fusion research. A five-year project initiated in 2001, the NFC built on past collaborative work performed within the U.S. fusion community and added a component of computer science research done with the US DOE Office of Science, Office of Advanced Scientific Computing Research. The project was itself a collaboration, uniting fusion scientists from General Atomics, MIT, and PPPL with computer scientists from ANL, LBNL, Princeton University, and the University of Utah to form a coordinated team. The group leveraged existing computer science technology where possible and extended or created new capabilities where required. The complete final report is attached as an addendum. Within the collaboration, the primary technical responsibility of the University of Utah was to develop and deploy an advanced scientific visualization service. To achieve this goal, the SCIRun Problem Solving Environment (PSE) was used on FusionGrid as an advanced scientific visualization service. SCIRun is open source software that gives the user the ability to create complex 3D visualizations and 2D graphics. This capability allows for the exploration of complex simulation results and the comparison of simulation and experimental data. SCIRun on FusionGrid gives the scientist a no-license-cost visualization capability that rivals present-day commercial visualization packages. To accelerate the usage of SCIRun within the fusion community, a stand-alone application built on top of SCIRun was developed and deployed. This application, FusionViewer, allows users who are unfamiliar with SCIRun to quickly create

  18. TerraFERMA: Harnessing Advanced Computational Libraries in Earth Science

    NASA Astrophysics Data System (ADS)

    Wilson, C. R.; Spiegelman, M.; van Keken, P.

    2012-12-01

    Many important problems in Earth sciences can be described by non-linear coupled systems of partial differential equations. These "multi-physics" problems include thermo-chemical convection in Earth and planetary interiors, interactions of fluids and magmas with the Earth's mantle and crust and coupled flow of water and ice. These problems are of interest to a large community of researchers but are complicated to model and understand. Much of this complexity stems from the nature of multi-physics where small changes in the coupling between variables or constitutive relations can lead to radical changes in behavior, which in turn affect critical computational choices such as discretizations, solvers and preconditioners. To make progress in understanding such coupled systems requires a computational framework where multi-physics problems can be described at a high-level while maintaining the flexibility to easily modify the solution algorithm. Fortunately, recent advances in computational science provide a basis for implementing such a framework. Here we present the Transparent Finite Element Rapid Model Assembler (TerraFERMA), which leverages several advanced open-source libraries for core functionality. FEniCS (fenicsproject.org) provides a high level language for describing the weak forms of coupled systems of equations, and an automatic code generator that produces finite element assembly code. PETSc (www.mcs.anl.gov/petsc) provides a wide range of scalable linear and non-linear solvers that can be composed into effective multi-physics preconditioners. SPuD (amcg.ese.ic.ac.uk/Spud) is an application neutral options system that provides both human and machine-readable interfaces based on a single xml schema. Our software integrates these libraries and provides the user with a framework for exploring multi-physics problems. A single options file fully describes the problem, including all equations, coefficients and solver options. Custom compiled applications are
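
    To illustrate the kind of high-level description FEniCS enables, here is a minimal sketch of a single (uncoupled) Poisson problem using the legacy dolfin Python API; TerraFERMA's actual multi-physics systems and options files are considerably richer.

      from fenics import *  # legacy FEniCS/dolfin API

      # Poisson problem -laplace(u) = f on the unit square with u = 0 on the
      # boundary, written at the level of its weak form: find u such that
      # a(u, v) = L(v) for all test functions v.
      mesh = UnitSquareMesh(32, 32)
      V = FunctionSpace(mesh, "P", 1)

      u, v = TrialFunction(V), TestFunction(V)
      f = Constant(1.0)
      a = dot(grad(u), grad(v)) * dx   # bilinear form, transcribed from the math
      L = f * v * dx                   # linear form

      bc = DirichletBC(V, Constant(0.0), "on_boundary")
      u_h = Function(V)
      solve(a == L, u_h, bc)           # automatic assembly; linear solve via PETSc
      print(u_h.vector().norm("l2"))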

  19. Advances in computer technology: impact on the practice of medicine.

    PubMed

    Groth-Vasselli, B; Singh, K; Farnsworth, P N

    1995-01-01

    Advances in computer technology provide a wide range of applications which are revolutionizing the practice of medicine. The development of new software for the office creates a web of communication among physicians, staff members, health care facilities and associated agencies. This provides the physician with the prospect of a paperless office. At the other end of the spectrum, the development of 3D work stations and software based on computational chemistry permits visualization of protein molecules involved in disease. Computer assisted molecular modeling has been used to construct working 3D models of lens alpha-crystallin. The 3D structure of alpha-crystallin is basic to our understanding of the molecular mechanisms involved in lens fiber cell maturation, stabilization of the inner nuclear region, the maintenance of lens transparency and cataractogenesis. The major component of the high molecular weight aggregates that occur during cataractogenesis is alpha-crystallin subunits. Subunits of alpha-crystallin occur in other tissues of the body. In the central nervous system accumulation of these subunits in the form of dense inclusion bodies occurs in pathological conditions such as Alzheimer's disease, Huntington's disease, multiple sclerosis and toxoplasmosis (Iwaki, Wisniewski et al., 1992), as well as neoplasms of astrocyte origin (Iwaki, Iwaki, et al., 1991). Also cardiac ischemia is associated with an increased alpha B synthesis (Chiesi, Longoni et al., 1990). On a more global level, the molecular structure of alpha-crystallin may provide information pertaining to the function of small heat shock proteins, hsp, in maintaining cell stability under the stress of disease.

  20. SciDAC Visualization and Analytics Center for Enabling Technology

    SciTech Connect

    Bethel, E. Wes; Johnson, Chris; Joy, Ken; Ahern, Sean; Pascucci, Valerio; Childs, Hank; Cohen, Jonathan; Duchaineau, Mark; Hamann, Bernd; Hansen, Charles; Laney, Dan; Lindstrom, Peter; Meredith, Jeremy; Ostrouchov, George; Parker, Steven; Silva, Claudio; Sanderson, Allen; Tricoche, Xavier

    2006-11-28

    The SciDAC2 Visualization and Analytics Center for Enabling Technologies (VACET) began operation on 10/1/2006. This document, dated 11/27/2006, is the first version of the VACET project management plan. It was requested by and delivered to ASCR/DOE. It outlines the Center's accomplishments in the first six weeks of operation along with broad objectives for the coming 12-24 months.

  1. Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing

    SciTech Connect

    Fletcher, James H.; Cox, Philip; Harrington, William J; Campbell, Joseph L

    2013-09-03

    ABSTRACT Project Title: Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing PROJECT OBJECTIVE The objective of the project was to advance portable fuel cell system technology towards the commercial targets of power density, energy density and lifetime. These targets were laid out in the DOE’s R&D roadmap to develop an advanced direct methanol fuel cell power supply that meets commercial entry requirements. Such a power supply will enable mobile computers to operate non-stop, unplugged from the wall power outlet, by using the high energy density of methanol fuel contained in a replaceable fuel cartridge. Specifically, this project focused on balance-of-plant component integration and miniaturization, as well as extensive component, subassembly and integrated system durability and validation testing. This work has resulted in a pre-production power supply design and a prototype that meet the rigorous demands of consumer electronic applications. PROJECT TASKS The proposed work plan was designed to meet the project objectives, which corresponded directly with the objectives outlined in the Funding Opportunity Announcement: to engineer the fuel cell balance-of-plant and packaging to meet the needs of consumer electronic systems, specifically at power levels required for mobile computing. UNF took existing balance-of-plant component technologies, developed under its current US Army CERDEC project and under a previous DOE project completed by PolyFuel, and further refined them to miniaturize and integrate their functionality, increasing system power density and energy density. The benefits of UNF’s novel passive water recycling MEA (membrane electrode assembly) and the simplified system architecture it enabled formed the foundation of the design approach. The package design was hardened to address orientation independence, shock, vibration, and environmental requirements. Fuel cartridge and fuel subsystems were improved to ensure effective fuel

  2. Quantitative Computed Tomography and Image Analysis for Advanced Muscle Assessment

    PubMed Central

    Edmunds, Kyle Joseph; Gíslason, Magnus K.; Arnadottir, Iris D.; Marcante, Andrea; Piccione, Francesco; Gargiulo, Paolo

    2016-01-01

    Medical imaging is of particular interest in the field of translational myology, as the extant literature describes the utilization of a wide variety of techniques to non-invasively recapitulate and quantify various internal and external tissue morphologies. In the clinical context, medical imaging remains a vital tool for diagnostics and investigative assessment. This review outlines the results from several investigations on the use of computed tomography (CT) and image analysis techniques to assess muscle conditions and degenerative processes due to aging or pathological conditions. Herein, we detail the acquisition of spiral CT images and the use of advanced image analysis tools to characterize muscles in 2D and 3D. Results from these studies recapitulate changes in tissue composition within muscles, as visualized by the association of tissue types to specified Hounsfield Unit (HU) values for fat, loose connective tissue or atrophic muscle, and normal muscle, including fascia and tendon. We show how results from these analyses can be presented as both average HU values and compositions with respect to total muscle volumes, demonstrating the reliability of these tools to monitor, assess and characterize muscle degeneration. PMID:27478562
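
    A minimal Python sketch of the HU-based composition analysis described above; the HU windows and voxel data are placeholders, not the values used in the reviewed studies.

      import numpy as np

      # Illustrative HU windows per tissue class; the exact cut-offs used in the
      # reviewed studies may differ, so treat these as assumptions.
      HU_CLASSES = {
          "fat":              (-200, -10),
          "loose_connective": (-9, 40),
          "normal_muscle":    (41, 100),
      }

      def composition(hu):
          """Fraction of voxels falling in each tissue class for one muscle ROI."""
          hu = np.asarray(hu)
          return {name: float(np.mean((hu >= lo) & (hu <= hi)))
                  for name, (lo, hi) in HU_CLASSES.items()}

      roi = np.random.default_rng(1).integers(-150, 120, size=10_000)  # fake CT voxels
      for tissue, fraction in composition(roi).items():
          print(f"{tissue:18s} {100 * fraction:5.1f} % of muscle volume")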

  3. Recent advances in data assimilation in computational geodynamic models

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, Alik

    2010-05-01

    The QRV method was most recently introduced in geodynamic modelling (Ismail-Zadeh et al., 2007, 2008; Tantsyrev, 2008; Glisovic et al., 2009). The advances in computational geodynamics and in data assimilation attract the interest of the community dealing with lithosphere, mantle and core dynamics.

  4. Important advances in technology and unique applications to cardiovascular computed tomography.

    PubMed

    Chaikriangkrai, Kongkiat; Choi, Su Yeon; Nabi, Faisal; Chang, Su Min

    2014-01-01

    For the past decade, multidetector cardiac computed tomography and its main application, coronary computed tomography angiography, have been established as a noninvasive technique for anatomical assessment of coronary arteries. This new era of coronary artery evaluation by coronary computed tomography angiography has arisen from the rapid advancement in computed tomography technology, which has led to massive diagnostic and prognostic clinical studies in various patient populations. This article gives a brief overview of current multidetector cardiac computed tomography systems, developing cardiac computed tomography technologies in both hardware and software fields, innovative radiation exposure reduction measures, multidetector cardiac computed tomography functional studies, and their newer clinical applications beyond coronary computed tomography angiography. PMID:25574342

  5. Some Hail 'Computational Science' as Biggest Advance Since Newton, Galileo.

    ERIC Educational Resources Information Center

    Turner, Judith Axler

    1987-01-01

    Computational science is defined as science done on a computer. A computer can serve as a laboratory for researchers who cannot experiment with their subjects, and as a calculator for those who otherwise might need centuries to solve some problems mathematically. The National Science Foundation's support of supercomputers is discussed. (MLW)

  6. First 3 years of operation of RIACS (Research Institute for Advanced Computer Science) (1983-1985)

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    The focus of the Research Institute for Advanced Computer Science (RIACS) is to explore matches between advanced computing architectures and the processes of scientific research. An architecture evaluation of the MIT static dataflow machine, specification of a graphical language for expressing distributed computations, and specification of an expert system for aiding in grid generation for two-dimensional flow problems were initiated. Research projects for 1984 and 1985 are summarized.

  7. Generalized Advanced Propeller Analysis System (GAPAS). Volume 2: Computer program user manual

    NASA Technical Reports Server (NTRS)

    Glatt, L.; Crawford, D. R.; Kosmatka, J. B.; Swigart, R. J.; Wong, E. W.

    1986-01-01

    The Generalized Advanced Propeller Analysis System (GAPAS) computer code is described. GAPAS was developed to analyze advanced technology multi-bladed propellers which operate on aircraft with speeds up to Mach 0.8 and altitudes up to 40,000 feet. GAPAS includes technology for analyzing aerodynamic, structural, and acoustic performance of propellers. The computer code was developed for the CDC 7600 computer and is currently available for industrial use on the NASA Langley computer. A description of all the analytical models incorporated in GAPAS is included. Sample calculations are also described, as well as user requirements for modifying the analysis system. Computer system core requirements and running times are also discussed.

  8. Computer architectures for computational physics work done by Computational Research and Technology Branch and Advanced Computational Concepts Group

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Slides are reproduced that describe the importance of having high performance number crunching and graphics capability. They also indicate the types of research and development underway at Ames Research Center to ensure that, in the near term, Ames is a smart buyer and user, and in the long-term that Ames knows the best possible solutions for number crunching and graphics needs. The drivers for this research are real computational physics applications of interest to Ames and NASA. They are concerned with how to map the applications, and how to maximize the physics learned from the results of the calculations. The computer graphics activities are aimed at getting maximum information from the three-dimensional calculations by using the real time manipulation of three-dimensional data on the Silicon Graphics workstation. Work is underway on new algorithms that will permit the display of experimental results that are sparse and random, the same way that the dense and regular computed results are displayed.

  9. Advanced Scientific Computing Environment Team new scientific database management task

    SciTech Connect

    Church, J.P.; Roberts, J.C.; Sims, R.N.; Smetana, A.O.; Westmoreland, B.W.

    1991-06-01

    The mission of the ASCENT Team is to continually keep pace with, evaluate, and select emerging computing technologies to define and implement prototypic scientific environments that maximize the ability of scientists and engineers to manage scientific data. These environments are to be implemented in a manner consistent with the site computing architecture and standards and NRTSC/SCS strategic plans for scientific computing. The major trends in computing hardware and software technology clearly indicate that the future "computer" will be a network environment that comprises supercomputers, graphics boxes, mainframes, clusters, workstations, terminals, and microcomputers. This "network computer" will have an architecturally transparent operating system allowing the applications code to run on any box supplying the required computing resources. The environment will include a distributed database and database managing system(s) that permits use of relational, hierarchical, object oriented, GIS, et al, databases. To reach this goal requires a stepwise progression from the present assemblage of monolithic applications codes running on disparate hardware platforms and operating systems. The first steps include converting from the existing JOSHUA system to a new J80 system that complies with modern language standards, development of a new J90 prototype to provide JOSHUA capabilities on Unix platforms, development of portable graphics tools to greatly facilitate preparation of input and interpretation of output; and extension of "Jvv" concepts and capabilities to distributed and/or parallel computing environments.

  10. The Advance of Computing from the Ground to the Cloud

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2009-01-01

    A trend toward the abstraction of computing platforms that has been developing in the broader IT arena over the last few years is just beginning to make inroads into the library technology scene. Cloud computing offers for libraries many interesting possibilities that may help reduce technology costs and increase capacity, reliability, and…

  11. Computational Enzyme Design: Advances, hurdles and possible ways forward

    PubMed Central

    Linder, Mats

    2012-01-01

    This mini review addresses recent developments in computational enzyme design. Successful protocols as well as known issues and limitations are discussed from an energetic perspective. It will be argued that improved results can be obtained by including a dynamic treatment in the design protocol. Finally, a molecular dynamics-based approach for evaluating and refining computational designs is presented. PMID:24688650

  12. SciDAC Center for Plasma Edge Simulation

    SciTech Connect

    Lin, Zhihong

    2013-12-17

    This project, with total funding of $592,998 over six years, has partially supported four postdoctoral researchers at the University of California, Irvine (UCI). The UCI team has formulated electrostatic and electromagnetic global gyrokinetic particle simulation models with kinetic electrons, implemented these models in the edge code XGC1, performed benchmarks between GTC and XGC1, developed computational tools for gyrokinetic particle simulation in tokamak edge geometry, and initiated a preparatory study of edge turbulence using the GTC code. The research results have been published in 12 papers and presented at many international and national conferences.

  13. How the Common Component Architecture Advances Computational Science

    SciTech Connect

    Kumfert, G; Bernholdt, D; Epperly, T; Kohl, J; McInnes, L C; Parker, S; Ray, J

    2006-06-19

    Computational chemists are using Common Component Architecture (CCA) technology to increase the parallel scalability of their application ten-fold. Combustion researchers are publishing science faster because the CCA manages software complexity for them. Both the solver and meshing communities in SciDAC are converging on community interface standards as a direct response to the novel level of interoperability that CCA presents. Yet, there is much more to do before component technology becomes mainstream computational science. This paper highlights the impact that the CCA has made on scientific applications, conveys some lessons learned from five years of the SciDAC program, and previews where applications could go with the additional capabilities that the CCA has planned for SciDAC 2.
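
    The following toy Python sketch illustrates the provides/uses-port idea behind the CCA; it is only an analogy (the real CCA defines interfaces in SIDL with Babel-generated bindings), and every name in it is invented for illustration.

      # Toy analogue of CCA-style components: a framework wires a component's
      # "uses" port to another component's "provides" port.

      class Framework:
          def __init__(self):
              self._ports = {}
          def add_provides_port(self, name, impl):
              self._ports[name] = impl
          def get_port(self, name):              # the "uses" side of the contract
              return self._ports[name]

      class DirectSolver:
          """Component providing a SolverPort with a single solve(a, b) method."""
          def solve(self, a, b):
              return b / a                       # trivial scalar 'solve'

      class Simulation:
          """Component that uses a SolverPort without knowing which solver it got."""
          def __init__(self, fw):
              self.solver = fw.get_port("SolverPort")
          def run(self):
              return self.solver.solve(4.0, 2.0)

      fw = Framework()
      fw.add_provides_port("SolverPort", DirectSolver())
      print(Simulation(fw).run())                # swap solvers with no code changes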

  14. Advances in computational design and analysis of airbreathing propulsion systems

    NASA Technical Reports Server (NTRS)

    Klineberg, John M.

    1989-01-01

    The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview of several NASA Lewis research efforts is provided that are contributing toward the long-range goal of a numerical test-cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.

  15. The Impact of Advance Organizers upon Students' Achievement in Computer-Assisted Video Instruction.

    ERIC Educational Resources Information Center

    Saidi, Houshmand

    1994-01-01

    Describes a study of undergraduates that was conducted to determine the impact of advance organizers on students' achievement in computer-assisted video instruction (CAVI). Treatments of the experimental and control groups are explained, and results indicate that advance organizers do not facilitate near-transfer of rule-learning in CAVI.…

  16. Some recent advances in computational aerodynamics for helicopter applications

    NASA Technical Reports Server (NTRS)

    Mccroskey, W. J.; Baeder, J. D.

    1985-01-01

    The growing application of computational aerodynamics to nonlinear helicopter problems is outlined, with particular emphasis on several recent quasi-two-dimensional examples that used the thin-layer Navier-Stokes equations and an eddy-viscosity model to approximate turbulence. Rotor blade section characteristics can now be calculated accurately over a wide range of transonic flow conditions. However, a finite-difference simulation of the complete flow field about a helicopter in forward flight is not currently feasible, despite the impressive progress that is being made in both two and three dimensions. The principal limitations are today's computer speeds and memories, algorithm and solution methods, grid generation, vortex modeling, structural and aerodynamic coupling, and a shortage of engineers who are skilled in both computational fluid dynamics and helicopter aerodynamics and dynamics.

  17. A Computationally Based Approach to Homogenizing Advanced Alloys

    SciTech Connect

    Jablonski, P D; Cowen, C J

    2011-02-27

    We have developed a computationally based approach to optimizing the homogenization heat treatment of complex alloys. The Scheil module within the Thermo-Calc software is used to predict the as-cast segregation present within alloys, and DICTRA (Diffusion Controlled TRAnsformations) is used to model the homogenization kinetics as a function of time, temperature and microstructural scale. We discuss this approach as applied both to Ni-based superalloys and to the computationally more complex case of alloys that solidify with more than one matrix phase as a result of segregation, as is typically observed in martensitic steels. With these alloys it is doubly important to homogenize them correctly, especially at the laboratory scale, since they are austenitic at high temperature and thus constituent elements will diffuse slowly. The computationally designed heat treatment and its subsequent verification on real castings are presented.
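
    As a back-of-the-envelope companion to the DICTRA calculations described above, the sketch below estimates a homogenization hold time from an Arrhenius diffusivity and the rule-of-thumb diffusion length sqrt(D*t); all material parameters are hypothetical, and DICTRA itself solves the full multicomponent diffusion problem rather than this scalar estimate.

      import math

      def diffusivity(d0, q, temp_k, r_gas=8.314):
          """Arrhenius diffusivity D = D0 * exp(-Q / (R*T))."""
          return d0 * math.exp(-q / (r_gas * temp_k))

      def homogenization_time(arm_spacing_m, d0, q, temp_k):
          """Time for the diffusion length sqrt(D*t) to span half the secondary
          dendrite arm spacing - a crude criterion; DICTRA instead resolves the
          full concentration profile versus time, temperature and scale."""
          x = 0.5 * arm_spacing_m
          return x * x / diffusivity(d0, q, temp_k)

      # Hypothetical values for a slow substitutional solute in a Ni-base alloy
      t = homogenization_time(50e-6, d0=1.0e-4, q=280e3, temp_k=1473.0)
      print(f"estimated hold: {t / 3600.0:.0f} h at 1200 C")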

  18. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    SciTech Connect

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin; Pascucci, Valerio; Gamblin, Todd; Brunst, Holger

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  19. Advanced Simulation and Computing Co-Design Strategy

    SciTech Connect

    Ang, James A.; Hoang, Thuc T.; Kelly, Suzanne M.; McPherson, Allen; Neely, Rob

    2015-11-01

    This ASC Co-design Strategy lays out the full continuum and components of the co-design process, based on what we have experienced thus far and what we wish to do more in the future to meet the program’s mission of providing high performance computing (HPC) and simulation capabilities for NNSA to carry out its stockpile stewardship responsibility.

  20. Advanced Computer Image Generation Techniques Exploiting Perceptual Characteristics. Final Report.

    ERIC Educational Resources Information Center

    Stenger, Anthony J.; And Others

    This study suggests and identifies computer image generation (CIG) algorithms for visual simulation that improve the training effectiveness of CIG simulators and identifies areas of basic research in visual perception that are significant for improving CIG technology. The first phase of the project entailed observing three existing CIG simulators.…

  1. MAX - An advanced parallel computer for space applications

    NASA Technical Reports Server (NTRS)

    Lewis, Blair F.; Bunker, Robert L.

    1991-01-01

    MAX is a fault-tolerant multicomputer hardware and software architecture designed to meet the needs of NASA spacecraft systems. It consists of conventional computing modules (computers) connected via a dual network topology. One network is used to transfer data among the computers and between computers and I/O devices. This network's topology is arbitrary. The second network operates as a broadcast medium for operating system synchronization messages and supports the operating system's Byzantine resilience. A fully distributed operating system supports multitasking in an asynchronous event and data driven environment. A large grain dataflow paradigm is used to coordinate the multitasking and provide easy control of concurrency. It is the basis of the system's fault tolerance and allows both static and dynamic location of tasks. Redundant execution of tasks with software voting of results may be specified for critical tasks. The dataflow paradigm also supports simplified software design, test and maintenance. A unique feature is a method for reliably patching code in an executing dataflow application.
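
    A toy Python sketch of the two ideas named above - a dataflow firing rule and redundant execution with software voting; it illustrates the concepts only and is not MAX's operating system.

      from collections import Counter

      def vote(results):
          """Software vote over redundant executions; masks a minority fault."""
          value, count = Counter(results).most_common(1)[0]
          if count <= len(results) // 2:
              raise RuntimeError("no majority - unrecoverable fault")
          return value

      def fire(task, inputs, replicas=3):
          """Dataflow-style firing rule: run a task only when every input token
          has arrived, execute it redundantly, and vote on the results."""
          if any(v is None for v in inputs.values()):
              raise ValueError("task not fireable: missing input token")
          return vote([task(**inputs) for _ in range(replicas)])

      def attitude_update(rate, dt):     # a stand-in 'critical task'
          return rate * dt

      print(fire(attitude_update, {"rate": 0.02, "dt": 0.1}))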

  2. Teaching Advanced Concepts in Computer Networks: VNUML-UM Virtualization Tool

    ERIC Educational Resources Information Center

    Ruiz-Martinez, A.; Pereniguez-Garcia, F.; Marin-Lopez, R.; Ruiz-Martinez, P. M.; Skarmeta-Gomez, A. F.

    2013-01-01

    In the teaching of computer networks the main problem that arises is the high price and limited number of network devices the students can work with in the laboratories. Nowadays, with virtualization we can overcome this limitation. In this paper, we present a methodology that allows students to learn advanced computer network concepts through…

  3. Project T.E.A.M. (Technical Education Advancement Modules). Introduction to Computers.

    ERIC Educational Resources Information Center

    Ellis, Brenda

    This instructional guide, one of a series developed by the Technical Education Advancement Modules (TEAM) project, is a 3-hour introduction to computers. The purpose is to develop the following competencies: (1) orientation to data processing; (2) use of data entry devices; (3) use of computer menus; and (4) entry of data with accuracy and…

  4. Advanced Computational Thermal Studies and their Assessment for Supercritical-Pressure Reactors (SCRs)

    SciTech Connect

    D. M. McEligot; J. Y. Yoo; J. S. Lee; S. T. Ro; E. Lurien; S. O. Park; R. H. Pletcher; B. L. Smith; P. Vukoslavcevic; J. M. Wallace

    2009-04-01

    The goal of this laboratory/university collaboration of coupled computational and experimental studies is the improvement of predictive methods for supercritical-pressure reactors. The general objective is to develop the supporting knowledge of advanced computational techniques needed for technology development of these concepts and their safety systems.

  5. The ergonomics of computer aided design within advanced manufacturing technology.

    PubMed

    John, P A

    1988-03-01

    Many manufacturing companies have now awakened to the significance of computer aided design (CAD), although the majority of them have only been able to purchase computerised draughting systems of which only a subset produce direct manufacturing data. Such companies are moving steadily towards the concept of computer integrated manufacture (CIM), and this demands CAD to address more than draughting. CAD architects are thus having to rethink the basic specification of such systems, although they typically suffer from an insufficient understanding of the design task and have consequently been working with inadequate specifications. It is at this fundamental level that ergonomics has much to offer, making its contribution by encouraging user-centred design. The discussion considers the relationships between CAD and: the design task; the organisation and people; creativity; and artificial intelligence. It finishes with a summary of the contribution of ergonomics.

  6. Cogeneration computer model assessment: Advanced cogeneration research study

    NASA Technical Reports Server (NTRS)

    Rosenberg, L.

    1983-01-01

    Cogeneration computer simulation models were assessed in order to recommend the most desirable models or their components for use by the Southern California Edison Company (SCE) in evaluating potential cogeneration projects. Existing cogeneration modeling capabilities are described, preferred models are identified, and an approach to the development of a code which will best satisfy SCE requirements is recommended. Five models (CELCAP, COGEN 2, CPA, DEUS, and OASIS) are recommended for further consideration.

  7. National facility for advanced computational science: A sustainable path to scientific discovery

    SciTech Connect

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  8. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments so as to benefit from parallelism techniques and HPC capabilities.
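
    As a minimal illustration of the parallelism techniques discussed, the Python sketch below distributes a trivial per-read computation (GC content) across worker processes; real genomic pipelines stream far larger inputs and use cluster- or cloud-level schedulers, and all data here is made up.

      from multiprocessing import Pool

      def gc_content(read):
          """Fraction of G/C bases in one sequencing read."""
          return (read.count("G") + read.count("C")) / len(read)

      if __name__ == "__main__":
          # Stand-in for genome-scale input; real pipelines read from disk
          reads = ["ACGTGGCA", "TTGCAGCT", "GGGCCCAA", "ATATATCG"] * 250_000
          with Pool(processes=8) as pool:                 # one worker per core
              gc = pool.map(gc_content, reads, chunksize=10_000)
          print(f"mean GC content: {sum(gc) / len(gc):.3f}")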

  9. Modeling emergency department operations using advanced computer simulation systems.

    PubMed

    Saunders, C E; Makens, P K; Leblanc, L J

    1989-02-01

    We developed a computer simulation model of emergency department operations using simulation software. This model uses multiple levels of preemptive patient priority; assigns each patient to an individual nurse and physician; incorporates all standard tests, procedures, and consultations; and allows patient service processes to proceed simultaneously, sequentially, repetitively, or a combination of these. Selected input data, including the number of physicians, nurses, and treatment beds, and the blood test turnaround time, then were varied systematically to determine their simulated effect on patient throughput time, selected queue sizes, and rates of resource utilization. Patient throughput time varied directly with laboratory service times and inversely with the number of physician or nurse servers. Resource utilization rates varied inversely with resource availability, and patient waiting time and patient throughput time varied indirectly with the level of patient acuity. The simulation can be animated on a computer monitor, showing simulated patients, specimens, and staff members moving throughout the ED. Computer simulation is a potentially useful tool that can help predict the results of changes in the ED system without actually altering it and may have implications for planning, optimizing resources, and improving the efficiency and quality of care.
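
    The queueing core of such a model can be sketched in a few lines of Python; this toy version (Poisson arrivals, exponential service, earliest-free physician) omits the priorities, nurses, labs and consultations of the published model, and all rates are hypothetical.

      import heapq
      import random

      random.seed(42)

      def simulate(n_physicians, arrivals_per_h=4.0, mean_service_h=0.5,
                   n_patients=2000):
          """Bare-bones ED queueing core: each arriving patient is served by
          the earliest-free physician; returns mean door-to-discharge time."""
          t = 0.0
          free_at = [0.0] * n_physicians       # heap of times each doctor frees up
          total = 0.0
          for _ in range(n_patients):
              t += random.expovariate(arrivals_per_h)       # next arrival time
              start = max(t, heapq.heappop(free_at))        # wait if all busy
              finish = start + random.expovariate(1.0 / mean_service_h)
              heapq.heappush(free_at, finish)
              total += finish - t                           # door-to-discharge
          return total / n_patients

      for c in (2, 3, 4):
          print(f"{c} physicians -> mean throughput time {simulate(c):.2f} h")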

  10. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments so as to benefit from parallelism techniques and HPC capabilities. PMID:26604801

  11. Parallel computing in genomic research: advances and applications

    PubMed Central

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today’s genomic experiments have to process the so-called “biological big data” that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments so as to benefit from parallelism techniques and HPC capabilities. PMID:26604801

  12. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    SciTech Connect

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria; Zhou, Ning; Huang, Zhenyu

    2012-07-26

    Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective of maximizing the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and the performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases better than, that of the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.
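
    A toy version of the underlying auction problem, written as a linear program in Python/SciPy: maximize bid-weighted welfare subject to line-flow limits. The PTDF matrix, prices and limits are invented for illustration; the real formulation (and the NDS solver) is far larger and more involved.

      import numpy as np
      from scipy.optimize import linprog

      # Toy FTR auction: award MW amounts x to 3 bids while keeping simultaneous
      # line flows (ptdf @ x) within limits in both directions.
      bid_price = np.array([12.0, 8.0, 5.0])            # $/MW offered per FTR
      ptdf = np.array([[0.6, 0.3, -0.2],                # MW on line 1 per MW awarded
                       [0.2, 0.5,  0.4]])               # MW on line 2 per MW awarded
      limit = np.array([100.0, 80.0])

      res = linprog(c=-bid_price,                       # linprog minimizes -> negate
                    A_ub=np.vstack([ptdf, -ptdf]),      # |flow| <= limit on each line
                    b_ub=np.concatenate([limit, limit]),
                    bounds=[(0.0, 150.0)] * 3)          # per-bid requested MW caps
      print("awards (MW):", np.round(res.x, 1), " welfare ($):", round(-res.fun, 1))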

  13. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A; Wickett, M E; Duffy, P B; Rotman, D A

    2005-03-03

    The Center for Applied Scientific Computing (CASC) and the LLNL Atmospheric Science Division (ASD) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. As part of LLNL's participation in DOE's Scientific Discovery through Advanced Computing (SciDAC) program, members of CASC and ASD are collaborating with other DOE labs and NCAR in the development of a comprehensive, next-generation global climate model. This model incorporates the most current physics and numerics and capably exploits the latest massively parallel computers. One of LLNL's roles in this collaboration is the scalable parallelization of NASA's finite-volume atmospheric dynamical core. We have implemented multiple two-dimensional domain decompositions, where the different decompositions are connected by high-speed transposes. Additional performance is obtained through shared memory parallelization constructs and one-sided interprocess communication. The finite-volume dynamical core is particularly important to atmospheric chemistry simulations, where LLNL has a leading role.
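
    The sketch below illustrates, with plain NumPy on one process, what a transpose between two two-dimensional domain decompositions accomplishes; a production model performs the exchange with MPI all-to-all communication rather than the gather used here, and the array shapes are made up.

      import numpy as np

      # A miniature (level, lat, lon) atmospheric field.
      field = np.arange(4 * 8 * 8, dtype=float).reshape(4, 8, 8)

      # Decomposition A: each of 4 processors owns a latitude band with all
      # levels local (good for column physics). Decomposition B: each of 2
      # processors owns a block of levels with all latitudes local (good for
      # horizontal transport).
      a_patches = np.array_split(field, 4, axis=1)      # split along latitude

      def transpose_decomposition(patches):
          """Re-partition from lat-split to level-split; a real model exchanges
          the blocks with MPI all-to-all instead of gathering the full field."""
          full = np.concatenate(patches, axis=1)
          return np.array_split(full, 2, axis=0)        # split along levels

      b_patches = transpose_decomposition(a_patches)
      print([p.shape for p in a_patches], "->", [p.shape for p in b_patches])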

  14. Vision 20/20: Automation and advanced computing in clinical radiation oncology

    SciTech Connect

    Moore, Kevin L. Moiseenko, Vitali; Kagadis, George C.; McNutt, Todd R.; Mutic, Sasa

    2014-01-15

    This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

  15. COMPUTATIONAL SCIENCE CENTER

    SciTech Connect

    DAVENPORT, J.

    2006-11-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

  16. Computational Approaches to Enhance Nanosafety and Advance Nanomedicine

    NASA Astrophysics Data System (ADS)

    Mendoza, Eduardo R.

    With the increasing use of nanoparticles in food processing, filtration/purification and consumer products, as well as the huge potential of their use in nanomedicine, a quantitative understanding of the effects of nanoparticle uptake and transport is needed. We provide examples of novel methods for modeling complex bio-nano interactions which are based on stochastic process algebras. Since model construction presumes sufficient availability of experimental data, recent developments in "nanoinformatics", an emerging discipline analogous to bioinformatics, in building an accessible information infrastructure are subsequently discussed. Both computational areas offer opportunities for Filipinos to engage in collaborative, cutting edge research in this impactful field.

  17. An integrated computer system for preliminary design of advanced aircraft.

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.; Sobieszczanski, J.; Landrum, E. J.

    1972-01-01

    A progress report is given on the first phase of a research project to develop a system of Integrated Programs for Aerospace-Vehicle Design (IPAD) which is intended to automate to the largest extent possible the preliminary and detailed design of advanced aircraft. The approach used is to build a pilot system and simultaneously to carry out two major contractual studies to define a practical IPAD system preparatory to programming. The paper summarizes the specifications and goals of the IPAD system, the progress to date, and any conclusions reached regarding its feasibility and scope. Sample calculations obtained with the pilot system are given for aircraft preliminary designs optimized with respect to discipline parameters, such as weight or L/D, and these results are compared with designs optimized with respect to overall performance parameters, such as range or payload.

  18. Inference on arthropod demographic parameters: computational advances using R.

    PubMed

    Maia, Aline De Holanda Nunes; Pazianotto, Ricardo Antonio De Almeida; Luiz, Alfredo José Barreto; Marinho-Prado, Jeanne Scardini; Pervez, Ahmad

    2014-02-01

    We developed a computer program for life table analysis using the open source, free software programming environment R. It is useful to quantify chronic nonlethal effects of treatments on arthropod populations by summarizing information on their survival and fertility in key population parameters referred to as fertility life table parameters. Statistical inference on fertility life table parameters is not trivial because it requires the use of computationally intensive methods for variance estimation. Our code presents some advantages with respect to a previous program developed in the Statistical Analysis System: additional multiple comparison tests were incorporated for the analysis of qualitative factors; a module for regression analysis was implemented, thus allowing analysis of quantitative factors such as temperature or agrochemical doses; and availability is guaranteed for users, since it was developed in an open source, free software programming environment. To illustrate the descriptive and inferential analysis implemented in lifetable.R, we present and discuss two examples: 1) a study quantifying the influence of the proteinase inhibitor berenil on the eucalyptus defoliator Thyrinteina arnobia (Stoll) and 2) a study investigating the influence of temperature on demographic parameters of a predaceous ladybird, Hippodamia variegata (Goeze). PMID:24665730
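
    A Python sketch of the core life table computation (the published program itself is in R): net reproductive rate, generation time, and the intrinsic rate of increase from the Euler-Lotka equation. The cohort schedules are hypothetical, and the inferential machinery (variance estimation, multiple comparisons, regression) is omitted.

      import numpy as np

      def life_table_parameters(lx, mx):
          """Net reproductive rate R0, mean generation time T, and the intrinsic
          rate of increase r obtained by bisection on the Euler-Lotka equation
          sum_x exp(-r*x) * l(x) * m(x) = 1."""
          lx, mx = np.asarray(lx, float), np.asarray(mx, float)
          x = np.arange(lx.size, dtype=float)       # age classes, e.g. days
          r0 = np.sum(lx * mx)
          t_gen = np.sum(x * lx * mx) / r0
          lo, hi = -2.0, 2.0                        # bracket for r
          for _ in range(100):
              r = 0.5 * (lo + hi)
              if np.sum(np.exp(-r * x) * lx * mx) > 1.0:
                  lo = r                            # still above 1: r must be larger
              else:
                  hi = r
          return r0, t_gen, r

      # Hypothetical cohort schedules: survival l(x) and per-age fecundity m(x)
      r0, t_gen, r = life_table_parameters(lx=[1.0, 0.9, 0.8, 0.5, 0.2],
                                           mx=[0.0, 1.0, 2.0, 1.0, 0.5])
      print(f"R0={r0:.2f}  T={t_gen:.2f}  r={r:.3f}")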

  19. Recent advances in computer camera methods for machine vision

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1998-10-01

    During the past year, several new computer camera methods (hardware and software) have been developed which have applications in machine vision. These are described below, along with some test results. The improvements are generally in the direction of higher speed and greater parallelism. A PCI interface card has been designed which is adaptable to multiple CCD types, both color and monochrome. A newly designed A/D converter allows for a choice of 8 or 10-bit conversion resolution and a choice of two different analog inputs. Thus, by using four of these converters feeding the 32-bit PCI data bus, up to 8 camera heads can be used with a single PCI card, and four camera heads can be operated in parallel. The card has been designed so that any of 8 different CCD types can be used with it (6 monochrome and 2 color CCDs) ranging in resolution from 192 by 165 pixels up to 1134 by 972 pixels. In the area of software, a method has been developed to better utilize the decision-making capability of the computer along with the sub-array scan capabilities of many CCDs. Specifically, it is shown below how to achieve a dual scan mode camera system wherein one scan mode is a low density, high speed scan of a complete image area, and a higher density sub-array scan is used in those areas where changes have been observed. The name given to this technique is adaptive sub-array scanning.
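
    A minimal Python sketch of the dual scan mode idea: a low-density full-frame pass, change detection, then high-density sub-array scans only where changes were observed. On real hardware the sub-array readout happens on the CCD itself; here it is simulated on an array, and all sizes and thresholds are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)
      STEP = 4                                     # coarse-scan decimation factor

      def coarse_scan(frame):
          """Low-density, high-speed pass: read every STEP-th pixel."""
          return frame[::STEP, ::STEP].astype(int)

      def subarray_scan(frame, cell):
          """High-density pass over the block around one changed coarse cell."""
          r, c = cell * STEP
          return frame[r:r + STEP, c:c + STEP]

      prev = rng.integers(0, 200, (64, 64), dtype=np.uint8)
      curr = prev.copy()
      curr[20:28, 40:48] += 50                     # something moved in this region

      # Re-scan at full density only where the coarse frames differ noticeably.
      changed = np.argwhere(np.abs(coarse_scan(curr) - coarse_scan(prev)) > 10)
      patches = [subarray_scan(curr, cell) for cell in changed]
      print(f"{len(patches)} of {(64 // STEP) ** 2} blocks re-scanned at full density")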

  20. ADVANCED COMPUTATIONAL MODEL FOR THREE-PHASE SLURRY REACTORS

    SciTech Connect

    Goodarz Ahmadi

    2001-10-01

    In the second year of the project, the Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column is further developed. The approach uses an Eulerian analysis of liquid flows in the bubble column, and makes use of the Lagrangian trajectory analysis for the bubble and particle motions. An experimental setup for studying a two-dimensional bubble column is also developed. The operation of the bubble column is being tested and diagnostic methodology for quantitative measurements is being developed. An Eulerian computational model for the flow condition in the two-dimensional bubble column is also being developed. The liquid and bubble motions are being analyzed and the results are being compared with the experimental setup. Solid-fluid mixture flows in ducts and passages at different angles of orientation were analyzed. The model predictions were compared with the experimental data and good agreement was found. Gravity chute flows of solid-liquid mixtures are also being studied. Further progress was also made in developing a thermodynamically consistent model for multiphase slurry flows with and without chemical reaction in a state of turbulent motion. The balance laws are obtained and the constitutive laws are being developed. Progress was also made in measuring concentration and velocity of particles of different sizes near a wall in a duct flow. The technique of Phase-Doppler anemometry was used in these studies. The general objective of this project is to provide the needed fundamental understanding of three-phase slurry reactors in Fischer-Tropsch (F-T) liquid fuel synthesis. The other main goal is to develop a computational capability for predicting the transport and processing of three-phase coal slurries. The specific objectives are: (1) To develop a thermodynamically consistent rate-dependent anisotropic model for multiphase slurry flows with and without chemical reaction for application to coal liquefaction. Also establish the

  1. Activities and operations of the Advanced Computing Research Facility, July-October 1986

    SciTech Connect

    Pieper, G.W.

    1986-01-01

    Research activities and operations of the Advanced Computing Research Facility (ACRF) at Argonne National Laboratory are discussed for the period from July 1986 through October 1986. The facility is currently supported by the Department of Energy, and is operated by the Mathematics and Computer Science Division at Argonne. Over the past four-month period, a new commercial multiprocessor, the Intel iPSC-VX/d4 hypercube, was installed. In addition, four other commercial multiprocessors continue to be available for research - an Encore Multimax, a Sequent Balance 21000, an Alliant FX/8, and an Intel iPSC/d5 - as well as a locally designed multiprocessor, the Lemur. These machines are being actively used by scientists at Argonne and throughout the nation in a wide variety of projects concerning computer systems with parallel and vector architectures. A variety of classes, workshops, and seminars have been sponsored to train researchers on computing techniques for the advanced computer systems at the Advanced Computing Research Facility. For example, courses were offered on writing programs for parallel computer systems, and the facility hosted the first annual Alliant users group meeting. A Sequent users group meeting and a two-day workshop on performance evaluation of parallel computers and programs are being organized.

  2. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    SciTech Connect

    Hules, J.

    1996-11-01

    National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  3. Experimental and computing strategies in advanced material characterization problems

    SciTech Connect

    Bolzon, G.

    2015-10-28

    The mechanical characterization of materials relies more and more often on sophisticated experimental methods that permit the acquisition of a large amount of data while, at the same time, reducing the invasiveness of the tests. This evolution accompanies the growing demand for non-destructive diagnostic tools that assess the safety level of components in use in structures and infrastructures, for instance in the strategic energy sector. Advanced material systems and properties that are not amenable to traditional techniques, for instance thin layered structures and their adhesion to the relevant substrates, can also be characterized by means of combined experimental-numerical tools elaborating data acquired by full-field measurement techniques. In this context, parameter identification procedures involve the repeated simulation of laboratory or in situ tests by sophisticated and usually expensive non-linear analyses, while in some situations reliable and accurate results would be required in real time. The effectiveness and the filtering capabilities of reduced models based on decomposition and interpolation techniques can be profitably used to meet these conflicting requirements. This communication summarizes some results recently achieved in this field by the author and her co-workers. The aim is to foster further interaction between the engineering and mathematical communities.
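
    A minimal Python sketch of one such reduced model: a POD basis built from snapshot data via truncated SVD, onto which new full-field measurements are compressed. The snapshot matrix is synthetic, and a real identification loop would add the parameter-to-coefficient map on top.

      import numpy as np

      rng = np.random.default_rng(3)

      # Snapshot matrix: each column is one full-field result (e.g., a simulated
      # displacement field for one trial parameter set); rank kept low on purpose.
      n_dof, n_snap = 2000, 40
      snapshots = rng.standard_normal((n_dof, 5)) @ rng.standard_normal((5, n_snap))

      # Proper Orthogonal Decomposition: a truncated SVD yields the reduced basis.
      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      k = int(np.searchsorted(energy, 0.999)) + 1       # modes for 99.9% energy
      basis = U[:, :k]

      # New measurements compress to k coefficients; evaluating candidate
      # parameters in this reduced space is what makes near-real-time
      # identification feasible instead of repeated full non-linear analyses.
      field = snapshots[:, 0] + 0.01 * rng.standard_normal(n_dof)
      coeffs = basis.T @ field
      err = np.linalg.norm(basis @ coeffs - field) / np.linalg.norm(field)
      print(f"{k} modes, relative reconstruction error {err:.1e}")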

  4. Experimental and computing strategies in advanced material characterization problems

    NASA Astrophysics Data System (ADS)

    Bolzon, G.

    2015-10-01

    The mechanical characterization of materials relies increasingly on sophisticated experimental methods that permit the acquisition of large amounts of data while reducing the invasiveness of the tests. This evolution accompanies the growing demand for non-destructive diagnostic tools that assess the safety level of components in use in structures and infrastructures, for instance in the strategic energy sector. Advanced material systems and properties that are not amenable to traditional techniques, for instance thin layered structures and their adhesion to the relevant substrates, can also be characterized by means of combined experimental-numerical tools that elaborate data acquired by full-field measurement techniques. In this context, parameter identification procedures involve the repeated simulation of laboratory or in situ tests by sophisticated and usually expensive non-linear analyses, while in some situations reliable and accurate results are required in real time. The effectiveness and the filtering capabilities of reduced models based on decomposition and interpolation techniques can be profitably exploited to meet these conflicting requirements. This communication summarizes some results recently achieved in this field by the author and her co-workers. The aim is to foster further interaction between the engineering and mathematical communities.

  5. Strategies for casualty mitigation programs by using advanced tsunami computation

    NASA Astrophysics Data System (ADS)

    IMAI, K.; Imamura, F.

    2012-12-01

    1. Purpose of the study: Based on scenarios of great earthquakes along the Nankai trough, this study aims to estimate tsunami run-up and high-accuracy inundation processes in coastal areas, including rivers. Using a practical tsunami analysis model that accounts for detailed topography, land use, and climate change in realistic present and expected future environments, we examined the run-up and tsunami inundation process. From these results we estimated tsunami damage and obtained information for the mitigation of human casualties. Considering the time series from the occurrence of the earthquake and the risk of tsunami damage, we provide disaster risk information displayed in a tsunami hazard and risk map in order to mitigate casualties. 2. Creating a tsunami hazard and risk map: From a practical analytical tsunami model (a long-wave approximation) and high-resolution (5 m) topography including detailed shoreline, river, building, and housing data, we present an advanced analysis of tsunami inundation that takes land use into account. Based on the inundation results and their analysis, it is possible to draw a tsunami hazard and risk map with information on human casualties, building damage estimates, drift of vehicles, etc. 3. Contents of disaster prevention information: Improving the distribution of hazard, risk, and evacuation information requires three steps. (1) Provide basic information such as tsunami arrival information, evacuation areas and routes, and the location of tsunami evacuation facilities. (2) Provide, as additional information, the time when inundation starts, records of past inundation, the location of facilities with hazardous materials, and the presence or absence of public facilities and underground areas that require evacuation. (3) Provide information to support disaster response, such as infrastructure and traffic network damage prediction
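
    For reference, the 'long-wave approximation' used in practical tsunami computation conventionally denotes the nonlinear shallow-water equations with bottom friction. The abstract does not give the authors' exact formulation, so the standard flux form below is background rather than their specific model:

    \frac{\partial \eta}{\partial t} + \frac{\partial M}{\partial x} + \frac{\partial N}{\partial y} = 0,

    \frac{\partial M}{\partial t} + \frac{\partial}{\partial x}\left(\frac{M^2}{D}\right) + \frac{\partial}{\partial y}\left(\frac{M N}{D}\right) + g D \frac{\partial \eta}{\partial x} + \frac{g n^2}{D^{7/3}} M \sqrt{M^2 + N^2} = 0,

    with a symmetric equation for N, where \eta is the free-surface elevation, h the still-water depth, D = h + \eta the total depth, M and N the depth-integrated volume fluxes in x and y, and n the Manning roughness coefficient.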

  6. CART V: recent advancements in computer-aided camouflage assessment

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Müller, Markus

    2011-05-01

    In order to facilitate systematic, computer-aided improvement of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was developed for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007-2010 [1], [2], [3], [4]). It comprises semi-automatic marking of target objects (ground truth generation), including their propagation over the image sequence, evaluation via user-defined feature extractors, and methods to assess an object's movement conspicuity. In this fifth part of an annual series at the SPIE conference in Orlando, this paper presents the enhancements of the past year and addresses the camouflage assessment of static and moving objects in multispectral image data that may exhibit noise or image artefacts. The presented methods explore the correlations between image processing and camouflage assessment. A novel algorithm based on template matching is presented to assess the structural inconspicuity of an object objectively and quantitatively. The results can easily be combined with an MTI (moving target indication) based movement conspicuity assessment function in order to explore the influence of object movement on the camouflage effect in different environments. As the results show, the presented methods provide a significant benefit in the field of camouflage assessment.
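
    The paper does not publish its scoring rule, but template matching of this kind is typically built on normalized cross-correlation; the sketch below shows one plausible, deliberately simple version in which a high correlation peak between a target patch and the scene indicates structural conspicuity. All names and parameters here are illustrative, not CART's algorithm.

    import numpy as np

    def peak_ncc(image: np.ndarray, template: np.ndarray) -> float:
        """Peak normalized cross-correlation of `template` over `image`.

        A strong peak means the target's local structure is easy to find
        against the background, i.e. it is poorly camouflaged."""
        th, tw = template.shape
        t = (template - template.mean()) / (template.std() + 1e-12)
        best = -1.0
        for y in range(image.shape[0] - th + 1):
            for x in range(image.shape[1] - tw + 1):
                win = image[y:y + th, x:x + tw]
                w = (win - win.mean()) / (win.std() + 1e-12)
                best = max(best, float((w * t).mean()))
        return best

    # Illustrative use on synthetic data: score a marked object patch.
    rng = np.random.default_rng(0)
    scene = rng.standard_normal((64, 64))
    target = scene[20:28, 30:38].copy()  # ground-truth-marked object patch
    print(peak_ncc(scene, target))       # near 1.0: structurally conspicuous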

  7. ADVANCED METHODS FOR THE COMPUTATION OF PARTICLE BEAM TRANSPORT AND THE COMPUTATION OF ELECTROMAGNETIC FIELDS AND MULTIPARTICLE PHENOMENA

    SciTech Connect

    Alex J. Dragt

    2012-08-31

    Since 1980, under the grant DEFG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

  8. ADVANCED COMPUTATIONAL MODEL FOR THREE-PHASE SLURRY REACTORS

    SciTech Connect

    Goodarz Ahmadi

    2004-10-01

    In this project, an Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column was developed. The approach used an Eulerian analysis of liquid flows in the bubble column and a Lagrangian trajectory analysis for the bubble and particle motions. Bubble-bubble and particle-particle collisions are included in the model. The model predictions were compared with experimental data and good agreement was found. An experimental setup for studying two-dimensional bubble columns was developed. The multiphase flow conditions in the bubble column were measured using optical image processing and Particle Image Velocimetry (PIV) techniques. A simple shear flow device for studying bubble motion in a constant shear flow field was also developed, and the flow conditions in this device were studied using the PIV method. The concentration and velocity of particles of different sizes near a wall in a duct flow were also measured, using the technique of phase-Doppler anemometry. An Eulerian volume-of-fluid (VOF) computational model for the flow conditions in the two-dimensional bubble column was also developed. The liquid and bubble motions were analyzed and the results were compared with flow patterns observed in the experimental setup. Solid-fluid mixture flows in ducts and passages at different angles of orientation were also analyzed; the model predictions were compared with the experimental data and good agreement was found. Gravity chute flows of solid-liquid mixtures were also studied, and the simulation results were compared with the experimental data and discussed. Finally, a thermodynamically consistent model for multiphase slurry flows, with and without chemical reaction, in a state of turbulent motion was developed; the balance laws were obtained and the constitutive laws established.
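
    As a concrete illustration of the Lagrangian half of such a formulation, the sketch below advances a single bubble through a prescribed liquid velocity field under buoyancy and a linear drag law. The response time, the flow field, and the neglect of collisions, lift, and added mass are all simplifying assumptions for illustration, not the project's closures.

    import numpy as np

    G = 9.81        # gravity, m/s^2
    TAU_B = 0.005   # bubble response time, s (assumed)

    def liquid_velocity(x: np.ndarray, t: float) -> np.ndarray:
        # Stand-in for the Eulerian liquid solution at position x, time t.
        return np.array([0.05, 0.1 * np.sin(x[0])])

    def step(x, v, t, dt):
        """One explicit-Euler step of the bubble equation of motion."""
        u = liquid_velocity(x, t)
        # Linear drag toward the local liquid velocity plus net buoyancy
        # (a bubble is much lighter than the liquid, so buoyancy acts upward).
        a = (u - v) / TAU_B + np.array([0.0, G])
        return x + dt * v, v + dt * a

    x, v, dt = np.zeros(2), np.zeros(2), 1e-4
    for n in range(10_000):            # one second of simulated time
        x, v = step(x, v, n * dt, dt)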

  9. WAATS: A computer program for Weights Analysis of Advanced Transportation Systems

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.

    1974-01-01

    A historical weight estimating technique for advanced transportation systems is presented. The classical approach to weight estimation is discussed, and sufficient data are presented to estimate weights for a large spectrum of flight vehicles, including horizontal and vertical takeoff aircraft, boosters, and reentry vehicles. A computer program, WAATS (Weights Analysis for Advanced Transportation Systems), embracing the techniques discussed, has been written, and user instructions are presented. The program was developed for use in the ODIN (Optimal Design Integration) system.

  10. Advanced entry guidance algorithm with landing footprint computation

    NASA Astrophysics Data System (ADS)

    Leavitt, James Aaron

    -determined angle of attack profile. The method is also capable of producing orbital footprints using an automatically-generated set of angle of attack profiles of varying range, with the lowest profile designed for near-maximum range in the absence of an active heat load constraint. The accuracy of the footprint method is demonstrated by direct comparison with footprints computed independently by an optimization program.

  11. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    SciTech Connect

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand its role as a NEAMS user facility.

  12. Final Report: SciDAC Computational Astrophysics Consortium (at Princeton University)

    SciTech Connect

    Burrows, Adam

    2012-12-03

    Supernova explosions are the central events in astrophysics. They are the major agencies of change in the interstellar medium, driving star formation and the evolution of galaxies. Their gas remnants are the birthplaces of the cosmic rays. Such is their brightness that they can be used as standard candles to measure the size and geometry of the universe and their investigation draws on particle and nuclear physics, radiative transfer, kinetic theory, gravitational physics, thermodynamics, and the numerical arts. Hence, supernovae are unrivaled astrophysical laboratories. We will develop new state-of-the-art multi-dimensional radiation hydrodynamic codes to address this and other related astrophysical phenomena.

  13. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    NASA Technical Reports Server (NTRS)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should provide a tool for designing better aerospace vehicles while reducing development costs, by performing computations using Navier-Stokes solution algorithms and by permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; consequently, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers also needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations, and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  14. Research in Computational Aeroscience Applications Implemented on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Wigton, Larry

    1996-01-01

    Improvements to the numerical linear algebra routines for use in new Navier-Stokes codes, specifically Tim Barth's unstructured grid code, with spin-offs to TRANAIR, are reported. A fast distance calculation routine for Navier-Stokes codes using the new one-equation turbulence models was written. The primary focus of this work was improving matrix-iterative methods. New algorithms have been developed which activate the full potential of classical Cray-class computers as well as distributed-memory parallel computers.

  15. Impact of computer advances on future finite element computations [for aircraft and spacecraft design]

    NASA Technical Reports Server (NTRS)

    Fulton, Robert E.

    1985-01-01

    Research performed over the past 10 years in engineering data base management and parallel computing is discussed, and certain opportunities for research toward the next generation of structural analysis capability are proposed. Particular attention is given to data base management associated with the IPAD project and parallel processing associated with the Finite Element Machine project, both sponsored by NASA, and a near term strategy for a distributed structural analysis capability based on relational data base management software and parallel computers for a future structural analysis system.

  16. 5 CFR 550.404 - Computation of advance payments and evacuation payments; time periods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... evacuation payments; time periods. 550.404 Section 550.404 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Payments During Evacuation § 550.404 Computation of advance payments and evacuation payments; time periods. (a) Payments shall be based on the...

  17. 5 CFR 550.404 - Computation of advance payments and evacuation payments; time periods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... evacuation payments; time periods. 550.404 Section 550.404 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Payments During Evacuation § 550.404 Computation of advance payments and evacuation payments; time periods. (a) Payments shall be based on the...

  18. Computers-for-edu: An Advanced Business Application Programming (ABAP) Teaching Case

    ERIC Educational Resources Information Center

    Boyle, Todd A.

    2007-01-01

    The "Computers-for-edu" case is designed to provide students with hands-on exposure to creating Advanced Business Application Programming (ABAP) reports and dialogue programs, as well as navigating various mySAP Enterprise Resource Planning (ERP) transactions needed by ABAP developers. The case requires students to apply a wide variety of ABAP…

  19. COMPUTATIONAL TOXICOLOGY ADVANCES: EMERGING CAPABILITIES FOR DATA EXPLORATION AND SAR MODEL DEVELOPMENT

    EPA Science Inventory

    Computational Toxicology Advances: Emerging capabilities for data exploration and SAR model development
    Ann M. Richard and ClarLynda R. Williams, National Health & Environmental Effects Research Laboratory, US EPA, Research Triangle Park, NC, USA; email: richard.ann@epa.gov

  20. Advanced Telecommunications and Computer Technologies in Georgia Public Elementary School Library Media Centers.

    ERIC Educational Resources Information Center

    Rogers, Jackie L.

    The purpose of this study was to determine what recent progress had been made in Georgia public elementary school library media centers regarding access to advanced telecommunications and computer technologies as a result of special funding. A questionnaire addressed the following areas: automation and networking of the school library media center…

  1. PARTNERING WITH DOE TO APPLY ADVANCED BIOLOGICAL, ENVIRONMENTAL, AND COMPUTATIONAL SCIENCE TO ENVIRONMENTAL ISSUES

    EPA Science Inventory

    On February 18, 2004, the U.S. Environmental Protection Agency and Department of Energy signed a Memorandum of Understanding to expand the research collaboration of both agencies to advance biological, environmental, and computational sciences for protecting human health and the ...

  2. SciDAC's Earth System Grid Center for Enabling Technologies Semi-Annual Progress Report for the Period April 1, 2009 through September 30, 2009

    SciTech Connect

    Williams, Dean N.; Foster, I. T.; Middleton, D. E.

    2009-10-15

    This report summarizes work carried out by the ESG-CET during the period April 1, 2009 through September 30, 2009. It includes discussion of highlights, overall progress, period goals, collaborations, papers, and presentations. To learn more about our project, and to find previous reports, please visit the Earth System Grid Center for Enabling Technologies (ESG-CET) website. This report will be forwarded to the DOE SciDAC program management, the Office of Biological and Environmental Research (OBER) program management, and national and international collaborators and stakeholders (e.g., the Community Climate System Model (CCSM), the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5), the Climate Science Computational End Station (CCES), the SciDAC II: A Scalable and Extensible Earth System Model for Climate Change Science, the North American Regional Climate Change Assessment Program (NARCCAP), and other wide-ranging climate model evaluation activities). During this semi-annual reporting period, the ESG-CET team continued its efforts to complete the software components needed for the ESG Gateway and Data Node. These components include Data Versioning, Data Replication, DataMover-Lite (DML) and Bulk Data Mover (BDM), Metrics, Product Services, and Security, all joining together to form ESG-CET's first beta release. The launch of the beta release is scheduled for late October with the installation of ESG Gateways at NCAR and LLNL/PCMDI. Using the developed ESG Data Publisher, the ESG II CMIP3 (IPCC AR4) data holdings - approximately 35 TB - will be among the first datasets to be published into the new ESG enterprise system. In addition, NCAR's ESG II data holdings - approximately 200 TB - will also be published into the new system. This period also saw the testing of the ESG Data Node at various collaboration sites, including the British Atmospheric Data Center (BADC), the Max-Planck-Institute for Meteorology, the University of Tokyo Center for

  3. A first attempt to bring computational biology into advanced high school biology classrooms.

    PubMed

    Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S

    2011-10-01

    Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and to biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element on genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach it alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.
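
    The paper's actual exercises are in the linked curriculum; as a flavor of the kind of computation such a unit can center on, here is a minimal Wright-Fisher style simulation of selection and genetic drift (our sketch, with arbitrary parameter choices, not the authors' material).

    import random

    def simulate(pop_size=100, p0=0.5, fitness_a=1.05, generations=50):
        """Track the frequency of allele A under selection and drift."""
        p, history = p0, [p0]
        for _ in range(generations):
            # Selection re-weights allele A; binomial resampling of the
            # next generation models random drift in a finite population.
            w = fitness_a * p / (fitness_a * p + (1.0 - p))
            p = sum(random.random() < w for _ in range(pop_size)) / pop_size
            history.append(p)
        return history

    print(simulate()[-1])  # frequency of allele A after 50 generations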

  4. Building an advanced climate model: Program plan for the CHAMMP (Computer Hardware, Advanced Mathematics, and Model Physics) Climate Modeling Program

    SciTech Connect

    Not Available

    1990-12-01

    The issue of global warming and related climatic changes resulting from increasing concentrations of greenhouse gases in the atmosphere has received prominent attention during the past few years. The Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) Climate Modeling Program is designed to contribute directly to the rapid improvement of climate models. The goal of the CHAMMP Climate Modeling Program is to develop, verify, and apply a new generation of climate models within a coordinated framework that incorporates the best available scientific and numerical approaches to represent physical, biogeochemical, and ecological processes; that fully utilizes the hardware and software capabilities of new computer architectures; that probes the limits of climate predictability; and finally that can be used to address the challenging problem of understanding the greenhouse climate issue through the ability of the models to simulate time-dependent climatic changes over extended times and with regional resolution.

  5. FY05-FY06 Advanced Simulation and Computing Implementation Plan, Volume 2

    SciTech Connect

    Baron, A L

    2004-07-19

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapon design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile life extension programs and the resolution of significant finding investigations (SFIs). This requires a balanced system of technical staff, hardware, simulation software, and computer science solutions.

  6. PMEL contributions to the collaboration: SCALING THE EARTH SYSTEM GRID TO PETASCALE DATA for the DOE SciDAC's Earth System Grid Center for Enabling Technologies

    SciTech Connect

    Hankin, Steve

    2012-06-01

    Drawing to a close after five years of funding from DOE's ASCR and BER program offices, the SciDAC-2 project called the Earth System Grid (ESG) Center for Enabling Technologies has successfully established a new capability for serving data from distributed centers. The system enables users to access, analyze, and visualize data using a globally federated collection of networks, computers and software. The ESG software, now known as the Earth System Grid Federation (ESGF), has attracted a broad developer base and has been widely adopted, and it is now being utilized to serve the most comprehensive multi-model climate data sets in the world. The system is used to support international climate model intercomparison activities as well as high-profile U.S. DOE, NOAA, NASA, and NSF projects. It currently provides more than 25,000 users access to more than half a petabyte of climate data (from models and from observations) and has enabled over 1,000 scientific publications.

  7. Advances in computer-aided design and computer-aided manufacture technology.

    PubMed

    Calamia, J R

    1996-01-01

    Although the development of computer-aided design (CAD) and computer-aided manufacture (CAM) technology and the benefits of increased productivity became obvious in the automobile and aerospace industries in the 1970s, investigations of this technology's application in the field of dentistry did not begin until the 1980s. Only now are we beginning to see the fruits of this work with the commercial availability of some systems; the potential for this technology seems boundless. This article reviews the recent literature with emphasis on the period from June 1992 to May 1993. This review should familiarize the reader with some of the latest developments in this technology, including a brief description of some systems currently available and the clinical and economical rationale for their acceptance into the dental mainstream. This article concentrates on a particular system, the Cerec (Siemens/Pelton and Crane, Charlotte, NC) system, for three reasons: First, this system has been available since 1985 and, as a result, has a track record of almost 7 years of data. Most of the data have just recently been released and consequently, much of this year's literature on CAD-CAM is monopolized by studies using this system. Second, this system was developed as a mobile, affordable, direct chairside CAD-CAM restorative method. As such, it is of special interest to the patient, providing a one-visit restoration. Third, the author is currently engaged in research using this particular system and has a working knowledge of this system's capabilities.

  8. Lambda Station: Alternate network path forwarding for production SciDAC applications

    SciTech Connect

    Grigoriev, Maxim; Bobyshev, Andrey; Crawford, Matt; DeMar, Phil; Grigaliunas, Vyto; Moibenko, Alexander; Petravick, Don; Newman, Harvey; Steenberg, Conrad; Thomas, Michael

    2007-09-01

    The LHC era will start very soon, creating immense data volumes capable of demanding allocation of an entire network circuit for task-driven applications. Circuit-based alternate network paths are one solution to meeting the LHC high-bandwidth network requirements. The Lambda Station project is aimed at addressing growing requirements for dynamic allocation of alternate network paths. Lambda Station facilitates the rerouting of designated traffic through site LAN infrastructure onto so-called 'high-impact' wide-area networks. A prototype Lambda Station, developed with a Service Oriented Architecture (SOA) approach in mind, is presented. Lambda Station has been successfully integrated into the production version of the Storage Resource Manager (SRM) and deployed at the US CMS Tier-1 center at Fermilab, as well as at the US CMS Tier-2 site at Caltech. This paper discusses experiences using the prototype system with production SciDAC applications for data movement between Fermilab and Caltech. The architecture and design principles of the production version of the Lambda Station software, currently being implemented as Java-based web services, are also presented.

  9. Advances in computer-aided design and computer-aided manufacture technology.

    PubMed

    Calamia, J R

    1994-01-01

    Although the development of computer-aided design (CAD) and computer-aided manufacture (CAM) technology and the benefits of increased productivity became obvious in the automobile and aerospace industries in the 1970s, investigations of this technology's application in the field of dentistry did not begin until the 1980s. Only now are we beginning to see the fruits of this work with the commercial availability of some systems; the potential for this technology seems boundless. This article reviews the recent literature with emphasis on the period from June 1992 to May 1993. This review should familiarize the reader with some of the latest developments in this technology, including a brief description of some systems currently available and the clinical and economical rationale for their acceptance into the dental mainstream. This article concentrates on a particular system, the Cerec (Siemens/Pelton and Crane, Charlotte, NC) system, for three reasons: first, this system has been available since 1985 and, as a result, has a track record of almost 7 years of data. Most of the data have just recently been released and consequently, much of this year's literature on CAD-CAM is monopolized by studies using this system. Second, this system was developed as a mobile, affordable, direct chairside CAD-CAM restorative method. As such, it is of special interest to the dentist who will offer this new technology directly to the patient, providing a one-visit restoration. Third, the author is currently engaged in research using this particular system and has a working knowledge of this system's capabilities.

  10. Computation of Loads on the McDonnell Douglas Advanced Bearingless Rotor

    NASA Technical Reports Server (NTRS)

    Nguyen, Khanh; Lauzon, Dan; Anand, Vaidyanathan

    1994-01-01

    Computed results from UMARC and DART analyses are compared with blade bending moment and vibratory hub load data obtained from a full-scale wind tunnel test of the McDonnell Douglas five-bladed advanced bearingless rotor. The 5-per-rev vibratory hub load data are corrected using results from a dynamic calibration of the rotor balance. Comparisons with UMARC-computed blade bending moments at different flight conditions are poor to fair, while DART results are fair to good. Using the free wake module, UMARC adequately computes the 5P vibratory hub loads for this rotor, capturing both their magnitude and their variation with forward speed. DART employs a uniform inflow wake model and does not adequately compute the 5P vibratory hub loads for this rotor.

  11. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    McCoy, Michel; Archer, Bill; Matzen, M. Keith

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  12. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014)

    NASA Astrophysics Data System (ADS)

    Fiala, L.; Lokajicek, M.; Tumova, N.

    2015-05-01

    This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014); this year the motto was 'bridging disciplines'. The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research; Data Analysis - Algorithms and Tools; and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing, and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work on making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret who worked with the local contacts and made this conference possible as well as to the program

  13. Recovery Act: Advanced Interaction, Computation, and Visualization Tools for Sustainable Building Design

    SciTech Connect

    Greenberg, Donald P.; Hencey, Brandon M.

    2013-08-20

    Current building energy simulation technology requires excessive labor, time, and expertise to create building energy models and excessive computational time for accurate simulations, and its results are difficult to interpret. These deficiencies can be ameliorated using modern graphical user interfaces and algorithms which take advantage of modern computer architectures and display capabilities. To prove this hypothesis, we developed an experimental test bed for building energy simulation. This novel test bed environment offers an easy-to-use interactive graphical interface, provides access to innovative simulation modules that run at accelerated computational speeds, and presents new graphics visualization methods for interpreting simulation results. Our system offers the promise of dramatic ease of use in comparison with currently available building energy simulation tools. Its modular structure makes it suitable for early-stage building design, as a research platform for the investigation of new simulation methods, and as a tool for teaching concepts of sustainable design. Improvements in the accuracy and execution speed of many of the simulation modules are based on the modification of advanced computer graphics rendering algorithms. Significant performance improvements are demonstrated in several computationally expensive energy simulation modules. The incorporation of these modern graphical techniques should advance the state of the art in whole building energy analysis and building performance simulation, particularly at the conceptual design stage when decisions have the greatest impact. More importantly, these better simulation tools will enable the transition from prescriptive to performative energy codes, resulting in better, more efficient designs for our future built environment.

  14. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC) Report: Top Ten Exascale Research Challenges

    SciTech Connect

    Lucas, Robert; Ang, James; Bergman, Keren; Borkar, Shekhar; Carlson, William; Carrington, Laura; Chiu, George; Colwell, Robert; Dally, William; Dongarra, Jack; Geist, Al; Haring, Rud; Hittinger, Jeffrey; Hoisie, Adolfy; Klein, Dean (Micron); Kogge, Peter; Lethin, Richard; Sarkar, Vivek; Schreiber, Robert; Shalf, John; Sterling, Thomas; Stevens, Rick; Bashor, Jon; Brightwell, Ron; Coteus, Paul; Debenedictus, Erik; Hiller, Jon; Kim, K. H.; Langston, Harper; Murphy, Richard (Micron); Webster, Clayton; Wild, Stefan; Grider, Gary; Ross, Rob; Leyffer, Sven; Laros III, James

    2014-02-10

    Exascale computing systems are essential for the scientific fields that will transform the 21st century global economy, including energy, biotechnology, nanotechnology, and materials science. Progress in these fields is predicated on the ability to perform advanced scientific and engineering simulations, and analyze the deluge of data. On July 29, 2013, ASCAC was charged by Patricia Dehmer, the Acting Director of the Office of Science, to assemble a subcommittee to provide advice on exascale computing. This subcommittee was directed to return a list of no more than ten technical approaches (hardware and software) that will enable the development of a system that achieves the Department's goals for exascale computing. Numerous reports over the past few years have documented the technical challenges and the non-viability of simply scaling existing computer designs to reach exascale. The technical challenges revolve around energy consumption, memory performance, resilience, extreme concurrency, and big data. Drawing from these reports and more recent experience, this ASCAC subcommittee has identified the top ten computing technology advancements that are critical to making a capable, economically viable, exascale system.

  15. A comparison of computer architectures for the NASA demonstration advanced avionics system

    NASA Technical Reports Server (NTRS)

    Seacord, C. L.; Bailey, D. G.; Larson, J. C.

    1979-01-01

    The paper compares computer architectures for the NASA demonstration advanced avionics system. Two computer architectures are described with an unusual approach to fault tolerance: a single spare processor can correct for faults in any of the distributed processors by taking on the role of a failed module. It was shown that the system must be viewed functionally in order to properly apply redundancy and achieve fault tolerance and ultra-reliability. Data are presented on complexity and mission failure probability which show that the revised version offers equivalent mission reliability at lower cost, as measured by hardware and software complexity.

  16. Robotics, Stem Cells and Brain Computer Interfaces in Rehabilitation and Recovery from Stroke; Updates and Advances

    PubMed Central

    Boninger, Michael L; Wechsler, Lawrence R.; Stein, Joel

    2014-01-01

    Objective To describe the current state and latest advances in robotics, stem cells, and brain computer interfaces in rehabilitation and recovery for stroke. Design The authors of this summary recently reviewed this work as part of a national presentation. The paper represents the information included in each area. Results Each area has seen great advances and challenges as products move to market and experiments are ongoing. Conclusion Robotics, stem cells, and brain computer interfaces all have tremendous potential to reduce disability and lead to better outcomes for patients with stroke. Continued research and investment will be needed as the field moves forward. With this investment, the potential for recovery of function is likely substantial. PMID:25313662

  17. Computational methods in the prediction of advanced subsonic and supersonic propeller induced noise: ASSPIN users' manual

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Tarkenton, G. M.

    1992-01-01

    This document describes the computational aspects of propeller noise prediction in the time domain and the use of high speed propeller noise prediction program ASSPIN (Advanced Subsonic and Supersonic Propeller Induced Noise). These formulations are valid in both the near and far fields. Two formulations are utilized by ASSPIN: (1) one is used for subsonic portions of the propeller blade; and (2) the second is used for transonic and supersonic regions on the blade. Switching between the two formulations is done automatically. ASSPIN incorporates advanced blade geometry and surface pressure modelling, adaptive observer time grid strategies, and contains enhanced numerical algorithms that result in reduced computational time. In addition, the ability to treat the nonaxial inflow case has been included.

  18. Advances in Single-Photon Emission Computed Tomography Hardware and Software.

    PubMed

    Piccinelli, Marina; Garcia, Ernest V

    2016-02-01

    Nuclear imaging techniques remain today's most reliable modality for the assessment and quantification of myocardial perfusion. In recent years, the field has experienced tremendous progress both in terms of dedicated cameras for cardiac applications and software techniques for image reconstruction. The most recent advances in single-photon emission computed tomography hardware and software are reviewed, focusing on how these improvements have resulted in an even more powerful diagnostic tool with reduced injected radiation dose and acquisition time.

  19. Recent Advances in Cardiac Computed Tomography: Dual Energy, Spectral and Molecular CT Imaging

    PubMed Central

    Danad, Ibrahim; Fayad, Zahi A.; Willemink, Martin J.; Min, James K.

    2015-01-01

    Computed tomography (CT) has evolved into a powerful diagnostic tool, and it is impossible to imagine current clinical practice without CT imaging. Due to its widespread availability, ease of clinical application, superb sensitivity for the detection of coronary artery disease (CAD), and non-invasive nature, CT has become a valuable tool within the armamentarium of the cardiologist. In the last few years, numerous technological advances in CT have occurred, including dual energy CT (DECT), spectral CT, and CT-based molecular imaging. By harnessing these advances in technology, cardiac CT has moved beyond the mere evaluation of coronary stenosis to an imaging modality that permits accurate plaque characterization, assessment of myocardial perfusion, and even probing of molecular processes involved in coronary atherosclerosis. Novel innovations in CT contrast agents and pre-clinical spectral CT devices have paved the way for CT-based molecular imaging. PMID:26068288

  20. Condition monitoring through advanced sensor and computational technology: final report (January 2002 to May 2005).

    SciTech Connect

    Kim, Jung-Taek; Luk, Vincent K.

    2005-05-01

    The overall goal of this joint research project was to develop and demonstrate advanced sensors and computational technology for continuous monitoring of the condition of components, structures, and systems in advanced and next-generation nuclear power plants (NPPs). This project included investigating and adapting several advanced sensor technologies from the Korean and US national laboratory research communities, some of which were developed and applied in non-nuclear industries. The project team investigated and developed sophisticated signal processing, noise reduction, and pattern recognition techniques and algorithms. The researchers installed sensors and conducted condition monitoring tests on two test loops, a check valve (an active component) and a piping elbow (a passive component), to demonstrate the feasibility of using advanced sensors and computational technology to achieve the project goal. Acoustic emission (AE) devices, optical fiber sensors, accelerometers, and ultrasonic transducers (UTs) were used to detect the mechanical vibratory response of the check valve and piping elbow in normal and degraded configurations. Chemical sensors were also installed to monitor the water chemistry in the piping elbow test loop. Analysis of the processed sensor data indicates that it is feasible to differentiate between the normal and degraded (with selected degradation mechanisms) configurations of these two components from the acquired sensor signals, but it is questionable whether these methods can reliably identify the level and type of degradation. Additional research and development efforts are needed to refine the differentiation techniques and to reduce the level of uncertainty.
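
    The report describes its signal processing only in general terms; one elementary building block for distinguishing normal from degraded vibratory signatures is band-limited RMS feature extraction from an accelerometer trace, sketched below. The sampling rate, frequency bands, and synthetic signal are all assumptions for illustration, not the project's actual pipeline.

    import numpy as np

    def band_rms(signal: np.ndarray, fs: float, bands) -> list:
        """RMS spectral amplitude of `signal` in each (lo, hi) band in Hz."""
        spec = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
        features = []
        for lo, hi in bands:
            mask = (freqs >= lo) & (freqs < hi)
            # Band energy from the spectrum, normalized to an RMS-like scale.
            features.append(float(np.sqrt(np.sum(np.abs(spec[mask]) ** 2)) / signal.size))
        return features

    # Illustrative use: a 120 Hz vibration tone buried in broadband noise.
    fs = 10_000.0
    t = np.arange(0, 1.0, 1.0 / fs)
    noise = 0.1 * np.random.default_rng(1).standard_normal(t.size)
    trace = np.sin(2 * np.pi * 120.0 * t) + noise
    print(band_rms(trace, fs, [(0, 200), (200, 1000), (1000, 5000)]))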

  1. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations and multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US), the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation images. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. This exhibit features recent advances in CT imaging techniques and 3D visualization of hepatobiliary and pancreatic abnormalities, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging.

  2. Research Institute for Advanced Computer Science: Annual Report October 1998 through September 1999

    NASA Technical Reports Server (NTRS)

    Leiner, Barry M.; Gross, Anthony R. (Technical Monitor)

    1999-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center (ARC). It currently operates under a multiple-year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. ARC has been designated NASA's Center of Excellence in Information Technology. In this capacity, ARC is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA ARC and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise and fulfill their missions with minimum direction from Earth. (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to

  3. SciDAC Visualization and Analytics Center for Enabling Technologies

    SciTech Connect

    Bethel, E. Wes; Johnson, Chris; Joy, Ken; Ahern, Sean; Pascucci, Valerio; Childs, Hank; Cohen, Jonathan; Duchaineau, Mark; Hamann, Bernd; Hansen, Charles; Laney, Dan; Lindstrom, Peter; Meredith, Jeremy; Ostrouchov, George; Parker, Steven; Silva, Claudio; Sanderson, Allen; Tricoche, Xavier

    2007-06-30

    The Visualization and Analytics Center for Enabling Technologies (VACET) focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an 'information big bang,' which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision of VACET is to adapt, extend, create when necessary, and deploy visual data analysis solutions that are responsive to the needs of DOE's computational and experimental scientists. Our center is engineered to be directly responsive to those needs and to deliver solutions for use in DOE's large open computing facilities. The research and development directly target data understanding problems provided by our scientific application stakeholders. VACET draws from a diverse set of visualization technology ranging from production quality applications and application frameworks to state-of-the-art algorithms for visualization, analysis, analytics, data manipulation, and data management.

  4. Advanced methods for the computation of particle beam transport and the computation of electromagnetic fields and beam-cavity interactions

    SciTech Connect

    Dragt, A.J.; Gluckstern, R.L.

    1990-11-01

    The University of Maryland Dynamical Systems and Accelerator Theory Group carries out research in two broad areas: the computation of charged particle beam transport using Lie algebraic methods, and advanced methods for the computation of electromagnetic fields and beam-cavity interactions. Important improvements in the state of the art are believed to be possible in both of these areas. In addition, these methods are applied to problems of current interest in accelerator physics, including the theoretical performance of present and proposed high energy machines. The Lie algebraic method of computing and analyzing beam transport handles both linear and nonlinear beam elements. Tests show this method to be superior to the earlier matrix and numerical integration methods. It has wide application to many areas including accelerator physics, intense particle beams, ion microprobes, high resolution electron microscopy, and light optics. With regard to electromagnetic fields and beam-cavity interactions, work is carried out on the theory of beam breakup in single pulses. Work is also done on the analysis of the high-frequency behavior of longitudinal and transverse coupling impedances, including the examination of methods which may be used to measure these impedances. Finally, work is performed on the electromagnetic analysis of coupled cavities and on the coupling of cavities to waveguides.
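
    As background on the Lie algebraic method named above (this is the standard Dragt-Finn factorization associated with this line of work, stated here for the reader rather than quoted from the report), the transfer map of a beam element is written as a product of Lie transformations,

    \mathcal{M} = e^{:f_2:}\, e^{:f_3:}\, e^{:f_4:} \cdots, \qquad :f:\, g = [f, g],

    where each f_n is a homogeneous polynomial of degree n in the phase-space variables and [\,\cdot\,,\,\cdot\,] is the Poisson bracket. The factor e^{:f_2:} reproduces the linear (matrix) optics, the higher-order factors encode nonlinear aberrations, and maps of successive beamline elements are composed by multiplying their factorizations.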

  5. A computational study of advanced exhaust system transition ducts with experimental validation

    NASA Technical Reports Server (NTRS)

    Wu, C.; Farokhi, S.; Taghavi, R.

    1992-01-01

    The current study is an application of CFD to a 'real' design and analysis environment. A subsonic, three-dimensional parabolized Navier-Stokes (PNS) code is used to construct stall margin design charts for optimum-length circular-to-rectangular transition ducts for advanced exhaust systems. Code validation was conducted to examine the accuracy of wall static pressure predictions; the comparison of measured and computed wall static pressures indicates reasonable accuracy of the PNS code results. Computations were conducted on 15 transition ducts covering three area ratios and five aspect ratios. The three area ratios investigated are a constant area ratio of unity, a moderately contracting area ratio of 0.8, and a highly contracting area ratio of 0.5. The degree of mean flow acceleration is identified as the dominant parameter in establishing the minimum duct length requirement. The effect of increasing aspect ratio in the minimum-length transition duct is to increase the length requirement, as well as the mass-averaged total pressure losses. The design guidelines constructed from this investigation may aid in the design and manufacture of advanced exhaust systems for modern fighter aircraft.

  6. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    SciTech Connect

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

    to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line, non-invasive measurement technique based on gamma-ray densitometry (i.e., nuclear gauge densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve the objectives set for the project, the work will use optical probes and gamma-ray computed tomography (CT) (for measurements of the cross-sectional distribution and radial profiles of solids/voidage holdup along the bed height, the spout diameter, and the fountain height) and radioactive particle tracking (RPT) (for measurements of the 3D solids flow field, velocity, turbulence parameters, circulation time, solids Lagrangian trajectories, and many other spouted-bed hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information. The measurements obtained by these techniques will be used as benchmark data to evaluate and validate computational fluid dynamics (CFD) models (two-fluid or discrete particle models) and their closures. The validated CFD models and closures will be used to facilitate the developed methodology for scale-up, design, and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance. Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains

  7. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    NASA Astrophysics Data System (ADS)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013), which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. Eighteen invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, and automated computation in quantum field theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round-table discussions on open source, knowledge sharing, and scientific collaboration stimulated reflection on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS), and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all of the workshop's activities. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang, Institute of High Energy Physics, Chinese Academy of Sciences. Details of committees and sponsors are available in the PDF

  8. CRADA ORNL 91-0046B final report: Assessment of IBM advanced computing architectures

    SciTech Connect

    Geist, G.A.

    1996-02-01

    This was a Cooperative Research and Development Agreement (CRADA) with IBM to assess their advanced computer architectures. Over the course of this project three different architectures were evaluated: the POWER/4 RIOS1-based shared memory multiprocessor, the POWER/2 RIOS2-based high performance workstation, and the J30 PowerPC-based shared memory multiprocessor. In addition to this hardware, several software packages were beta tested for IBM, including the ESSL scientific computing library, the nv video-conferencing package, the Ultimedia multimedia display environment, FORTRAN 90 and C++ compilers, and the AIX 4.1 operating system. Both IBM and ORNL benefited from the research performed in this project, and even though access to the POWER/4 computer was delayed several months, all milestones were met.

  9. Advanced Simulation and Computing FY07-08 Implementation Plan Volume 2

    SciTech Connect

    Kusnezov, D; Hale, A; McCoy, M; Hopson, J

    2006-06-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  10. Advanced Simulation & Computing FY09-FY10 Implementation Plan Volume 2, Rev. 0

    SciTech Connect

    Meisner, R; Perry, J; McCoy, M; Hopson, J

    2008-04-30

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  11. Advanced Simulation and Computing FY10-11 Implementation Plan Volume 2, Rev. 0

    SciTech Connect

    Carnes, B

    2009-06-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  12. Advanced Simulation and Computing FY08-09 Implementation Plan Volume 2 Revision 0

    SciTech Connect

    McCoy, M; Kusnezov, D; Bickel, T; Hopson, J

    2007-04-25

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  13. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    SciTech Connect

    Kissel, L

    2009-04-01

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  14. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  15. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  16. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0

    SciTech Connect

    McCoy, M; Phillips, J; Hopson, J; Meisner, R

    2010-04-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  17. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  18. Further advancements for large area-detector based computed tomography system

    SciTech Connect

    Davis, A. W.; Keating, S. C.; Claytor, T. N.

    2001-01-01

    We present advancements made to a large area-detector-based system for industrial x-ray computed tomography. Past improvements in data acquisition speed were achieved through the use of high-resolution, large-area flat-panel amorphous-silicon (a-Si) detectors. Over several years, these detectors have proven to be a robust alternative to CCD-optics and image-intensifier CT systems; they also provide the advantage of area detection compared with the single-slice geometry of linear-array systems. New advancements in this system include parallel processing of sinogram reconstructions, improved visualization software, and migration to frame-rate a-Si detectors. Parallel processing provides significant speed improvements for data reconstruction and is implemented for parallel-beam, fan-beam, and Feldkamp cone-beam reconstruction algorithms; reconstruction times are reduced by an order of magnitude using a cluster of ten or more equal-speed computers. Advancements in data visualization are made through interactive software that allows interrogation of the full three-dimensional dataset. Inspection examples presented in this paper include an electromechanical device, a nonliving biological specimen, and a press-cast plastic specimen. We also present a commonplace item for the benefit of the layperson.
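
    The slice-parallel reconstruction strategy described above can be sketched as follows: independent sinograms are farmed out to a pool of workers, each running filtered backprojection. This is a minimal stand-in under stated assumptions -- scikit-image's iradon is used as the reconstruction kernel rather than the authors' own codes, and the cluster is modeled by local worker processes.

    ```python
    import numpy as np
    from multiprocessing import Pool
    from skimage.transform import iradon  # stand-in FBP kernel (assumption)

    def reconstruct_slice(sinogram):
        """Filtered backprojection of one parallel-beam sinogram
        (rows = detector bins, columns = projection angles)."""
        theta = np.linspace(0.0, 180.0, sinogram.shape[1], endpoint=False)
        return iradon(sinogram, theta=theta, filter_name="ramp")

    def reconstruct_volume(sinograms, n_workers=10):
        """Farm independent slice reconstructions out to a worker pool,
        mirroring the order-of-magnitude speedup reported for a cluster
        of ten or more equal-speed machines."""
        with Pool(n_workers) as pool:
            return np.stack(pool.map(reconstruct_slice, sinograms))

    if __name__ == "__main__":
        # Synthetic stand-in data: 64 slices, 128 detector bins, 180 angles.
        fake = [np.random.rand(128, 180) for _ in range(64)]
        vol = reconstruct_volume(fake, n_workers=4)
        print(vol.shape)  # (64, 128, 128)
    ```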

  19. Sustaining and Extending the Open Science Grid: Science Innovation on a PetaScale Nationwide Facility (DE-FC02-06ER41436) SciDAC-2 Closeout Report

    SciTech Connect

    Livny, Miron; Shank, James; Ernst, Michael; Blackburn, Kent; Goasguen, Sebastien; Tuts, Michael; Gibbons, Lawrence; Pordes, Ruth; Sliz, Piotr; Deelman, Ewa; Barnett, William; Olson, Doug; McGee, John; Cowles, Robert; Wuerthwein, Frank; Gardner, Robert; Avery, Paul; Wang, Shaowen; Swanson, David

    2015-02-11

    Under this SciDAC-2 grant the project's goal was to stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers, and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. We operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid) and the European Enabling Grids for E-sciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and the computing programs of application communities for the benefit of consortium members and the US national CI.

  20. Recent advances in computer-aided drug design as applied to anti-influenza drug discovery.

    PubMed

    Mallipeddi, Prema L; Kumar, Gyanendra; White, Stephen W; Webb, Thomas R

    2014-01-01

    Influenza is a seasonal and serious health threat, and the recent outbreak of H7N9 following the pandemic spread of H1N1 in 2009 has served to emphasize the importance of anti-influenza drug discovery. Zanamivir (Relenza™) and oseltamivir (Tamiflu®) are two antiviral drugs currently recommended by the CDC for treating influenza. Both are examples of the successful application of structure-based drug design strategies, which have combined computer-based approaches, such as docking- and pharmacophore-based virtual screening, with X-ray crystallographic structural analyses. Docking is a routinely used computational method for identifying potential hits from large compound libraries; it has evolved from simple rigid docking to flexible docking methods that handle receptor flexibility and enhance hit rates in virtual screening. Virtual screening approaches can employ both ligand-based and structure-based pharmacophore models, depending on the available information. The exponential growth in computing power has increasingly facilitated the application of computer-aided methods in drug discovery, and they now play significant roles in the search for novel therapeutics. An overview of these computational tools is presented in this review, and recent advances and challenges are discussed. The focus of the review is anti-influenza drug discovery and how advances in our understanding of viral biology have led to the discovery of novel influenza protein targets. Also discussed are strategies to circumvent the problem of resistance emerging from rapid mutations, which has seriously compromised the efficacy of current anti-influenza therapies.
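
    As a toy illustration of pharmacophore-based screening, the sketch below checks a conformer's feature coordinates against a three-point pharmacophore defined by pairwise distance constraints. The feature labels, target distances, tolerance, and coordinates are all invented for illustration and do not correspond to any influenza target or published model.

    ```python
    import numpy as np

    # Hypothetical 3-point pharmacophore: required pairwise distances
    # (angstroms) between a donor (D), an acceptor (A), and an aromatic
    # ring centroid (R).
    PHARMACOPHORE = {("D", "A"): 5.2, ("D", "R"): 7.0, ("A", "R"): 4.1}
    TOLERANCE = 1.0  # allowed deviation per distance, angstroms

    def matches(features):
        """features: dict mapping feature label -> 3D coordinate array."""
        for (a, b), target in PHARMACOPHORE.items():
            d = np.linalg.norm(features[a] - features[b])
            if abs(d - target) > TOLERANCE:
                return False
        return True

    # Toy conformer with invented coordinates.
    conf = {"D": np.array([0.0, 0.0, 0.0]),
            "A": np.array([5.0, 1.0, 0.5]),
            "R": np.array([6.5, -2.5, 1.0])}
    print("hit" if matches(conf) else "miss")
    ```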

  1. An expanded framework for the advanced computational testing and simulation toolkit

    SciTech Connect

    Marques, Osni A.; Drummond, Leroy A.

    2003-11-09

    The Advanced Computational Testing and Simulation (ACTS) Toolkit is a set of computational tools, developed primarily at DOE laboratories, aimed at simplifying the solution of common and important computational problems. Use of the tools reduces development time for new codes and provides functionality that might not otherwise be available. This document outlines an agenda for expanding the scope of the ACTS Project based on lessons learned from current activities. Highlights of this agenda include peer-reviewed certification of new tools; finding tools to solve problems that are not currently addressed by the Toolkit; working in collaboration with other software initiatives and DOE computer facilities; expanding outreach efforts; promoting interoperability; further development of the tools; and improving the functionality of the ACTS Information Center, among other tasks. The ultimate goal is to make the ACTS tools more widely used and more effective in solving DOE's and the nation's scientific problems through the creation of a reliable software infrastructure for scientific computing.

  2. High Resolution Traction Force Microscopy Based on Experimental and Computational Advances

    PubMed Central

    Sabass, Benedikt; Gardel, Margaret L.; Waterman, Clare M.; Schwarz, Ulrich S.

    2008-01-01

    Cell adhesion and migration crucially depend on the transmission of actomyosin-generated forces through sites of focal adhesion to the extracellular matrix. Here we report experimental and computational advances in improving the resolution and reliability of traction force microscopy. First, we introduce the use of two differently colored nanobeads as fiducial markers in polyacrylamide gels and explain how the displacement field can be computationally extracted from the fluorescence data. Second, we present different improvements regarding standard methods for force reconstruction from the displacement field, which are the boundary element method, Fourier-transform traction cytometry, and traction reconstruction with point forces. Using extensive data simulation, we show that the spatial resolution of the boundary element method can be improved considerably by splitting the elastic field into near, intermediate, and far field. Fourier-transform traction cytometry requires considerably less computer time, but can achieve a comparable resolution only when combined with Wiener filtering or appropriate regularization schemes. Both methods tend to underestimate forces, especially at small adhesion sites. Traction reconstruction with point forces does not suffer from this limitation, but is only applicable with stationary and well-developed adhesion sites. Third, we combine these advances and for the first time reconstruct fibroblast traction with a spatial resolution of ∼1 μm. PMID:17827246
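
    A compact sketch of regularized Fourier-transform traction cytometry in the spirit described above: the Boussinesq displacement-traction kernel for a semi-infinite elastic substrate is inverted per wavevector with zeroth-order Tikhonov regularization. The gel parameters, grid spacing, and regularization weight are assumed for illustration, and the paper's near/intermediate/far-field splitting of the boundary element method is not reproduced here.

    ```python
    import numpy as np

    def fttc(ux, uy, E=10e3, nu=0.5, pix=0.1e-6, lam=1e-9):
        """Regularized FTTC: displacement fields (m) -> traction fields (Pa).

        E, nu : substrate Young's modulus (Pa) and Poisson ratio (assumed)
        pix   : grid spacing of the displacement field (m)
        lam   : Tikhonov regularization parameter (assumed)"""
        ny, nx = ux.shape
        kx = np.fft.fftfreq(nx, d=pix) * 2 * np.pi
        ky = np.fft.fftfreq(ny, d=pix) * 2 * np.pi
        KX, KY = np.meshgrid(kx, ky)
        K = np.hypot(KX, KY)
        K[0, 0] = 1.0  # avoid division by zero; DC term zeroed below

        Ux, Uy = np.fft.fft2(ux), np.fft.fft2(uy)
        pref = 2 * (1 + nu) / (E * K**3)   # Boussinesq kernel prefactor
        Gxx = pref * ((1 - nu) * K**2 + nu * KY**2)
        Gyy = pref * ((1 - nu) * K**2 + nu * KX**2)
        Gxy = -pref * nu * KX * KY

        Tx, Ty = np.empty_like(Ux), np.empty_like(Uy)
        for i in range(ny):            # per-wavevector regularized 2x2 solve
            for j in range(nx):
                G = np.array([[Gxx[i, j], Gxy[i, j]],
                              [Gxy[i, j], Gyy[i, j]]])
                A = G.T @ G + lam**2 * np.eye(2)
                t = np.linalg.solve(A, G.T @ np.array([Ux[i, j], Uy[i, j]]))
                Tx[i, j], Ty[i, j] = t
        Tx[0, 0] = Ty[0, 0] = 0.0      # enforce zero net traction
        return np.fft.ifft2(Tx).real, np.fft.ifft2(Ty).real
    ```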

  3. Computational mechanics - Advances and trends; Proceedings of the Session - Future directions of Computational Mechanics of the ASME Winter Annual Meeting, Anaheim, CA, Dec. 7-12, 1986

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Editor)

    1986-01-01

    The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.

  4. Advanced computer techniques for inverse modeling of electric current in cardiac tissue

    SciTech Connect

    Hutchinson, S.A.; Romero, L.A.; Diegert, C.F.

    1996-08-01

    For many years, ECGs and vector cardiograms have been the tools of choice for non-invasive diagnosis of cardiac conduction problems, such as those found in reentrant tachycardia or Wolff-Parkinson-White (WPW) syndrome. Through skillful analysis of these skin-surface measurements of cardiac-generated electric currents, a physician can deduce the general location of heart conduction irregularities. Using a combination of high-fidelity geometry modeling, advanced mathematical algorithms, and massively parallel computing, Sandia's approach would provide much more accurate information and thus allow the physician to pinpoint the source of an arrhythmia or abnormal conduction pathway.

  5. Unenhanced CT in the evaluation of urinary calculi: application of advanced computer methods.

    PubMed

    Olcott, E W; Sommer, F G

    1999-04-01

    Recent advances in computer hardware and software technology enable radiologists to examine tissues and structures using three-dimensional figures constructed from the multiple planar images acquired during a spiral CT examination. Three-dimensional CT techniques permit the linear dimensions of renal calculi to be determined along all three coordinate axes with a high degree of accuracy and enable direct volumetric analysis of calculi, yielding information that is not available from any other diagnostic modality. Additionally, three-dimensional techniques can help to identify and localize calculi in patients with suspected urinary colic.

  6. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Subcommittee Report on Scientific and Technical Information

    SciTech Connect

    Hey, Tony; Agarwal, Deborah; Borgman, Christine; Cartaro, Concetta; Crivelli, Silvia; Van Dam, Kerstin Kleese; Luce, Richard; Shankar, Arjun; Trefethen, Anne; Wade, Alex; Williams, Dean

    2015-09-04

    The Advanced Scientific Computing Advisory Committee (ASCAC) was charged to form a standing subcommittee to review the Department of Energy's Office of Scientific and Technical Information (OSTI), beginning with an assessment of the quality and effectiveness of OSTI's recent and current products and services and commentary on its mission and future directions in the rapidly changing environment for scientific publication and data. The Committee met with OSTI staff and reviewed available products, services, and other materials. This report summarizes their initial findings and recommendations.

  7. Advanced Computing Technologies for Rocket Engine Propulsion Systems: Object-Oriented Design with C++

    NASA Technical Reports Server (NTRS)

    Bekele, Gete

    2002-01-01

    This document explores the use of advanced computer technologies, with an emphasis on object-oriented design, in the development of software for a rocket engine to improve vehicle safety and reliability. The primary focus is phase one of the project, the smart start sequence module. The objectives are: 1) to apply current, sound software engineering practices, in particular object-orientation; 2) to improve software development time, maintenance, execution, and management; 3) to provide an alternate design choice for control, implementation, and performance.

  8. Computational Models of Exercise on the Advanced Resistance Exercise Device (ARED)

    NASA Technical Reports Server (NTRS)

    Newby, Nate; Caldwell, Erin; Scott-Pandorf, Melissa; Peters, Brian; Fincke, Renita; DeWitt, John; Ploutz-Snyder, Lori

    2011-01-01

    Muscle and bone loss remain a concern for crew returning from space flight. The advanced resistance exercise device (ARED) is used for on-orbit resistance exercise to help mitigate these losses. However, characterization of how the ARED loads the body in microgravity has yet to be determined. Computational models allow us to analyze ARED exercise in both 1G and 0G environments. To this end, biomechanical models of the squat, single-leg squat, and deadlift exercise on the ARED have been developed to further investigate bone and muscle forces resulting from the exercises.

  9. Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0

    SciTech Connect

    McCoy, M.; Archer, B.; Hendrickson, B.

    2015-08-27

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.

  10. Computational methods to extract meaning from text and advance theories of human cognition.

    PubMed

    McNamara, Danielle S

    2011-01-01

    Over the past two decades, researchers have made great advances in the area of computational methods for extracting meaning from text. This research has to a large extent been spurred by the development of latent semantic analysis (LSA), a method for extracting and representing the meaning of words using statistical computations applied to large corpora of text. Since the advent of LSA, researchers have developed and tested alternative statistical methods designed to detect and analyze meaning in text corpora. This research exemplifies how statistical models of semantics play an important role in our understanding of cognition and contribute to the field of cognitive science. Importantly, these models afford large-scale representations of human knowledge and allow researchers to explore various questions regarding knowledge, discourse processing, text comprehension, and language. This topic includes the latest progress by the leading researchers in the endeavor to go beyond LSA.

  11. Computational methods to extract meaning from text and advance theories of human cognition.

    PubMed

    McNamara, Danielle S

    2011-01-01

    Over the past two decades, researchers have made great advances in the area of computational methods for extracting meaning from text. This research has to a large extent been spurred by the development of latent semantic analysis (LSA), a method for extracting and representing the meaning of words using statistical computations applied to large corpora of text. Since the advent of LSA, researchers have developed and tested alternative statistical methods designed to detect and analyze meaning in text corpora. This research exemplifies how statistical models of semantics play an important role in our understanding of cognition and contribute to the field of cognitive science. Importantly, these models afford large-scale representations of human knowledge and allow researchers to explore various questions regarding knowledge, discourse processing, text comprehension, and language. This topic includes the latest progress by the leading researchers in the endeavor to go beyond LSA. PMID:25164173
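
    The SVD step at the core of LSA can be sketched in a few lines: factor a term-document count matrix and keep the top k singular dimensions, so that terms with similar document co-occurrence patterns end up close in the latent space. The toy corpus below is invented for illustration.

    ```python
    import numpy as np

    # Toy term-document count matrix (rows = terms, cols = documents).
    terms = ["ship", "boat", "ocean", "voyage", "trip"]
    X = np.array([[1, 0, 1, 0, 0],
                  [0, 1, 0, 0, 0],
                  [1, 1, 0, 0, 0],
                  [1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 0]], dtype=float)

    # Truncated SVD: keep k latent semantic dimensions.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2
    term_vecs = U[:, :k] * s[:k]   # term representations in latent space

    def cos(a, b):
        """Cosine similarity between two latent-space vectors."""
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Terms that co-occur across documents tend to end up close together.
    print(terms[0], terms[1], round(cos(term_vecs[0], term_vecs[1]), 3))
    print(terms[0], terms[4], round(cos(term_vecs[0], term_vecs[4]), 3))
    ```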

  12. Integrated Computational Materials Engineering (ICME) for Third Generation Advanced High-Strength Steel Development

    SciTech Connect

    Savic, Vesna; Hector, Louis G.; Ezzat, Hesham; Sachdev, Anil K.; Quinn, James; Krupitzer, Ronald; Sun, Xin

    2015-06-01

    This paper presents an overview of a four-year project focused on development of an integrated computational materials engineering (ICME) toolset for third generation advanced high-strength steels (3GAHSS). Following a brief look at ICME as an emerging discipline within the Materials Genome Initiative, the technical tasks of the ICME project are discussed. Specific aims of the individual tasks are multi-scale, microstructure-based material model development using state-of-the-art computational and experimental techniques, forming, toolset assembly, design optimization, integration, and technical cost modeling. The integrated approach is initially illustrated using a 980-grade transformation-induced plasticity (TRIP) steel, subjected to a two-step quenching and partitioning (Q&P) heat treatment, as an example.

  13. A fission matrix based validation protocol for computed power distributions in the advanced test reactor

    SciTech Connect

    Nielsen, J. W.; Nigg, D. W.; LaPorta, A. W.

    2013-07-01

    The Idaho National Laboratory (INL) has been engaged in a significant multi-year effort to modernize the computational reactor physics tools and validation procedures used to support operations of the Advanced Test Reactor (ATR) and its companion critical facility (ATRC). Several new protocols for validation of computed neutron flux distributions and spectra, as well as of computed fission power distributions, based on new experiments and well-recognized least-squares statistical analysis techniques, have been under development. In the case of power distributions, estimates of the a priori ATR-specific element-to-element fission power correlation and covariance matrices are required for validation analysis. A practical method for generating these matrices using the element-to-element fission matrix is presented, along with a high-order scheme for estimating the underlying fission matrix itself. The proposed methodology is illustrated using the MCNP5 neutron transport code for the required neutronics calculations. The general approach is readily adaptable for implementation using any multidimensional stochastic or deterministic transport code that offers the required level of spatial, angular, and energy resolution in the computed solution for the neutron flux and fission source. (authors)
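
    As background to the fission-matrix approach, the sketch below extracts the fundamental-mode fission source (and hence relative element powers) as the dominant eigenvector of an element-to-element fission matrix via power iteration. The 4-element matrix is invented for illustration; the paper's construction of the a priori correlation and covariance matrices is not reproduced here.

    ```python
    import numpy as np

    # Invented 4-element fission matrix: F[i, j] = expected fission neutrons
    # produced in element i per fission neutron born in element j.
    F = np.array([[0.50, 0.20, 0.05, 0.01],
                  [0.20, 0.45, 0.20, 0.05],
                  [0.05, 0.20, 0.45, 0.20],
                  [0.01, 0.05, 0.20, 0.50]])

    def power_iteration(F, tol=1e-10, max_iter=1000):
        """Dominant eigenpair of F: an eigenvalue analogue of k-effective
        and the fundamental-mode fission source shape."""
        s = np.ones(F.shape[0])
        k_old = 0.0
        for _ in range(max_iter):
            s = F @ s
            k = s.sum()        # eigenvalue estimate (source normalized to 1)
            s = s / k
            if abs(k - k_old) < tol:
                break
            k_old = k
        return k, s

    k_eff, source = power_iteration(F)
    print(f"dominant eigenvalue ~ {k_eff:.4f}")
    print("relative element powers:", np.round(source / source.sum(), 3))
    ```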

  14. Recent Advances in VisIt: AMR Streamlines and Query-Driven Visualization

    SciTech Connect

    Weber, Gunther; Ahern, Sean; Bethel, Wes; Borovikov, Sergey; Childs, Hank; Deines, Eduard; Garth, Christoph; Hagen, Hans; Hamann, Bernd; Joy, Kenneth; Martin, David; Meredith, Jeremy; Prabhat,; Pugmire, David; Rubel, Oliver; Van Straalen, Brian; Wu, Kesheng

    2009-11-12

    Adaptive Mesh Refinement (AMR) is a highly effective method for simulations spanning a large range of spatiotemporal scales, such as those encountered in astrophysical simulations. Combining research in novel AMR visualization algorithms with basic infrastructure work, the Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) has extended VisIt, an open-source visualization tool that can handle AMR data without converting it to alternate representations. This paper focuses on two recent advances in the development of VisIt. First, we have developed streamline computation methods that properly handle multi-domain data sets and effectively utilize multiple processors on parallel machines. We are also working on streamline calculation methods that account for the AMR hierarchy, detecting transitions from a coarser patch into a finer patch and improving interpolation at level boundaries. Second, we focus on visualization of large-scale particle data sets. By integrating the DOE Scientific Data Management (SDM) Center's FastBit indexing technology into VisIt, we are able to reduce particle counts effectively by thresholding and by loading from disk only those particles that satisfy the thresholding criteria. Furthermore, FastBit makes it possible to compute parallel-coordinate views efficiently, thus facilitating interactive exploration of massive particle data sets.
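
    The query-driven selection idea can be illustrated without FastBit itself: evaluate a threshold predicate over particle attributes first, then load (here, index) only the qualifying subset before any visualization work begins. This sketch uses plain boolean masks in place of FastBit's bitmap indexes; the attributes and thresholds are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000_000
    # Synthetic particle attributes standing in for a large simulation dump.
    energy = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    px = rng.normal(size=n)

    # Query-driven selection: build the predicate first, then "load"
    # (here: index) only the qualifying particles -- the role FastBit's
    # bitmap indexes play in VisIt.
    mask = (energy > 10.0) & (np.abs(px) < 0.5)
    selected = np.flatnonzero(mask)

    print(f"{selected.size} of {n} particles satisfy the query "
          f"({100 * selected.size / n:.2f}%)")
    ```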

  15. High Performance Computing: Advanced Research Projects Agency Should Do More To Foster Program Goals. Report to the Chairman, Committee on Armed Services, House of Representatives.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    High-performance computing refers to the use of advanced computing technologies to solve highly complex problems in the shortest possible time. The federal High Performance Computing and Communications Initiative of the Advanced Research Projects Agency (ARPA) attempts to accelerate the availability and use of high performance computers and networks.…

  16. Advanced computational sensors technology: testing and evaluation in visible, SWIR, and LWIR imaging

    NASA Astrophysics Data System (ADS)

    Rizk, Charbel G.; Wilson, John P.; Pouliquen, Philippe

    2015-05-01

    The Advanced Computational Sensors Team at the Johns Hopkins University Applied Physics Laboratory and the Johns Hopkins University Department of Electrical and Computer Engineering has been developing advanced readout integrated circuit (ROIC) technology for more than 10 years, with a particular focus on the key challenges of dynamic range, sampling rate, system interface and bandwidth, and detector material or band dependencies. Because the pixel array offers parallel sampling by default, the team demonstrated that adding intelligence at the pixel and chip levels can increase performance significantly: each pixel becomes a smart sensor that can operate independently in collecting, processing, and sharing data. In addition, building on the digital circuit revolution, the effective well size can be increased by orders of magnitude over analog designs within the same pixel pitch. This research has yielded an innovative system-on-chip concept: the Flexible Readout and Integration Sensor (FRIS) architecture. All key parameters are programmable and/or can be adjusted dynamically, and the architecture is potentially sensor- and application-agnostic. This paper reports on the testing and evaluation of one prototype that can support either detector polarity and includes sample results with visible, short-wavelength infrared (SWIR), and long-wavelength infrared (LWIR) imaging.

  17. Proceedings of the workshop on advanced computer technologies and biological sequencing

    SciTech Connect

    Not Available

    1988-11-01

    The participants in the workshop agree that advanced computer technologies will play a significant role in biological sequencing. They suggest a strategy based on the following four recommendations: define a set of model projects and develop a complete set of data management and analysis tools for them; seek to consolidate appropriate databases, while allowing for the flexible development and design of tools that will permit further consolidation, and, in the longer term, develop a coordinated effort that will allow networking of all relevant databases; encourage the development, collection, and distribution of analysis tools; and address user-interface issues and encourage the development of graphics and visualization tools. Section 3 of this report elaborates on each of these recommendations. Section 2 contains the tutorials presented at the workshop and a summary of the comments made in the discussion period following them. These tutorials were an integral part of the workshop: they provided a forum for discussion of the needs of biologists in managing and analyzing biological sequencing data, and of the capabilities of advanced computer technologies in meeting those needs. Also included in Section 2 is an informal paper on fifth-generation technologies prepared by two of the participants. Appendix A contains the documents (edited for grammar) prepared by the participants and groups at the workshop. Appendix B contains the workshop program.

  18. Advanced Simulation and Computing: A Summary Report to the Director's Review

    SciTech Connect

    McCoy, M G; Peck, T

    2003-06-01

    It has now been three years since the Advanced Simulation and Computing Program (ASCI), as managed by the Defense and Nuclear Technologies (DNT) Directorate, was reviewed by this Director's Review Committee (DRC). Since that time, there has been considerable progress in all components of the ASCI Program, and these developments are highlighted in this document and in the presentations planned for June 9 and 10, 2003. There have also been some name changes: today the Program is called "Advanced Simulation and Computing." Although it retains the familiar acronym ASCI, the initiative nature of the effort has given way to sustained services as an integral part of the Stockpile Stewardship Program (SSP). All computing efforts at LLNL and the other two Defense Program (DP) laboratories are funded and managed under ASCI. This includes the so-called legacy codes, which remain essential tools in stockpile stewardship. The contract between the Department of Energy (DOE) and the University of California (UC) specifies an independent appraisal of Directorate technical work and programmatic management; that appraisal is the work of this DNT Review Committee. Beginning this year, the Laboratory is implementing a new review system, negotiated between UC, the National Nuclear Security Administration (NNSA), and the Laboratory Directors. Central to this approach are eight performance objectives that focus on key programmatic and administrative goals. Associated with each of these objectives are a number of performance measures that more clearly characterize attainment of the objectives. Each performance measure has a lead directorate and one or more contributing directorates, an evaluation plan, and identified documentation expected to be included in the "Assessment File".

  19. Recent advances in computational methodology for simulation of mechanical circulatory assist devices

    PubMed Central

    Marsden, Alison L.; Bazilevs, Yuri; Long, Christopher C.; Behr, Marek

    2014-01-01

    Ventricular assist devices (VADs) provide mechanical circulatory support to offload the work of one or both ventricles during heart failure. They are used in the clinical setting as destination therapy, as a bridge to transplant, or, more recently, as a bridge to recovery to allow for myocardial remodeling. Recent developments in computational simulation allow detailed assessment of VAD hemodynamics for device design and optimization for both children and adults. Here, we provide a focused review of the recent literature on finite element methods and optimization for VAD simulations. As VAD designs typically fall into two categories, pulsatile and continuous-flow devices, we separately address the computational challenges of both types of design and their interaction with the circulatory system, with three representative case studies. In particular, we focus on recent advancements in finite element methodology that have increased the fidelity of VAD simulations. We outline key challenges, which extend to the incorporation of biological responses such as thrombosis and hemolysis, as well as shape optimization methods and challenges in computational methodology. PMID:24449607

  20. Advanced Computational Modeling of Vapor Deposition in a High-Pressure Reactor

    NASA Technical Reports Server (NTRS)

    Cardelino, Beatriz H.; Moore, Craig E.; McCall, Sonya D.; Cardelino, Carlos A.; Dietz, Nikolaus; Bachmann, Klaus

    2004-01-01

    In the search for novel approaches to produce new materials for electro-optic technologies, advances have been achieved in the development of computer models for vapor deposition reactors in space. Numerical simulations are invaluable tools for costly and difficult processes, such as experiments designed for high pressures and microgravity conditions. Indium nitride is a candidate compound for high-speed laser and photodiodes for optical communication systems, as well as for semiconductor lasers operating into the blue and ultraviolet regions. However, InN and other nitride compounds exhibit large thermal decomposition at their optimum growth temperatures. In addition, epitaxy at lower temperatures and subatmospheric pressures incorporates indium droplets into the InN films. Surface stabilization data indicate that InN could be grown at 900 K under high nitrogen pressure, and microgravity could provide laminar flow conditions. Numerical models for chemical vapor deposition have been developed, coupling complex chemical kinetics with fluid dynamic properties.

  1. Advanced Computational Modeling of Vapor Deposition in a High-pressure Reactor

    NASA Technical Reports Server (NTRS)

    Cardelino, Beatriz H.; Moore, Craig E.; McCall, Sonya D.; Cardelino, Carlos A.; Dietz, Nikolaus; Bachmann, Klaus

    2004-01-01

    In the search for novel approaches to produce new materials for electro-optic technologies, advances have been achieved in the development of computer models for vapor deposition reactors in space. Numerical simulations are invaluable tools for costly and difficult processes, such as experiments designed for high pressures and microgravity conditions. Indium nitride is a candidate compound for high-speed laser and photodiodes for optical communication systems, as well as for semiconductor lasers operating into the blue and ultraviolet regions. However, InN and other nitride compounds exhibit large thermal decomposition at their optimum growth temperatures. In addition, epitaxy at lower temperatures and subatmospheric pressures incorporates indium droplets into the InN films. Surface stabilization data indicate that InN could be grown at 900 K under high nitrogen pressure, and microgravity could provide laminar flow conditions. Numerical models for chemical vapor deposition have been developed, coupling complex chemical kinetics with fluid dynamic properties.

  2. Advanced computational tools for optimization and uncertainty quantification of carbon capture processes

    SciTech Connect

    Miller, David C.; Ng, Brenda; Eslick, John

    2014-01-01

    Advanced multi-scale modeling and simulation has the potential to dramatically reduce development time, resulting in considerable cost savings. The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and universities that is developing, demonstrating, and deploying a suite of multi-scale modeling and simulation tools. One significant computational tool is FOQUS, a Framework for Optimization and Quantification of Uncertainty and Sensitivity, which enables basic data submodels, including thermodynamics and kinetics, to be used within detailed process models to rapidly synthesize and optimize a process and determine the level of uncertainty associated with the resulting process. The overall approach of CCSI is described with a more detailed discussion of FOQUS and its application to carbon capture systems.
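
    A generic sketch of the kind of uncertainty propagation a framework like FOQUS automates: sample uncertain submodel parameters and push them through a process model by Monte Carlo, then summarize the resulting distribution. The toy cost model, parameter distributions, and all numbers below are invented and merely stand in for CCSI's detailed process models.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Invented uncertain submodel parameters (illustrative only):
    dH = rng.normal(65.0, 3.0, n)              # absorption enthalpy, kJ/mol
    k0 = rng.lognormal(np.log(1e-3), 0.2, n)   # rate constant, 1/s

    def capture_cost(dH, k0):
        """Toy process model: cost per tonne CO2 as a made-up function
        of regeneration energy and absorption kinetics."""
        regen_energy = 0.8 * dH        # GJ/tonne, invented scaling
        column_size = 1.0 / k0         # larger column for slower kinetics
        return 2.5 * regen_energy + 0.02 * column_size

    cost = capture_cost(dH, k0)
    print(f"mean cost: {cost.mean():.1f}  "
          f"95% interval: [{np.percentile(cost, 2.5):.1f}, "
          f"{np.percentile(cost, 97.5):.1f}]")
    ```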

  3. Development of an Advanced Computational Model for OMCVD of Indium Nitride

    NASA Technical Reports Server (NTRS)

    Cardelino, Carlos A.; Moore, Craig E.; Cardelino, Beatriz H.; Zhou, Ning; Lowry, Sam; Krishnan, Anantha; Frazier, Donald O.; Bachmann, Klaus J.

    1999-01-01

    An advanced computational model is being developed to predict the formation of indium nitride (InN) film from the reaction of trimethylindium (In(CH3)3) with ammonia (NH3). The components are introduced into the reactor in the gas phase within a background of molecular nitrogen (N2), and organometallic chemical vapor deposition occurs on a heated sapphire surface. The model simulates heat and mass transport with gas and surface chemistry under steady-state and pulsed conditions. The development and validation of an accurate model of the interaction between gas-phase species diffusion and surface kinetics is essential for regulating the process to produce low-defect material. Validation of the model will be performed in concert with a NASA-North Carolina State University project.

  4. Design and experimental analysis of an advanced static VAR compensator with computer aided control.

    PubMed

    Irmak, Erdal; Bayındır, Ramazan; Köse, Ali

    2016-09-01

    This study presents the integration of a real-time energy monitoring and control system with an advanced reactive power compensation unit based on a fixed-capacitor, thyristor-controlled reactor (FC-TCR). Firing angles of the thyristors in the FC-TCR are controlled by a microcontroller in order to keep the power factor within limits. Electrical parameters of the system are measured by specially designed circuits and simultaneously transferred to a computer via a data acquisition board, so real-time data can be observed through a visual user interface. The data obtained are not only analyzed for the control process but also regularly saved to a database. The system has been tested in laboratory conditions under different load characteristics, and experimental results verified that it achieves compensation successfully and accurately under all operating conditions.
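
    The firing-angle control rests on the standard fundamental-frequency relation for a TCR, B(α) = (2(π − α) + sin 2α)/(π X_L) for π/2 ≤ α ≤ π, so the net compensator reactive power is Q = V²(B_C − B(α)). A minimal sketch, with invented circuit values (V, XL, BC below are assumptions, not the paper's hardware), solves numerically for the firing angle that delivers a target Q:

    ```python
    import numpy as np

    V = 400.0   # line voltage, V (illustrative)
    XL = 10.0   # TCR reactor reactance, ohms (illustrative)
    BC = 0.08   # fixed-capacitor susceptance, siemens (illustrative)

    def b_tcr(alpha):
        """Fundamental-frequency TCR susceptance vs firing angle
        (alpha in radians, pi/2 <= alpha <= pi)."""
        return (2 * (np.pi - alpha) + np.sin(2 * alpha)) / (np.pi * XL)

    def firing_angle_for_q(q_target, tol=1e-9):
        """Bisection for the alpha giving net compensator reactive power
        Q = V^2 * (BC - b_tcr(alpha)); Q rises monotonically with alpha."""
        lo, hi = np.pi / 2, np.pi
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            q = V**2 * (BC - b_tcr(mid))
            if q < q_target:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        return 0.5 * (lo + hi)

    alpha = firing_angle_for_q(5_000.0)  # demand 5 kvar capacitive
    print(f"firing angle: {np.degrees(alpha):.2f} degrees")
    ```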

  5. Building a Universal Nuclear Energy Density Functional (UNEDF): SciDAC-2 Project

    SciTech Connect

    Carlson, Joe; Furnstahl, Dick; Lusk, Rusty; Nazarewicz, Witek; Ng, Esmond; Thompson, Ian; Vary, James

    2012-06-30

    An understanding of the properties of atomic nuclei is crucial for a complete nuclear theory, for element formation, for properties of stars, and for present and future energy and defense applications. During the period of Dec. 1, 2006 - Jun. 30, 2012, the UNEDF collaboration carried out a comprehensive study of all nuclei based on the most accurate knowledge of the strong nuclear interaction, the most reliable theoretical approaches, the most advanced algorithms, and extensive computational resources, with a view towards scaling to the petaflop platforms and beyond. The long-term vision initiated with UNEDF is to arrive at a comprehensive, quantitative, and unified description of nuclei and their reactions, grounded in the fundamental interactions between the constituent nucleons. We seek to replace current phenomenological models of nuclear structure and reactions with a well-founded microscopic theory that delivers maximum predictive power with well-quantified uncertainties. Specifically, the mission of this project has been three-fold: first, to find an optimal energy density functional (EDF) using all our knowledge of the nucleonic Hamiltonian and basic nuclear properties; second, to apply the EDF theory and its extensions to validate the functional using all the available relevant nuclear structure and reaction data; and third, to apply the validated theory to properties of interest that cannot be measured, in particular the properties needed for reaction theory.

  6. A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot-end and cold-end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including the use of multi-dimensional numerical models. Validation test hardware has also been used to provide a direct comparison of numerical results and validate the multi-dimensional numerical models used to predict convertor net heat input and efficiency. These validation tests were designed to simulate the temperature profile of an operating Stirling convertor and resulted in a measured net heat input of 244.4 W. The methodology was applied to the multi-dimensional numerical model, which resulted in a net heat input of 240.3 W, a value 1.7 percent less than that measured during laboratory testing. The resulting computational methodology and results are discussed.

  7. Current Advances in the Computational Simulation of the Formation of Low-Mass Stars

    SciTech Connect

    Klein, R I; Inutsuka, S; Padoan, P; Tomisaka, K

    2005-10-24

    Developing a theory of low-mass star formation (~0.1 to 3 solar masses) remains one of the most elusive and important goals of theoretical astrophysics. The star-formation process is the outcome of the complex dynamics of interstellar gas involving non-linear interactions of turbulence, gravity, magnetic fields and radiation. The evolution of protostellar condensations, from the moment they are assembled by turbulent flows to the time they reach stellar densities, spans an enormous range of scales, resulting in a major computational challenge for simulations. Since the previous Protostars and Planets conference, dramatic advances in numerical algorithmic techniques have been made and successfully implemented on large-scale parallel supercomputers. Among such techniques, Adaptive Mesh Refinement and Smoothed Particle Hydrodynamics have provided frameworks to simulate the process of low-mass star formation with a very large dynamic range. It is now feasible to explore the turbulent fragmentation of molecular clouds and the gravitational collapse of cores into stars self-consistently within the same calculation. The increased sophistication of these powerful methods comes with substantial caveats associated with the use of the techniques and the interpretation of the numerical results. In this review, we examine what has been accomplished in the field and present a critique of both numerical methods and scientific results. We stress that computational simulations should obey the available observational constraints and demonstrate numerical convergence. Failing this, results of large-scale simulations do not advance our understanding of low-mass star formation.

  8. Identifying human disease genes: advances in molecular genetics and computational approaches.

    PubMed

    Bakhtiar, S M; Ali, A; Baig, S M; Barh, D; Miyoshi, A; Azevedo, V

    2014-07-04

    The human genome project is one of the significant achievements that have provided detailed insight into our genetic legacy. During the last two decades, biomedical investigations have gathered a considerable body of evidence by detecting more than 2000 disease genes. Despite the imperative advances in the genetic understanding of various diseases, the pathogenesis of many others remains obscure. With recent advances, the laborious methodologies used to identify DNA variations have been replaced by direct sequencing of genomic DNA to detect genetic changes. The ability to perform such studies depends equally on the development of high-throughput and economical genotyping methods. Currently, for essentially every disease whose origin is still unknown, genetic approaches are available, which may be pedigree-dependent or -independent, with the capacity to elucidate fundamental disease mechanisms. Computer algorithms and programs for linkage analysis have formed the foundation for many disease gene detection projects; similarly, databases of clinical findings have been widely used to support diagnostic decisions in dysmorphology and general human disease. For every disease type, genome sequence variations, particularly single nucleotide polymorphisms, are mapped by comparing the genetic makeup of case and control groups. Methods that predict the effects of polymorphisms on protein stability are useful for the identification of possible disease associations, whereas structural effects can be assessed using methods to predict stability changes in proteins using sequence and/or structural information.
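    In its simplest form, the case/control comparison described above reduces to an allele-count association test; a minimal sketch with invented counts:

      # Minimal case/control SNP association sketch (counts are hypothetical).
      from scipy.stats import chi2_contingency

      # Rows: cases, controls; columns: allele A count, allele a count.
      table = [[412, 388],   # cases    -- invented counts
               [353, 447]]   # controls -- invented counts
      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.2f}, p = {p:.3g}")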

  9. Computational fluid dynamics in the design and analysis of thermal processes: a review of recent advances.

    PubMed

    Norton, Tomás; Tiwari, Brijesh; Sun, Da Wen

    2013-01-01

    The design of thermal processes in the food industry has undergone great developments in the last two decades due to the availability of cheap computer power alongside advanced modelling techniques such as computational fluid dynamics (CFD). CFD uses numerical algorithms to solve the non-linear partial differential equations of fluid mechanics and heat transfer so that the complex mechanisms that govern many food-processing systems can be resolved. In thermal processing applications, CFD can be used to build three-dimensional models that are both spatially and temporally representative of a physical system, producing solutions with high levels of physical realism without the heavy costs associated with experimental analyses. CFD is therefore playing an ever-growing role in the optimization of conventional thermal processes as well as the development of new ones in the food industry. This paper discusses the fundamental aspects involved in developing CFD solutions and forms a state-of-the-art review of various CFD applications in conventional as well as novel thermal processes. The challenges facing CFD modellers of thermal processes are also discussed. From this review it is evident that present-day CFD software, with its rich tapestry of mathematical physics, numerical methods and visualization techniques, is recognized as a formidable and pervasive technology which permits comprehensive analyses of thermal processing.
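    The simplest building block of such heat-transfer models is transient conduction; a one-dimensional explicit finite-difference sketch (geometry and material properties assumed for illustration, not taken from the review):

      # Explicit FTCS scheme for 1-D transient heat conduction in a slab
      # (a food-like solid; all values are illustrative).
      import numpy as np

      alpha = 1.4e-7       # thermal diffusivity, m^2/s  -- assumed
      L, nx = 0.02, 51     # 2 cm slab, grid points      -- assumed
      dx = L / (nx - 1)
      dt = 0.4 * dx**2 / alpha   # respects the stability limit dt <= dx^2/(2 alpha)
      steps = 2000

      T = np.full(nx, 20.0)      # initial temperature, deg C
      T[0] = T[-1] = 90.0        # hot boundaries (e.g., heating medium)

      for _ in range(steps):
          T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

      print(f"centre temperature after {steps * dt:.0f} s: {T[nx // 2]:.1f} deg C")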

  10. Computational and experimental advances in drug repositioning for accelerated therapeutic stratification.

    PubMed

    Shameer, Khader; Readhead, Ben; Dudley, Joel T

    2015-01-01

    Drug repositioning is an important component of therapeutic stratification in the precision medicine paradigm. Molecular profiling and more sophisticated analysis of longitudinal clinical data are refining definitions of human diseases, creating needs and opportunities to re-target or reposition approved drugs for alternative indications. Drug repositioning studies have demonstrated success in complex diseases requiring improved therapeutic interventions as well as orphan diseases without any known treatments. An increasing collection of available computational and experimental methods that leverage molecular and clinical data enable diverse drug repositioning strategies. Integration of translational bioinformatics resources, statistical methods, chemoinformatics tools and experimental techniques (including medicinal chemistry techniques) can enable the rapid application of drug repositioning on an increasingly broad scale. Efficient tools are now available for systematic drug-repositioning methods using large repositories of compounds with biological activities. Medicinal chemists along with other translational researchers can play a key role in various aspects of drug repositioning. In this review article, we briefly summarize the history of drug repositioning, explain concepts behind drug repositioning methods, discuss recent computational and experimental advances and highlight available open access resources for effective drug repositioning investigations. We also discuss recent approaches to utilizing electronic health records for outcome assessment of drug repositioning, and future avenues of drug repositioning in the light of targeting disease comorbidities, underserved patient communities, individualized medicine and socioeconomic impact.

  11. Advanced display object selection methods for enhancing user-computer productivity

    NASA Technical Reports Server (NTRS)

    Osga, Glenn A.

    1993-01-01

    The User-Interface Technology Branch at NCCOSC RDT&E Division has been conducting a series of studies to address the suitability of commercial off-the-shelf (COTS) graphic user-interface (GUI) methods for efficiency and performance in critical naval combat systems. This paper presents an advanced selection algorithm and method developed to increase user performance when making selections on tactical displays. The method has also been applied with considerable success to a variety of cursor and pointing tasks. Typical GUIs allow user selection by: (1) moving a cursor with a pointing device such as a mouse, trackball, joystick, or touchscreen; and (2) placing the cursor on the object. Examples of GUI objects are the buttons, icons, folders, scroll bars, etc. used in many personal computer and workstation applications. This paper presents an improved method of selection and the theoretical basis for the significant performance gains achieved with various input devices tested. The method is applicable to all GUI styles and display sizes, and is particularly useful for selections on small screens such as notebook computers. Considering the amount of work-hours spent pointing and clicking across all styles of available graphic user-interfaces, the cost/benefit in applying this method to graphic user-interfaces is substantial, with the potential for increasing productivity across thousands of users and applications.
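    The paper does not spell out its algorithm here, but a proximity-based selection rule of the general kind evaluated in such studies can be sketched as follows (an illustrative reconstruction, not the published method):

      # Nearest-object selection: rather than requiring the cursor to land
      # inside an object's bounds, pick the closest selectable object within
      # a capture radius. Names and coordinates are invented.
      import math

      def select_object(cursor, objects, capture_radius=40.0):
          """cursor: (x, y); objects: dict name -> (x, y) centre in pixels."""
          best, best_d = None, capture_radius
          for name, (ox, oy) in objects.items():
              d = math.hypot(cursor[0] - ox, cursor[1] - oy)
              if d < best_d:
                  best, best_d = name, d
          return best   # None if nothing is close enough

      icons = {"radar": (100, 120), "track-42": (210, 90), "menu": (400, 300)}
      print(select_object((205, 100), icons))   # -> 'track-42'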

  12. ABrIL - Advanced Brain Imaging Lab : a cloud based computation environment for cooperative neuroimaging projects.

    PubMed

    Neves Tafula, Sérgio M; Moreira da Silva, Nádia; Rozanski, Verena E; Silva Cunha, João Paulo

    2014-01-01

    Neuroscience is an increasingly multidisciplinary and highly cooperative field in which neuroimaging plays an important role. The rapid evolution of neuroimaging demands a growing number of computing resources and skills that need to be put in place at every lab. Typically, each group tries to set up its own servers and workstations to support its neuroimaging needs, having to learn everything from operating system management to the details of specific neuroscience software tools before any results can be obtained from each setup. This setup and learning process is replicated in every lab, even when a strong collaboration among several groups is going on. In this paper we present a new cloud service model, Brain Imaging Application as a Service (BiAaaS), and one of its implementations, the Advanced Brain Imaging Lab (ABrIL), in the form of a ubiquitous remote virtual desktop infrastructure that offers a set of neuroimaging computational services in an interactive, neuroscientist-friendly graphical user interface (GUI). This remote desktop has been used for several multi-institution cooperative projects with different neuroscience objectives that have already achieved important results, such as the contribution to a high-impact paper published in the January issue of the NeuroImage journal. The ABrIL system has shown its applicability in several neuroscience projects at a relatively low cost, promoting truly collaborative actions and speeding up project results and their clinical applicability.

  13. Current advances in molecular, biochemical, and computational modeling analysis of microalgal triacylglycerol biosynthesis.

    PubMed

    Lenka, Sangram K; Carbonaro, Nicole; Park, Rudolph; Miller, Stephen M; Thorpe, Ian; Li, Yantao

    2016-01-01

    Triacylglycerols (TAGs) are highly reduced energy storage molecules ideal for biodiesel production. Microalgal TAG biosynthesis has been studied extensively in recent years, both at the molecular level and systems level through experimental studies and computational modeling. However, discussions of the strategies and products of the experimental and modeling approaches are rarely integrated and summarized together in a way that promotes collaboration among modelers and biologists in this field. In this review, we outline advances toward understanding the cellular and molecular factors regulating TAG biosynthesis in unicellular microalgae with an emphasis on recent studies on rate-limiting steps in fatty acid and TAG synthesis, while also highlighting new insights obtained from the integration of multi-omics datasets with mathematical models. Computational methodologies such as kinetic modeling, metabolic flux analysis, and new variants of flux balance analysis are explained in detail. We discuss how these methods have been used to simulate algae growth and lipid metabolism in response to changing culture conditions and how they have been used in conjunction with experimental validations. Since emerging evidence indicates that TAG synthesis in microalgae operates through coordinated crosstalk between multiple pathways in diverse subcellular destinations including the endoplasmic reticulum and plastids, we discuss new experimental studies and models that incorporate these findings for discovering key regulatory checkpoints. Finally, we describe tools for genetic manipulation of microalgae and their potential for future rational algal strain design. This comprehensive review explores the potential synergistic impact of pathway analysis, computational approaches, and molecular genetic manipulation strategies on improving TAG production in microalgae.
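    Of the methodologies mentioned, flux balance analysis is the easiest to sketch: maximize a biomass flux subject to steady-state mass balance and flux bounds. The three-reaction network below is a toy, not a real algal model:

      # Minimal flux balance analysis (FBA) sketch as a linear program.
      import numpy as np
      from scipy.optimize import linprog

      # Columns: v_uptake, v_biomass, v_byproduct; one internal metabolite.
      S = np.array([[1.0, -1.0, -1.0]])
      bounds = [(0, 10.0),   # substrate uptake capped at 10 mmol/gDW/h -- assumed
                (0, None),
                (0, None)]
      c = [0.0, -1.0, 0.0]   # linprog minimizes, so negate the biomass flux

      res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds)
      print("optimal fluxes:", res.x)   # biomass flux hits the uptake limit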

  14. Temporality Matters: Advancing a Method for Analyzing Problem-Solving Processes in a Computer-Supported Collaborative Environment

    ERIC Educational Resources Information Center

    Kapur, Manu

    2011-01-01

    This paper argues for a need to develop methods for examining temporal patterns in computer-supported collaborative learning (CSCL) groups. It advances one such quantitative method--Lag-sequential Analysis (LsA)--and instantiates it in a study of problem-solving interactions of collaborative groups in an online, synchronous environment. LsA…
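    The core of a lag-1 sequential analysis is a table of transition probabilities between coded events; a minimal sketch with an invented coding scheme:

      # Lag-1 sequential analysis sketch: how often one coded event follows
      # another in a group's interaction sequence (codes are invented).
      from collections import Counter
      from itertools import pairwise   # Python 3.10+

      seq = ["Q", "A", "A", "E", "Q", "A", "E", "E", "Q", "A"]   # coded turns
      pairs = Counter(pairwise(seq))
      totals = Counter(seq[:-1])

      for (a, b), n in sorted(pairs.items()):
          print(f"P({b} | {a}) = {n / totals[a]:.2f}")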

  15. Using an Advanced Computational Laboratory Experiment to Extend and Deepen Physical Chemistry Students' Understanding of Atomic Structure

    ERIC Educational Resources Information Center

    Hoffman, Gary G.

    2015-01-01

    A computational laboratory experiment is described, which involves the advanced study of an atomic system. The students use concepts and techniques typically covered in a physical chemistry course but extend those concepts and techniques to more complex situations. The students get a chance to explore the study of atomic states and perform…

  16. Computer experiments on periodic systems identification using rotor blade transient flapping-torsion responses at high advance ratio

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Prelewicz, D. A.

    1974-01-01

    Systems identification methods have recently been applied to rotorcraft to estimate stability derivatives from transient flight control response data. While these applications assumed a linear constant-coefficient representation of the rotorcraft, the computer experiments described in this paper used transient responses in flap-bending and torsion of a rotor blade at high advance ratio, which constitutes a rapidly time-varying periodic system.
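    The identification idea can be sketched for the simplest constant-coefficient case the abstract contrasts against: fit the coefficient of an assumed linear model to a noisy transient response by least squares (the system and noise level below are invented):

      # Least-squares parameter estimation from a simulated transient.
      import numpy as np

      rng = np.random.default_rng(1)
      dt, n = 0.01, 500
      a_true = -2.0                      # "true" decay coefficient -- assumed

      # Simulate x' = a x with measurement noise.
      t = np.arange(n) * dt
      x = np.exp(a_true * t) + 0.01 * rng.standard_normal(n)

      # Estimate a from finite-difference derivatives: x' ~ a x.
      xdot = np.gradient(x, dt)
      a_hat = np.linalg.lstsq(x[:, None], xdot, rcond=None)[0][0]
      print(f"estimated a = {a_hat:.2f} (true {a_true})")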

  17. ISAAC: An Introduction to IBM's Information System for Advanced Academic Computing at the University of Washington-Seattle.

    ERIC Educational Resources Information Center

    Hernandez, Nicolas, Jr.

    1988-01-01

    Traces the origin of ISAAC (Information System for Advanced Academic Computing) and the development of a languages and linguistics "room" at the University of Washington-Seattle. ISAAC, a free, valuable resource, consists of two databases and an electronic bulletin board spanning broad areas of pedagogical and research fields. (Author/CB)

  18. Improved computational neutronics methods and validation protocols for the advanced test reactor

    SciTech Connect

    Nigg, D. W.; Nielsen, J. W.; Chase, B. M.; Murray, R. K.; Steuhm, K. A.; Unruh, T.

    2012-07-01

    The Idaho National Laboratory (INL) is in the process of updating the various reactor physics modeling and simulation tools used to support operation and safety assurance of the Advanced Test Reactor (ATR). Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purposes. On the experimental side of the project, new hardware was fabricated, measurement protocols were finalized, and the first four of six planned physics code validation experiments based on neutron activation spectrometry have been conducted at the ATRC facility. Data analysis for the first three experiments, focused on characterization of the neutron spectrum in one of the ATR flux traps, has been completed. The six experiments will ultimately form the basis for flexible and repeatable ATR physics code validation protocols that are consistent with applicable national standards. (authors)

  19. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1976-01-01

    Results of a study of the development of flutter modules applicable to automated structural design of advanced aircraft configurations, such as a supersonic transport, are presented. Automated structural design is restricted to automated sizing of the elements of a given structural model. It includes a flutter optimization procedure; i.e., a procedure for arriving at a structure with minimum mass for satisfying flutter constraints. Methods of solving the flutter equation and computing the generalized aerodynamic force coefficients in the repetitive analysis environment of a flutter optimization procedure are studied, and recommended approaches are presented. Five approaches to flutter optimization are explained in detail and compared. An approach to flutter optimization incorporating some of the methods discussed is presented. Problems related to flutter optimization in a realistic design environment are discussed and an integrated approach to the entire flutter task is presented. Recommendations for further investigations are made. Results of numerical evaluations, applying the five methods of flutter optimization to the same design task, are presented.

  1. IMPROVED COMPUTATIONAL NEUTRONICS METHODS AND VALIDATION PROTOCOLS FOR THE ADVANCED TEST REACTOR

    SciTech Connect

    David W. Nigg; Joseph W. Nielsen; Benjamin M. Chase; Ronnie K. Murray; Kevin A. Steuhm

    2012-04-01

    The Idaho National Laboratory (INL) is in the process of modernizing the various reactor physics modeling and simulation tools used to support operation and safety assurance of the Advanced Test Reactor (ATR). Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009 was successfully completed during 2011. This demonstration supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR fuel cycle management process beginning in 2012. On the experimental side of the project, new hardware was fabricated, measurement protocols were finalized, and the first four of six planned physics code validation experiments based on neutron activation spectrometry were conducted at the ATRC facility. Data analysis for the first three experiments, focused on characterization of the neutron spectrum in one of the ATR flux traps, has been completed. The six experiments will ultimately form the basis for a flexible, easily-repeatable ATR physics code validation protocol that is consistent with applicable ASTM standards.

  2. Evaluation of Computed Tomography of Mock Uranium Fuel Rods at the Advanced Photon Source

    DOE PAGES

    Hunter, James F.; Brown, Donald William; Okuniewski, Maria

    2015-06-01

    This study discusses a multi-year effort to evaluate the utility of computed tomography at the Advanced Photon Source (APS) as a tool for non-destructive evaluation of uranium-based fuel rods. The majority of the data presented is on mock material made with depleted uranium, which mimics the x-ray attenuation characteristics of fuel rods while allowing for simpler handling. A range of data is presented, including full-thickness (5 mm diameter) fuel rodlets, reduced-thickness (1.8 mm) sintering test samples, and pre/post-irradiation samples (< 1 mm thick). These data were taken on both a white-beam (bending magnet) beamline and a high-energy, monochromatic beamline. The data show the utility of a synchrotron-type source in the evaluation of manufacturing defects (pre-irradiation) and lay out the case for in situ CT of fuel pellet sintering. Finally, data are shown from small post-irradiation samples and a case is made for post-irradiation CT of larger samples.

  3. Evaluation of Computed Tomography of Mock Uranium Fuel Rods at the Advanced Photon Source

    SciTech Connect

    Hunter, James F.; Brown, Donald William; Okuniewski, Maria

    2015-06-01

    This study discusses a multi-year effort to evaluate the utility of computed tomography at the Advanced Photon Source (APS) as a tool for non-destructive evaluation of uranium-based fuel rods. The majority of the data presented is on mock material made with depleted uranium, which mimics the x-ray attenuation characteristics of fuel rods while allowing for simpler handling. A range of data is presented, including full-thickness (5 mm diameter) fuel rodlets, reduced-thickness (1.8 mm) sintering test samples, and pre/post-irradiation samples (< 1 mm thick). These data were taken on both a white-beam (bending magnet) beamline and a high-energy, monochromatic beamline. The data show the utility of a synchrotron-type source in the evaluation of manufacturing defects (pre-irradiation) and lay out the case for in situ CT of fuel pellet sintering. Finally, data are shown from small post-irradiation samples and a case is made for post-irradiation CT of larger samples.

  4. Vector Field Visual Data Analysis Technologies for Petascale Computational Science

    SciTech Connect

    Garth, Christoph; Deines, Eduard; Joy, Kenneth I.; Bethel, E. Wes; Childs, Hank; Weber, Gunther; Ahern, Sean; Pugmire, Dave; Sanderson, Allen; Johnson, Chris

    2009-11-13

    State-of-the-art computational science simulations generate large-scale vector field data sets. Visualization and analysis is a key aspect of obtaining insight into these data sets and represents an important challenge. This article discusses possibilities and challenges of modern vector field visualization and focuses on methods and techniques developed in the SciDAC Visualization and Analytics Center for Enabling Technologies (VACET) and deployed in the open-source visualization tool, VisIt.

  5. Observations on computational methodologies for use in large-scale, gradient-based, multidisciplinary design incorporating advanced CFD codes

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.

    1992-01-01

    How a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based optimized multidisciplinary design (MdD) procedures is briefly outlined. Implications of these MdD requirements upon advanced CFD codes are somewhat different from those imposed by single-discipline design. A means for satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.

  6. Recent Advances and Issues in Computers. Oryx Frontiers of Science Series.

    ERIC Educational Resources Information Center

    Gay, Martin K.

    Discussing recent issues in computer science, this book contains 11 chapters covering: (1) developments that have the potential for changing the way computers operate, including microprocessors, mass storage systems, and computing environments; (2) the national computational grid for high-bandwidth, high-speed collaboration among scientists, and…

  7. New directions in scientific computing: impact of advances in microprocessor architecture and system design.

    PubMed

    Malyj, W; Smith, R E; Horowitz, J M

    1984-01-01

    The new generation of microcomputers has brought computing power previously restricted to mainframe and supermini computers within the reach of individual scientific laboratories. Microcomputers can now provide computing speeds rivaling mainframes and computational accuracies exceeding those available in most computer centers. Inexpensive memory makes possible the transfer to microcomputers of software packages developed for mainframes and tested by years of experience. Combinations of high level languages and assembler subroutines permit the efficient design of specialized applications programs. Microprocessor architecture is approaching that of superminis, with coprocessors providing major contributions to computing power. The combined result of these developments is a major and perhaps revolutionary increase in the computing power now available to scientists.

  8. ENHANCING THE ATOMIC-LEVEL UNDERSTANDING OF CO2 MINERAL SEQUESTRATION MECHANISMS VIA ADVANCED COMPUTATIONAL MODELING

    SciTech Connect

    A.V.G. Chizmeshya

    2003-12-19

    /NETL managed National Mineral Sequestration Working Group we have already significantly improved our understanding of mineral carbonation. Group members at the Albany Research Center have recently shown that carbonation of olivine and serpentine, which naturally occurs over geological time (i.e., 100,000s of years), can be accelerated to near completion in hours. Further process refinement will require a synergetic science/engineering approach that emphasizes simultaneous investigation of both thermodynamic processes and the detailed microscopic, atomic-level mechanisms that govern carbonation kinetics. Our previously funded Phase I Innovative Concepts project demonstrated the value of advanced quantum-mechanical modeling as a complementary tool in bridging important gaps in our understanding of the atomic/molecular structure and reaction mechanisms that govern CO2 mineral sequestration reaction processes for the model Mg-rich lamellar hydroxide feedstock material Mg(OH)2. In the present simulation project, improved techniques and more efficient computational schemes have allowed us to expand and augment these capabilities and explore more complex Mg-rich, lamellar hydroxide-based feedstock materials, including the serpentine-based minerals. These feedstock materials are being actively investigated due to their wide availability, and low-cost CO2 mineral sequestration potential. Cutting-edge first principles quantum chemical, computational solid-state and materials simulation methodology studies proposed herein, have been strategically integrated with our new DOE supported (ASU-Argonne National Laboratory) project to investigate the mechanisms that govern mineral feedstock heat-treatment and aqueous/fluid-phase serpentine mineral carbonation in situ. This unified, synergetic theoretical and experimental approach will provide a deeper understanding of the key reaction mechanisms than either individual approach can alone. Ab initio techniques will also

  9. ENHANCING THE ATOMIC-LEVEL UNDERSTANDING OF CO2 MINERAL SEQUESTRATION MECHANISMS VIA ADVANCED COMPUTATIONAL MODELING

    SciTech Connect

    A.V.G. Chizmeshya; M.J. McKelvy; G.H. Wolf; R.W. Carpenter; D.A. Gormley; J.R. Diefenbacher; R. Marzke

    2006-03-01

    significantly improved our understanding of mineral carbonation. Group members at the Albany Research Center have recently shown that carbonation of olivine and serpentine, which naturally occurs over geological time (i.e., 100,000s of years), can be accelerated to near completion in hours. Further process refinement will require a synergetic science/engineering approach that emphasizes simultaneous investigation of both thermodynamic processes and the detailed microscopic, atomic-level mechanisms that govern carbonation kinetics. Our previously funded Phase I Innovative Concepts project demonstrated the value of advanced quantum-mechanical modeling as a complementary tool in bridging important gaps in our understanding of the atomic/molecular structure and reaction mechanisms that govern CO2 mineral sequestration reaction processes for the model Mg-rich lamellar hydroxide feedstock material Mg(OH)2. In the present simulation project, improved techniques and more efficient computational schemes have allowed us to expand and augment these capabilities and explore more complex Mg-rich, lamellar hydroxide-based feedstock materials, including the serpentine-based minerals. These feedstock materials are being actively investigated due to their wide availability, and low-cost CO2 mineral sequestration potential. Cutting-edge first principles quantum chemical, computational solid-state and materials simulation methodology studies proposed herein, have been strategically integrated with our new DOE supported (ASU-Argonne National Laboratory) project to investigate the mechanisms that govern mineral feedstock heat-treatment and aqueous/fluid-phase serpentine mineral carbonation in situ. This unified, synergetic theoretical and experimental approach has provided a deeper understanding of the key reaction mechanisms than either individual approach can alone. We used ab initio techniques to significantly advance our understanding of atomic-level processes at the solid/solution interface by

  10. ENHANCING THE ATOMIC-LEVEL UNDERSTANDING OF CO2 MINERAL SEQUESTRATION MECHANISMS VIA ADVANCED COMPUTATIONAL MODELING

    SciTech Connect

    A.V.G. Chizmeshya

    2002-12-19

    /NETL managed National Mineral Sequestration Working Group we have already significantly improved our understanding of mineral carbonation. Group members at the Albany Research Center have recently shown that carbonation of olivine and serpentine, which naturally occurs over geological time (i.e., 100,000s of years), can be accelerated to near completion in hours. Further process refinement will require a synergetic science/engineering approach that emphasizes simultaneous investigation of both thermodynamic processes and the detailed microscopic, atomic-level mechanisms that govern carbonation kinetics. Our previously funded Phase I Innovative Concepts project demonstrated the value of advanced quantum-mechanical modeling as a complementary tool in bridging important gaps in our understanding of the atomic/molecular structure and reaction mechanisms that govern CO2 mineral sequestration reaction processes for the model Mg-rich lamellar hydroxide feedstock material Mg(OH)2. In the present simulation project, improved techniques and more efficient computational schemes have allowed us to expand and augment these capabilities and explore more complex Mg-rich, lamellar hydroxide-based feedstock materials, including the serpentine-based minerals. These feedstock materials are being actively investigated due to their wide availability, and low-cost CO2 mineral sequestration potential. Cutting-edge first principles quantum chemical, computational solid-state and materials simulation methodology studies proposed herein, have been strategically integrated with our new DOE supported (ASU-Argonne National Laboratory) project to investigate the mechanisms that govern mineral feedstock heat-treatment and aqueous/fluid-phase serpentine mineral carbonation in situ. This unified, synergetic theoretical and experimental approach will provide a deeper understanding of the key reaction mechanisms than either individual approach can alone. Ab initio techniques will also

  11. In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation

    SciTech Connect

    G. R. Odette; G. E. Lucas

    2005-11-15

    This final report on "In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation" (DE-FG03-01ER54632) consists of a series of summaries of work that has been published, or presented at meetings, or both. It briefly describes results on the following topics: 1) A Transport and Fate Model for Helium and Helium Management; 2) Atomistic Studies of Point Defect Energetics, Dynamics and Interactions; 3) Multiscale Modeling of Fracture, consisting of: 3a) A Micromechanical Model of the Master Curve (MC) Universal Fracture Toughness-Temperature Curve Relation, K_Jc(T - T_o), 3b) An Embrittlement ΔT_o Prediction Model for the Irradiation Hardening Dominated Regime, 3c) Non-hardening Irradiation Assisted Thermal and Helium Embrittlement of 8Cr Tempered Martensitic Steels: Compilation and Analysis of Existing Data, 3d) A Model for the K_Jc(T) of a High Strength NFA MA957, 3e) Cracked Body Size and Geometry Effects of Measured and Effective Fracture Toughness - Model Based MC and T_o Evaluations of F82H and Eurofer 97, 3f) Size and Geometry Effects on the Effective Toughness of Cracked Fusion Structures; 4) Modeling the Multiscale Mechanics of Flow Localization-Ductility Loss in Irradiation Damaged BCC Alloys; and 5) A Universal Relation Between Indentation Hardness and True Stress-Strain Constitutive Behavior. Further details can be found in the cited references or presentations that generally can be accessed on the internet, or provided upon request to the authors. Finally, it is noted that this effort was integrated with our base program in fusion materials, also funded by the DOE OFES.

  12. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    NASA Technical Reports Server (NTRS)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience ran only on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to access. The current code works in the Unity game engine, which has cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project could not be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.
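    A common starting point for procedurally generated clouds is fractal value noise (several octaves of interpolated lattice noise); the sketch below illustrates the generic technique only, not the IGOAL's actual implementation:

      # Fractal value noise: bilinearly interpolated lattice noise, summed
      # over octaves, normalized to a 0..1 cloud density map.
      import numpy as np

      def value_noise(shape, cells, rng):
          # Random values on a coarse lattice, bilinearly upsampled.
          coarse = rng.random((cells + 1, cells + 1))
          y = np.linspace(0, cells, shape[0])
          x = np.linspace(0, cells, shape[1])
          yi = np.floor(y).astype(int).clip(0, cells - 1)
          xi = np.floor(x).astype(int).clip(0, cells - 1)
          fy, fx = (y - yi)[:, None], (x - xi)[None, :]
          c00 = coarse[np.ix_(yi, xi)]
          c10 = coarse[np.ix_(yi + 1, xi)]
          c01 = coarse[np.ix_(yi, xi + 1)]
          c11 = coarse[np.ix_(yi + 1, xi + 1)]
          return (c00 * (1 - fy) * (1 - fx) + c10 * fy * (1 - fx)
                  + c01 * (1 - fy) * fx + c11 * fy * fx)

      def clouds(shape=(256, 256), octaves=5, seed=0):
          rng = np.random.default_rng(seed)
          img = np.zeros(shape)
          for o in range(octaves):          # finer, fainter detail per octave
              img += value_noise(shape, 4 * 2**o, rng) / 2**o
          img -= img.min()
          return img / img.max()            # 0..1 cloud density map

      density = clouds()
      print(density.shape, density.min(), density.max())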

  13. Contribution to: SciDAC Progess Report - Collaborative Design and Development of the Community Climate System Model for Terascale Computing

    SciTech Connect

    Cameron-Smith, P; Caldeira, K; Taylor, J; Lamarque, J F

    2003-10-17

    Since pre-industrial times, the concentrations of various aerosol types (e.g., sulfate, black carbon, and mineral dust) and several key greenhouse gases such as methane (CH4), nitrous oxide (N2O), and ozone (O3) have been changing because of anthropogenic activities. Collectively, the magnitude of the climate forcing from these species is larger than that of carbon dioxide (CO2), although some forcings are positive and some are negative (see Fig. 27). The behavior and effect of these non-CO2 species is more complicated than for CO2 because they are affected by atmospheric chemistry and aerosol microphysics, so their distributions are more heterogeneous. There are also feedbacks between climate, chemistry, and aerosols that further increase the importance of chemistry and aerosols; e.g., a change in any one of stratospheric ozone, stratospheric temperature, or stratospheric dynamics will feed back on the other two. For aerosols, in addition to the direct effect of scattering and absorbing light, they act indirectly by serving as cloud condensation nuclei (CCN), leading to clouds with more (but smaller) droplets that reflect more sunlight and last longer, thus cooling the atmosphere. Aerosols and atmospheric chemistry can also have an impact through interaction with the biosphere, e.g., fertilization of the land with nitrogen species and fertilization of the oceans with iron from mineral dust. There is also chemical production of CO2 in the atmosphere through oxidation of species such as CH4, CO and terpenes. Thus, to predict and understand future climates, the radiative forcing from these non-CO2 gases and aerosols, as well as their feedbacks into the radiative, dynamical, and biogeochemical balances, must be taken into account. The non-CO2 species are also important because they should be more amenable than CO2 to anthropogenic control measures aimed at mitigating climate change (Hansen et al., 2000), since they have shorter atmospheric lifetimes.

  14. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.
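    The structural half of such a simultaneous integration is a set of modal equations driven by generalized aerodynamic forces; a single-mode sketch with a placeholder forcing term (frequency, damping, and forcing all assumed for illustration):

      # One modal equation q'' + 2 zeta w q' + w^2 q = f(t), with f standing
      # in for the generalized aerodynamic force a flow solver would supply.
      import numpy as np
      from scipy.integrate import solve_ivp

      w, zeta = 2 * np.pi * 3.0, 0.02        # 3 Hz mode, 2% damping -- assumed

      def modal(t, y):
          q, qdot = y
          f = np.sin(2 * np.pi * 2.8 * t)    # placeholder near-resonant forcing
          return [qdot, f - 2 * zeta * w * qdot - w**2 * q]

      sol = solve_ivp(modal, (0, 10), [0, 0], max_step=1e-3)
      print(f"peak modal amplitude: {np.abs(sol.y[0]).max():.4f}")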

  15. Final Report - "An Algorithmic and Software Framework for Applied Partial Differential Equations (APDEC): A DOE SciDAC Integrated Software Infrastructure Center (ISIC)"

    SciTech Connect

    Elbridge Gerry Puckett

    2008-05-13

    All of the work conducted under the auspices of DE-FC02-01ER25473 was characterized by exceptionally close collaboration with researchers at the Lawrence Berkeley National Laboratory (LBNL). This included having one of my graduate students, Sarah Williams, spend the summer working with Dr. Ann Almgren, a staff scientist in the Center for Computational Sciences and Engineering (CCSE), which is part of the National Energy Research Supercomputer Center (NERSC) at LBNL. As a result of this visit Sarah decided to work on a problem suggested by Dr. John Bell, the head of CCSE, for her PhD thesis, which she finished in June 2007. Writing a PhD thesis while working at one of the University of California (UC) managed DOE laboratories is a long-established tradition at the University of California, and I have always encouraged my students to consider doing this. For example, in 2000 one of my graduate students, Matthew Williams, finished his PhD thesis while working with Dr. Douglas Kothe at the Los Alamos National Laboratory (LANL). Matt is now a staff scientist in the Diagnostic Applications Group in the Applied Physics Division at LANL. Another of my graduate students, Christopher Algieri, who was partially supported with funds from DE-FC02-01ER25473, wrote an MS thesis that analyzed and extended work published by Dr. Phil Colella and his colleagues in 1998. Dr. Colella is the head of the Applied Numerical Algorithms Group (ANAG) in the National Energy Research Supercomputer Center at LBNL and is the lead PI for the APDEC ISIC, which comprised several National Laboratory research groups and at least five university PIs at five different universities. Chris Algieri is now employed as a staff member in Dr. Bill Collins' research group at LBNL developing computational models for climate change research. Bill Collins was recently hired at LBNL to start and head the Climate Science Department in the Earth Sciences Division at LBNL. Prior to this he had

  16. Proceedings of the topical meeting on advances in human factors research on man/computer interactions

    SciTech Connect

    Not Available

    1990-01-01

    This book discusses the following topics: expert systems and knowledge engineering (I); verification and validation of software; methods for modeling human/computer performance; man/computer interaction problems in producing procedures (1, 2); progress and problems with automation (1, 2); experience with electronic presentation of procedures (2); intelligent displays and monitors; modeling the user/computer interface; and computer-based human decision-making aids.

  17. Development of 3D multimedia with advanced computer animation tools for outreach activities related to Meteor Science and Meteoritics

    NASA Astrophysics Data System (ADS)

    Madiedo, J. M.

    2012-09-01

    Documentaries related to Astronomy and Planetary Sciences are a common and very attractive way to promote public interest in these areas. These educational tools can benefit from advanced computer animation software and 3D technologies, which can make documentaries even more attractive. However, special care must be taken to guarantee that the information contained in them is serious and objective. In this sense, additional value is gained when the footage is produced by the researchers themselves. With this aim, a new documentary produced and directed by Prof. Madiedo has been developed. The documentary, which has been created entirely by means of advanced computer animation tools, is dedicated to several aspects of Meteor Science and Meteoritics. The main features of this outreach and education initiative are presented here.

  18. Grand Challenges of Advanced Computing for Energy Innovation Report from the Workshop Held July 31-August 2, 2012

    SciTech Connect

    Larzelere, Alex R.; Ashby, Steven F.; Christensen, Dana C.; Crawford, Dona L.; Khaleel, Mohammad A.; John, Grosh; Stults, B. Ray; Lee, Steven L.; Hammond, Steven W.; Grover, Benjamin T.; Neely, Rob; Dudney, Lee Ann; Goldstein, Noah C.; Wells, Jack; Peltz, Jim

    2013-03-06

    On July 31-August 2 of 2012, the U.S. Department of Energy (DOE) held a workshop entitled Grand Challenges of Advanced Computing for Energy Innovation. This workshop built on three earlier workshops that clearly identified the potential for the Department and its national laboratories to enable energy innovation. The specific goal of the workshop was to identify the key challenges that the nation must overcome to apply the full benefit of taxpayer-funded advanced computing technologies to U.S. energy innovation in the ways that the country produces, moves, stores, and uses energy. Perhaps more importantly, the workshop also developed a set of recommendations to help the Department overcome those challenges. These recommendations provide an action plan for what the Department can do in the coming years to improve the nation’s energy future.

  19. Advanced Placement Computer Science (with Pascal). Teacher's Guide. Volume 1. Second Edition.

    ERIC Educational Resources Information Center

    Farkouh, Alice; And Others

    The purpose of this guide is to give teachers and supervisors a working knowledge of various approaches to enhancing pupil learning about computer science, particularly through the use of Pascal. It contains instructional units dealing with: (1) computer components; (2) computer languages; (3) compilers; (4) essential features of a Pascal program;…

  20. A computer program for estimating the power-density spectrum of advanced continuous simulation language generated time histories

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1981-01-01

    A computer program for performing frequency analysis of time history data is presented. The program uses circular convolution and the fast Fourier transform to calculate the power density spectrum (PDS) of time history data. The program interfaces with the advanced continuous simulation language (ACSL) so that a frequency analysis may be performed on ACSL-generated simulation variables. An example of the calculation of the PDS of a Van der Pol oscillator is presented.
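    The same periodogram route, circular autocorrelation via the FFT followed by a transform back to the frequency domain, can be sketched in a few lines; the Van der Pol case echoes the report's example, with sampling parameters assumed:

      # Power density spectrum of a Van der Pol time history via the
      # circular autocorrelation / FFT route (sampling values assumed).
      import numpy as np
      from scipy.integrate import solve_ivp

      def vdp(t, y, mu=1.0):                 # Van der Pol oscillator
          return [y[1], mu * (1 - y[0]**2) * y[1] - y[0]]

      dt, n = 0.05, 4096
      t = np.arange(n) * dt
      sol = solve_ivp(vdp, (0, t[-1]), [2.0, 0.0], t_eval=t, rtol=1e-8)
      x = sol.y[0] - sol.y[0].mean()

      X = np.fft.rfft(x)
      r = np.fft.irfft(np.abs(X)**2) / n     # circular autocorrelation of x
      psd = np.abs(np.fft.rfft(r)) * dt      # PDS: transform of the autocorrelation
      freqs = np.fft.rfftfreq(n, dt)
      print(f"dominant frequency: {freqs[psd.argmax()]:.3f} Hz")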

  1. Computational structural mechanics and fluid dynamics: Advances and trends; Proceedings of the Symposium, Washington, DC, Oct. 17-19, 1988

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Editor); Dwoyer, Douglas L. (Editor)

    1988-01-01

    Recent advances in computational structural and fluid dynamics are discussed in reviews and reports. Topics addressed include fluid-structure interaction and aeroelasticity, CFD techniques for reacting flows, micromechanics, stability and eigenproblems, probabilistic methods and chaotic dynamics, and perturbation and spectral methods. Consideration is given to finite-element, finite-volume, and boundary-element methods; adaptive methods; parallel processing machines and applications; and visualization, mesh generation, and AI interfaces.

  2. Advances in computer simulation of genome evolution: toward more realistic evolutionary genomics analysis by approximate bayesian computation.

    PubMed

    Arenas, Miguel

    2015-04-01

    NGS technologies enable fast and cheap generation of genomic data. Nevertheless, ancestral genome inference is not so straightforward, owing to complex evolutionary processes acting on this material, such as inversions, translocations, and other genome rearrangements that, in addition to their implicit complexity, can co-occur and confound ancestral inferences. Recently, models of genome evolution that accommodate such complex genomic events are emerging. This letter explores these novel evolutionary models and proposes their incorporation into robust statistical approaches based on computer simulations, such as approximate Bayesian computation, that may produce a more realistic evolutionary analysis of genomic data. Advantages and pitfalls in using these analytical methods are discussed. Potential applications of these ancestral genomic inferences are also pointed out.
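    A minimal ABC rejection sampler shows the statistical machinery the letter advocates; the toy model below is illustrative only, not a genome-rearrangement model:

      # ABC rejection: draw parameters from the prior, simulate data, and keep
      # draws whose summary statistic lies close to the observed one.
      import numpy as np

      rng = np.random.default_rng(0)
      observed = rng.poisson(7.0, size=50)          # stand-in "observed" counts
      s_obs = observed.mean()

      accepted = []
      for _ in range(100_000):
          rate = rng.uniform(0.0, 20.0)             # prior on the event rate
          sim = rng.poisson(rate, size=50)
          if abs(sim.mean() - s_obs) < 0.2:         # tolerance on the summary
              accepted.append(rate)

      print(f"posterior mean ~ {np.mean(accepted):.2f} from {len(accepted)} draws")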

  3. Application of advanced computational procedures for modeling solar-wind interactions with Venus: Theory and computer code

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Klenke, D.; Trudinger, B. C.; Spreiter, J. R.

    1980-01-01

    Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis to Venus. The theoretical method is based on a single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc. of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Plasma and magnetic field comparisons in the ionosheath are also provided with actual spacecraft data obtained by the Pioneer Venus Orbiter.

  4. Recent technological advances in computed tomography and the clinical impact therein.

    PubMed

    Runge, Val M; Marquez, Herman; Andreisek, Gustav; Valavanis, Anton; Alkadhi, Hatem

    2015-02-01

    Current technological advances in CT, specifically those with a major impact on clinical imaging, are discussed. The intent was to provide for both medical physicists and practicing radiologists a summary of the clinical impact of each advance, offering guidance in terms of utility and day-to-day clinical implementation, with specific attention to radiation dose reduction.

  5. Encouraging Advanced Second Language Speakers to Recognise Their Language Difficulties: A Personalised Computer-Based Approach

    ERIC Educational Resources Information Center

    Xu, Jing; Bull, Susan

    2010-01-01

    Despite holding advanced language qualifications, many overseas students studying at English-speaking universities still have difficulties in formulating grammatically correct sentences. This article introduces an "independent open learner model" for advanced second language speakers of English, which confronts students with the state of their…

  6. Computational Benefits Using an Advanced Concatenation Scheme Based on Reduced Order Models for RF Structures

    NASA Astrophysics Data System (ADS)

    Heller, Johann; Flisgen, Thomas; van Rienen, Ursula

    The computation of electromagnetic fields and parameters derived thereof for lossless radio frequency (RF) structures filled with isotropic media is an important task for the design and operation of particle accelerators. Unfortunately, these computations are often computationally demanding. The overall demand can be reduced by means of decomposition schemes, so that the field problems can be solved on standard workstations. This paper presents one of the first detailed comparisons between the recently proposed state-space concatenation approach (SSC) and a direct computation for an accelerator cavity with coupler elements that break the rotational symmetry.
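
    SSC builds reduced state-space models of individual cavity segments and concatenates them through their shared waveguide ports; the full algorithm is beyond a snippet, but the basic flavor of concatenating two linear state-space blocks can be conveyed by a plain series connection, a deliberate simplification of SSC's port-based coupling:

        import numpy as np

        def cascade(A1, B1, C1, D1, A2, B2, C2, D2):
            # Series connection: the output of block 1 feeds the input of block 2.
            n1, n2 = A1.shape[0], A2.shape[0]
            A = np.block([[A1, np.zeros((n1, n2))],
                          [B2 @ C1, A2]])
            B = np.vstack([B1, B2 @ D1])
            C = np.hstack([D2 @ C1, C2])
            D = D2 @ D1
            return A, B, C, D

        # Two toy one-state blocks; the concatenated system has two states.
        one = lambda x: np.array([[x]])
        A, B, C, D = cascade(one(-1.0), one(1.0), one(1.0), one(0.0),
                             one(-2.0), one(1.0), one(1.0), one(0.0))
        print(A.shape)   # (2, 2)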

  7. Use Of The SYSCAP 2.5 Computer Analysis Program For Advanced Optical System Design And Analysis

    NASA Astrophysics Data System (ADS)

    Kleiner, C. T.

    1983-10-01

    The successful development of various electro-optical systems is highly dependent on precise electronic circuit design which must account for possible parameter drift in the various piece parts. The utilization of a comprehensive computer analysis program (SYSCAP) provides the electro-optical system designer and electro-optical management organization with a well-structured tool for a comprehensive system analysis. As a result, the techniques described in this paper can be readily used by the electro-optical design community. An improved version of the SYSCAP computer program (version 2.5) is presented which includes the following new advances: (1) the introduction of a standard macro library that permits call-up of proven mathematical models for system modeling and simulation, (2) the introduction of improved semiconductor models for bipolar junction transistors and p-n junctions, (3) multifunction modeling capability to link signals with very high speed electronic circuit models, (4) high resolution computer graphics (both interactive and batch process) for display and permanent records, and (5) compatibility and interface with advanced engineering workstations. This 2.5 version of the present SYSCAP 2 computer analysis program will be available for use through the Control Data Corporation world-wide Cybernet system in 1983. This paper provides an overview of SYSCAP modeling and simulation capabilities.

  8. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    SciTech Connect

    Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Ko, K.; /SLAC

    2009-06-19

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.
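
    The abstract does not spell out the implicit scheme beyond "unconditionally stable"; a standard example of such a scheme for FE-discretized second-order systems M a + K u = 0 is the average-acceleration Newmark method, sketched here purely as an illustration, not as the codes' actual implementation.

        import numpy as np

        def newmark_step(M, K, u, v, a, dt, beta=0.25, gamma=0.5):
            # One step of the Newmark scheme for M a + K u = 0;
            # beta = 1/4, gamma = 1/2 is unconditionally stable.
            u_pred = u + dt * v + (0.5 - beta) * dt**2 * a
            a_new = np.linalg.solve(M + beta * dt**2 * K, -K @ u_pred)
            u_new = u_pred + beta * dt**2 * a_new
            v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
            return u_new, v_new, a_new

        # Toy single-DOF oscillator, stable even for a large time step.
        M, K = np.array([[1.0]]), np.array([[1.0]])
        u, v, a = np.array([1.0]), np.array([0.0]), np.array([-1.0])
        for _ in range(10):
            u, v, a = newmark_step(M, K, u, v, a, dt=0.5)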

  9. Recent Advances in Photonic Devices for Optical Computing and the Role of Nonlinear Optics-Part II

    NASA Technical Reports Server (NTRS)

    Abdeldayem, Hossin; Frazier, Donald O.; Witherow, William K.; Banks, Curtis E.; Paley, Mark S.

    2007-01-01

    The twentieth century was the era of semiconductor materials and electronic technology, while this millennium is expected to be the age of photonic materials and all-optical technology. Optical technology has led to countless optical devices that have become indispensable in our daily lives in storage area networks, parallel processing, optical switches, all-optical data networks, holographic storage devices, and biometric devices at airports. This chapter intends to bring some awareness of the state-of-the-art of optical technologies, which have potential for optical computing, and to demonstrate the role of nonlinear optics in many of these components. Our intent, in this chapter, is to present an overview of the current status of optical computing, and a brief evaluation of the recent advances and performance of the following key components necessary to build an optical computing system: all-optical logic gates, adders, optical processors, optical storage, holographic storage, optical interconnects, spatial light modulators and optical materials.

  10. Advances in Navy pharmacy information technology: accessing Micromedex via the Composite Healthcare Computer System and local area networks.

    PubMed

    Koerner, S D; Becker, F

    1999-07-01

    The pharmacy profession has long used technology to more effectively bring health care to the patient. Navy pharmacy has embraced technology advances in its daily operations, from computers to dispensing robots. Evolving from the traditional role of compounding and dispensing specialists, pharmacists are establishing themselves as vital team members in direct patient care: on the ward, in ambulatory clinics, in specialty clinics, and in other specialty patient care programs (e.g., smoking cessation). An important part of the evolution is the timely access to the most up-to-date information available. Micromedex, Inc. (Denver, Colorado), has developed a number of computer CD-ROM-based full-text pharmacy, toxicology, emergency medicine, and patient education products. Micromedex is a recognized leader with regard to total pharmaceutical information availability. This article discusses the implementation of Micromedex products within the established Composite Healthcare Computer System and the subsequent use by and effect on the international Navy pharmacy community.

  11. Parallel-META 2.0: Enhanced Metagenomic Data Analysis with Functional Annotation, High Performance Computing and Advanced Visualization

    PubMed Central

    Song, Baoxing; Xu, Jian; Ning, Kang

    2014-01-01

    The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation-intensive, especially when there are many species in a metagenomic sample, each with a large number of sequences. As such, metagenomic analyses require extensive computational power. The increasing analytical requirements further augment the challenges for computational analysis. In this work, we have proposed Parallel-META 2.0, a metagenomic analysis software package, to cope with such needs for efficient and fast analyses of taxonomical and functional structures for microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0, which enhances the taxonomical analysis using multiple databases, improves computation efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis for metagenomic samples including short-read assembly, gene prediction and functional annotation. Therefore, it could provide accurate taxonomical and functional analyses of the metagenomic samples in a high-throughput manner and on a large scale. PMID:24595159

  12. Parallel-META 2.0: enhanced metagenomic data analysis with functional annotation, high performance computing and advanced visualization.

    PubMed

    Su, Xiaoquan; Pan, Weihua; Song, Baoxing; Xu, Jian; Ning, Kang

    2014-01-01

    The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation-intensive, especially when there are many species in a metagenomic sample, each with a large number of sequences. As such, metagenomic analyses require extensive computational power. The increasing analytical requirements further augment the challenges for computational analysis. In this work, we have proposed Parallel-META 2.0, a metagenomic analysis software package, to cope with such needs for efficient and fast analyses of taxonomical and functional structures for microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0, which enhances the taxonomical analysis using multiple databases, improves computation efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis for metagenomic samples including short-read assembly, gene prediction and functional annotation. Therefore, it could provide accurate taxonomical and functional analyses of the metagenomic samples in a high-throughput manner and on a large scale.
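
    At its simplest, the parallel-computing optimization described above amounts to distributing per-read classification across worker processes and merging the counts. The toy below illustrates only that pattern; it bears no relation to Parallel-META's actual code or interfaces, and classify_read is a stand-in for a real database search.

        from multiprocessing import Pool

        def classify_read(read):
            # Stand-in for taxonomic assignment against reference databases.
            return len(read) % 5           # pretend taxon id

        def profile_sample(reads, workers=8):
            # Classify reads in parallel, then merge into an abundance table.
            with Pool(workers) as pool:
                taxa = pool.map(classify_read, reads, chunksize=1024)
            counts = {}
            for t in taxa:
                counts[t] = counts.get(t, 0) + 1
            return counts

        if __name__ == "__main__":
            print(profile_sample([f"read_{i}" for i in range(10_000)]))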

  13. An Overview of the Advanced CompuTational Software (ACTS) Collection

    SciTech Connect

    Drummond, Leroy A.; Marques, Osni A.

    2005-02-02

    The ACTS Collection brings together a number of general-purpose computational tools that were developed by independent research projects mostly funded and supported by the U.S. Department of Energy. These tools tackle a number of common computational issues found in many applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. In this article, we introduce the numerical tools in the collection and their functionalities, present a model for developing more complex computational applications on top of ACTS tools, and summarize applications that use these tools. Lastly, we present a vision of the ACTS project for deployment of the ACTS Collection by the computational sciences community.

  14. Final Report on DOE SciDAC project on Next Generation of Multi-Scale Quantum Simulation Software for Strongly Correlated Materials

    SciTech Connect

    Bai, Zhaojun; Scalettar, Richard; Savrasov, Sergey

    2012-07-01

    This report summarizes the accomplishments of the University of California Davis team which is part of a larger SciDAC collaboration including Mark Jarrell of Louisiana State University, Karen Tomko of the Ohio Supercomputer Center, and Eduardo F. D'Azevedo and Thomas A. Maier of Oak Ridge National Laboratory. In this report, we focus on the major UCD accomplishments. As the paper authorship list emphasizes, much of our work is the result of a tightly integrated effort; hence this compendium of UCD efforts of necessity contains some overlap with the work at our partner institutions.

  15. 78 FR 59927 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... Systems Biology [External Review Draft] AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of... Molecular, Computational, and Systems Biology" (EPA/600/R-13/214A). EPA is also announcing that Eastern..., Computational, and Systems Biology" is available primarily via the Internet on the NCEA home page under...

  16. Projects to Measure the Effects of Computers on Campuses Make Major Advances, Leave Several Unsolved Problems.

    ERIC Educational Resources Information Center

    Turner, Judith Axler

    1989-01-01

    The most important results of computer networking experiments on campus may be increased use of technology, but the projects did not always take the directions anticipated. The projects also highlighted problems in social and educational systems that must be solved before computing can be comfortably integrated into higher education. (MSE)

  17. Advanced Scientific Computing Environment Team new scientific database management task. Progress report

    SciTech Connect

    Church, J.P.; Roberts, J.C.; Sims, R.N.; Smetana, A.O.; Westmoreland, B.W.

    1991-06-01

    The mission of the ASCENT Team is to continually keep pace with, evaluate, and select emerging computing technologies to define and implement prototypic scientific environments that maximize the ability of scientists and engineers to manage scientific data. These environments are to be implemented in a manner consistent with the site computing architecture and standards and NRTSC/SCS strategic plans for scientific computing. The major trends in computing hardware and software technology clearly indicate that the future "computer" will be a network environment that comprises supercomputers, graphics boxes, mainframes, clusters, workstations, terminals, and microcomputers. This "network computer" will have an architecturally transparent operating system allowing the applications code to run on any box supplying the required computing resources. The environment will include a distributed database and database managing system(s) that permits use of relational, hierarchical, object-oriented, GIS, and other databases. To reach this goal requires a stepwise progression from the present assemblage of monolithic applications codes running on disparate hardware platforms and operating systems. The first steps include converting from the existing JOSHUA system to a new J80 system that complies with modern language standards; development of a new J90 prototype to provide JOSHUA capabilities on Unix platforms; development of portable graphics tools to greatly facilitate preparation of input and interpretation of output; and extension of "Jvv" concepts and capabilities to distributed and/or parallel computing environments.

  18. BOOK REVIEW: Advanced Topics in Computational Partial Differential Equations: Numerical Methods and Diffpack Programming

    NASA Astrophysics Data System (ADS)

    Katsaounis, T. D.

    2005-02-01

    The scope of this book is to present well known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. Examples solving in parallel simple PDEs using
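
    The parallel linear-algebra examples the first chapter mentions typically begin with a distributed dot product; a minimal analogue is shown below in Python with mpi4py (an assumption of convenience, since the book itself works in C++/Diffpack with raw MPI). Run it with, e.g., mpiexec -n 4 python dot.py.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 1_000_000                # global vector length (illustrative)
        local_n = n // size          # assume size divides n, for brevity
        x = np.ones(local_n)         # each rank owns one slice of x and y
        y = np.full(local_n, 2.0)

        local_dot = float(x @ y)     # local partial sum
        global_dot = comm.allreduce(local_dot, op=MPI.SUM)
        if rank == 0:
            print(global_dot)        # 2000000.0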

  19. Recent advances in the computation of vortex induced vibrations and the shielding effect

    SciTech Connect

    Vada, T.

    1995-12-31

    The paper presents some recent results obtained with the computer program VISFLO. This program computes viscous flow around circular cylinders by the vortex-in-cell method. The first problem studied is a case with two cylinders that can move independently of each other. Results are shown which indicate whether the cylinders can be expected to collide. The other type of problem studied is a large array of fixed cylinders, where the aim is to compute the shielding effect in steady flow. Results are shown for both model-scale and full-scale cases, and the scaling effect is shown to be very important.
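
    As background, one time step of a bare-bones 2D vortex-in-cell scheme scatters particle circulation onto a grid, solves a Poisson equation for the streamfunction, and advects the particles with the resulting grid velocity. The periodic-domain sketch below shows only this core loop; VISFLO itself additionally handles cylinder boundaries, viscous diffusion, and the scale effects discussed in the paper, none of which appears here.

        import numpy as np

        # One step of a minimal 2D vortex-in-cell scheme on a periodic grid.
        N, L, dt = 64, 2 * np.pi, 0.01
        h = L / N
        xp, yp = np.array([2.0, 4.0]), np.array([3.0, 3.0])  # vortex positions
        gp = np.array([1.0, -1.0])                           # circulations

        # 1) Scatter circulation to the mesh (nearest cell for brevity;
        #    production VIC codes use area-weighted interpolation).
        i = (xp / h).astype(int) % N
        j = (yp / h).astype(int) % N
        omega = np.zeros((N, N))
        np.add.at(omega, (j, i), gp / h**2)

        # 2) Solve -laplacian(psi) = omega spectrally.
        k = 2 * np.pi * np.fft.fftfreq(N, d=h)
        KX, KY = np.meshgrid(k, k)
        K2 = KX**2 + KY**2
        K2[0, 0] = 1.0                 # avoid dividing by zero at the mean mode
        psi_hat = np.fft.fft2(omega) / K2
        psi_hat[0, 0] = 0.0

        # 3) u = dpsi/dy, v = -dpsi/dx; advect particles with the grid velocity.
        u = np.real(np.fft.ifft2(1j * KY * psi_hat))
        v = np.real(np.fft.ifft2(-1j * KX * psi_hat))
        xp = (xp + dt * u[j, i]) % L
        yp = (yp + dt * v[j, i]) % L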

  20. Computer architecture for efficient algorithmic executions in real-time systems: New technology for avionics systems and advanced space vehicles

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Youngblood, John N.; Saha, Aindam

    1987-01-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
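
    The critical-path analysis that drives the allocation reduces to a longest-path computation over the task graph; a toy version with invented task names and costs:

        # Longest (critical) path through a task DAG via memoized recursion.
        tasks = {"integrate": 4, "estimate": 3, "guidance": 2, "output": 1}
        deps = {"guidance": ["integrate", "estimate"], "output": ["guidance"]}

        finish = {}
        def earliest_finish(t):
            if t not in finish:
                start = max((earliest_finish(d) for d in deps.get(t, [])),
                            default=0)
                finish[t] = start + tasks[t]
            return finish[t]

        critical_path_length = max(earliest_finish(t) for t in tasks)
        print(critical_path_length)   # 7: integrate -> guidance -> output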

  1. Computer architecture for efficient algorithmic executions in real-time systems: new technology for avionics systems and advanced space vehicles

    SciTech Connect

    Carroll, C.C.; Youngblood, J.N.; Saha, A.

    1987-12-01

    Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.

  2. A Computational Future for Preventing HIV in Minority Communities: How Advanced Technology Can Improve Implementation of Effective Programs

    PubMed Central

    Brown, C Hendricks; Mohr, David C.; Gallo, Carlos G.; Mader, Christopher; Palinkas, Lawrence; Wingood, Gina; Prado, Guillermo; Kellam, Sheppard G.; Pantin, Hilda; Poduska, Jeanne; Gibbons, Robert; McManus, John; Ogihara, Mitsunori; Valente, Thomas; Wulczyn, Fred; Czaja, Sara; Sutcliffe, Geoff; Villamar, Juan; Jacobs, Christopher

    2013-01-01

    African Americans and Hispanics in the U.S. have much higher rates of HIV than non-minorities. There is now strong evidence that a range of behavioral interventions are efficacious in reducing sexual risk behavior in these populations. While a handful of these programs are just beginning to be disseminated widely, we still have not implemented effective programs to a level that would reduce the population incidence of HIV for minorities. We propose that innovative approaches involving computational technologies be explored for their use both in developing new interventions and in supporting wide-scale implementation of effective behavioral interventions. Mobile technologies have a place in both of these activities. First, mobile technologies can be used in sensing contexts and responding to the unique preferences and needs of individuals at times when intervention to reduce risk would be most impactful. Secondly, mobile technologies can be used to improve the delivery of interventions by facilitators and their agencies. Systems science methods, including social network analysis, agent-based models, computational linguistics, intelligent data analysis, and systems and software engineering, all have strategic roles that can bring about advances in HIV prevention in minority communities. Using an existing mobile technology for depression and three effective HIV prevention programs, we illustrate how eight areas in the intervention/implementation process can use innovative computational approaches to advance intervention adoption, fidelity, and sustainability. PMID:23673892

  3. A computational future for preventing HIV in minority communities: how advanced technology can improve implementation of effective programs.

    PubMed

    Brown, C Hendricks; Mohr, David C; Gallo, Carlos G; Mader, Christopher; Palinkas, Lawrence; Wingood, Gina; Prado, Guillermo; Kellam, Sheppard G; Pantin, Hilda; Poduska, Jeanne; Gibbons, Robert; McManus, John; Ogihara, Mitsunori; Valente, Thomas; Wulczyn, Fred; Czaja, Sara; Sutcliffe, Geoff; Villamar, Juan; Jacobs, Christopher

    2013-06-01

    African Americans and Hispanics in the United States have much higher rates of HIV than non-minorities. There is now strong evidence that a range of behavioral interventions are efficacious in reducing sexual risk behavior in these populations. Although a handful of these programs are just beginning to be disseminated widely, we still have not implemented effective programs to a level that would reduce the population incidence of HIV for minorities. We proposed that innovative approaches involving computational technologies be explored for their use both in developing new interventions and in supporting wide-scale implementation of effective behavioral interventions. Mobile technologies have a place in both of these activities. First, mobile technologies can be used in sensing contexts and responding to the unique preferences and needs of individuals at times when intervention to reduce risk would be most impactful. Second, mobile technologies can be used to improve the delivery of interventions by facilitators and their agencies. Systems science methods including social network analysis, agent-based models, computational linguistics, intelligent data analysis, and systems and software engineering all have strategic roles that can bring about advances in HIV prevention in minority communities. Using an existing mobile technology for depression and 3 effective HIV prevention programs, we illustrated how 8 areas in the intervention/implementation process can use innovative computational approaches to advance intervention adoption, fidelity, and sustainability.

  5. Verification, Validation and Credibility Assessment of a Computational Model of the Advanced Resistive Exercise Device (ARED)

    NASA Technical Reports Server (NTRS)

    Werner, C. R.; Humphreys, B. T.; Mulugeta, L.

    2014-01-01

    The Advanced Resistive Exercise Device (ARED) is the resistive exercise device used by astronauts on the International Space Station (ISS) to mitigate bone loss and muscle atrophy due to extended exposure to microgravity. The Digital Astronaut Project (DAP) has developed a multi-body dynamics model of the ARED, together with associated biomechanics models, for use in spaceflight exercise physiology research and operations. To advance the maturity and credibility of the ARED model, the DAP performed a verification, validation and credibility (VV&C) assessment of the model and its analyses in accordance with NASA-STD-7009, 'Standards for Models and Simulations'.

  6. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft, supplemental data

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1975-01-01

    Computational aspects of (1) flutter optimization (minimization of structural mass subject to specified flutter requirements), (2) methods for solving the flutter equation, and (3) efficient methods for computing generalized aerodynamic force coefficients in the repetitive analysis environment of computer-aided structural design are discussed. Specific areas included: a two-dimensional Regula Falsi approach to solving the generalized flutter equation; method of incremented flutter analysis and its applications; the use of velocity potential influence coefficients in a five-matrix product formulation of the generalized aerodynamic force coefficients; options for computational operations required to generate generalized aerodynamic force coefficients; theoretical considerations related to optimization with one or more flutter constraints; and expressions for derivatives of flutter-related quantities with respect to design variables.
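
    The "two-dimensional Regula Falsi" named here generalizes ordinary false-position iteration so that flutter speed and frequency can be solved for simultaneously; its one-dimensional building block, applied to an invented damping-versus-speed function rather than the actual flutter determinant, looks like this:

        def regula_falsi(f, a, b, tol=1e-8, max_iter=100):
            # Classic false-position iteration for a bracketed root of f.
            fa, fb = f(a), f(b)
            assert fa * fb < 0, "root must be bracketed"
            c = a
            for _ in range(max_iter):
                c = b - fb * (b - a) / (fb - fa)
                fc = f(c)
                if abs(fc) < tol:
                    break
                if fa * fc < 0:
                    b, fb = c, fc
                else:
                    a, fa = c, fc
            return c

        # Toy example: the speed at which "damping" crosses zero.
        print(regula_falsi(lambda V: V**2 - 2.0, 0.0, 2.0))   # ~1.414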

  7. Ultrascale visualization capabilities for the ParaView/VTK framework

    SciTech Connect

    MORELAND, KENNETH; FABIAN, NATHAN

    2009-06-09

    The software is a set of technologies developed by the SciDAC Institute for Ultrascale Visualization in order to address the visualization needs for petascale computing and beyond. These technologies include improved I/O performance, simulation co-processing, advanced rendering capabilities, and specialized visualization techniques developed for SciDAC applications.

  8. Development of Advanced Computational Aeroelasticity Tools at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Bartels, R. E.

    2008-01-01

    NASA Langley Research Center has continued to develop its long standing computational tools to address new challenges in aircraft and launch vehicle design. This paper discusses the application and development of those computational aeroelastic tools. Four topic areas will be discussed: 1) Modeling structural and flow field nonlinearities; 2) Integrated and modular approaches to nonlinear multidisciplinary analysis; 3) Simulating flight dynamics of flexible vehicles; and 4) Applications that support both aeronautics and space exploration.

  9. Implementing computer-based testing in distance education for advanced practice nurses: lessons learned.

    PubMed

    Caudle, Patricia; Bigness, Joanne; Daniels, Judi; Gillmor-Kahn, Mickey; Knestrick, Joyce

    2011-01-01

    A distance education program utilized by graduate nursing students worldwide faces unique problems with testing. This article presents the results of a pilot study on the implementation of computer-based testing at the Frontier Nursing University. A detailed analysis of the evaluative survey completed by students in the pilot study revealed issues of bi-directional respect and trust between faculty and students and technological anxiety among students using computer-based testing. PMID:22029246

  10. Integrative Utilization of Microenvironments, Biomaterials and Computational Techniques for Advanced Tissue Engineering.

    PubMed

    Shamloo, Amir; Mohammadaliha, Negar; Mohseni, Mina

    2015-10-20

    This review aims to propose the integrative implementation of microfluidic devices, biomaterials, and computational methods that can lead to significant progress in tissue engineering and regenerative medicine research. Simultaneous implementation of multiple techniques can be very helpful in addressing biological processes. Providing controllable biochemical and biomechanical cues within an artificial extracellular matrix, similar to in vivo conditions, is crucial in tissue engineering and regenerative medicine research. Microfluidic devices provide precise spatial and temporal control over the cell microenvironment. Moreover, generation of accurate and controllable spatial and temporal gradients of biochemical factors is attainable inside microdevices. Since biomaterials with tunable properties are a worthwhile option to construct artificial extracellular matrix, in vitro platforms that simultaneously utilize natural, synthetic, or engineered biomaterials inside microfluidic devices are phenomenally advantageous to experimental studies in the field of tissue engineering. Additionally, collaboration between experimental and computational methods is a useful way to predict and understand mechanisms responsible for complex biological phenomena. Computational results can be verified by using experimental platforms. Computational methods can also broaden the understanding of the mechanisms behind the biological phenomena observed during experiments. Furthermore, computational methods are powerful tools to optimize the fabrication of microfluidic devices and biomaterials with specific features. Here we present a succinct review of the benefits of microfluidic devices, biomaterials, and computational methods in the case of tissue engineering and regenerative medicine. Furthermore, some breakthroughs in biological phenomena including neuronal axon development, cancerous cell migration and blood vessel formation via angiogenesis by virtue of the aforementioned approaches

  11. Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology (Final Report)

    EPA Science Inventory

    This final report, "Next Generation Risk Assessment: Recent Advances in Molec...

  12. Grid Computing and Collaboration Technology in Support of Fusion Energy Sciences

    NASA Astrophysics Data System (ADS)

    Schissel, D. P.

    2004-11-01

    The SciDAC Initiative is creating a computational grid designed to advance scientific understanding in fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as easy-to-use, network-available services. Access to services is stressed rather than portability. Services share the same basic security infrastructure, so that stakeholders can control their own resources and fair use of those resources is ensured. The collaborative control room is being developed using the open-source Access Grid software that enables secure group-to-group collaboration with capabilities beyond teleconferencing, including application sharing and control. The ability to effectively integrate off-site scientists into a dynamic control room will be critical to the success of future international projects like ITER. Grid computing, the secure integration of computer systems over high-speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. The first grid computational service deployed was the transport code TRANSP, which included tools for run preparation, submission, monitoring and management. This approach saves user sites the laborious effort of maintaining a complex code while at the same time reducing the burden on developers by avoiding the support of a large number of heterogeneous installations. This tutorial will present the philosophy behind an advanced collaborative environment, give specific examples, and discuss its usage beyond FES.

  13. Advanced computational simulation for design and manufacturing of lightweight material components for automotive applications

    SciTech Connect

    Simunovic, S.; Aramayo, G.A.; Zacharia, T.; Toridis, T.G.; Bandak, F.; Ragland, C.L.

    1997-04-01

    Computational vehicle models for the analysis of lightweight material performance in automobiles have been developed through collaboration between Oak Ridge National Laboratory, the National Highway Transportation Safety Administration, and George Washington University. The vehicle models have been verified against experimental data obtained from vehicle collisions. The crashed vehicles were analyzed, and the main impact energy dissipation mechanisms were identified and characterized. Important structural parts were extracted and digitized and directly compared with simulation results. High-performance computing played a key role in the model development because it allowed for rapid computational simulations and model modifications. The deformation of the computational model shows a very good agreement with the experiments. This report documents the modifications made to the computational model and relates them to the observations and findings on the test vehicle. Procedural guidelines are also provided that the authors believe need to be followed to create realistic models of passenger vehicles that could be used to evaluate the performance of lightweight materials in automotive structural components.

  14. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 4: Advanced fan section aerodynamic analysis computer program user's manual

    NASA Technical Reports Server (NTRS)

    Crook, Andrew J.; Delaney, Robert A.

    1992-01-01

    The computer program user's manual for the ADPACAPES (Advanced Ducted Propfan Analysis Code-Average Passage Engine Simulation) program is included. The objective of the computer program is development of a three-dimensional Euler/Navier-Stokes flow analysis for fan section/engine geometries containing multiple blade rows and multiple spanwise flow splitters. An existing procedure developed by Dr. J. J. Adamczyk and associates at the NASA Lewis Research Center was modified to accept multiple spanwise splitter geometries and simulate engine core conditions. The numerical solution is based upon a finite volume technique with a four stage Runge-Kutta time marching procedure. Multiple blade row solutions are based upon the average-passage system of equations. The numerical solutions are performed on an H-type grid system, with meshes meeting the requirement of maintaining a common axisymmetric mesh for each blade row grid. The analysis was run on several geometry configurations ranging from one to five blade rows and from one to four radial flow splitters. The efficiency of the solution procedure was shown to be the same as the original analysis.
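
    In Jameson-style finite-volume solvers, a four-stage Runge-Kutta time marching of the kind named above is a low-storage multistage update of the form u <- u0 - alpha_k * dt * R(u); the generic sketch below uses a stand-in residual and makes no claim about the actual ADPAC implementation.

        import numpy as np

        def multistage_step(u, residual, dt, alphas=(0.25, 1/3, 0.5, 1.0)):
            # One four-stage Runge-Kutta step for du/dt = -R(u).
            u0 = u.copy()
            for alpha in alphas:
                u = u0 - alpha * dt * residual(u)
            return u

        # Toy residual R(u) = u, so u(t) should decay like exp(-t).
        u = np.array([1.0])
        for _ in range(100):
            u = multistage_step(u, lambda w: w, dt=0.05)
        print(u)   # close to exp(-5) ~ 0.0067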

  15. Using Computer-Assisted Argumentation Mapping to develop effective argumentation skills in high school advanced placement physics

    NASA Astrophysics Data System (ADS)

    Heglund, Brian

    Educators recognize the importance of reasoning ability for development of critical thinking skills, conceptual change, metacognition, and participation in 21st century society. There is a recognized need for students to improve their skills of argumentation; however, argumentation is not explicitly taught outside logic and philosophy, subjects that are not part of the K-12 curriculum. One potential way of supporting the development of argumentation skills in the K-12 context is through incorporating Computer-Assisted Argument Mapping to evaluate arguments. This quasi-experimental study tested the effects of such argument mapping software and was informed by the following two research questions: 1. To what extent does the collaborative use of Computer-Assisted Argumentation Mapping to evaluate competing theories influence the critical thinking skill of argument evaluation, metacognitive awareness, and conceptual knowledge acquisition in high school Advanced Placement physics, compared to the more traditional method of text tables that does not employ Computer-Assisted Argumentation Mapping? 2. What are the student perceptions of the pros and cons of argument evaluation in the high school Advanced Placement physics environment? This study examined changes in critical thinking skills, including argumentation evaluation skills, as well as metacognitive awareness and conceptual knowledge, in two groups: a treatment group using Computer-Assisted Argumentation Mapping to evaluate physics arguments, and a comparison group using text tables to evaluate physics arguments. Quantitative and qualitative methods for collecting and analyzing data were used to answer the research questions. Quantitative data indicated no significant difference between the experimental groups, and qualitative data suggested students perceived pros and cons of argument evaluation in the high school Advanced Placement physics environment, such as self-reported sense of improvement in argument

  16. Inspection of advanced computational lithography logic reticles using a 193-nm inspection system

    NASA Astrophysics Data System (ADS)

    Yu, Ching-Fang; Lin, Mei-Chun; Lai, Mei-Tsu; Hsu, Luke T. H.; Chin, Angus; Lee, S. C.; Yen, Anthony; Wang, Jim; Chen, Ellison; Wu, David; Broadbent, William H.; Huang, William; Zhu, Zinggang

    2010-09-01

    We report inspection results of early 22-nm logic reticles designed with both conventional and computational lithography methods. Inspection is performed using a state-of-the-art 193-nm reticle inspection system in the reticle-plane inspection (RPI) mode, where both rule-based sensitivity control (RSC) and a newer model-based sensitivity control (MSC) method are tested. The evaluation includes defect detection performance using several special test reticles designed with both conventional and computational lithography methods; the reticles contain a variety of programmed critical defects which are measured based on wafer print impact. Also included are inspection results from several full-field product reticles designed with both conventional and computational lithography methods to determine if low nuisance-defect counts can be achieved. These early reticles are largely single-die and all inspections are performed in the die-to-database inspection mode only.

  17. High performance computing and communications: Advancing the frontiers of information technology

    SciTech Connect

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  18. Detecting nano-scale vibrations in rotating devices by using advanced computational methods.

    PubMed

    del Toro, Raúl M; Haber, Rodolfo E; Schmittdiel, Michael C

    2010-01-01

    This paper presents a computational method for detecting vibrations related to eccentricity in ultra precision rotation devices used for nano-scale manufacturing. The vibration is indirectly measured via a frequency domain analysis of the signal from a piezoelectric sensor attached to the stationary component of the rotating device. The algorithm searches for particular harmonic sequences associated with the eccentricity of the device rotation axis. The detected sequence is quantified and serves as input to a regression model that estimates the eccentricity. A case study presents the application of the computational algorithm during precision manufacturing processes.
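
    The core of the algorithm, locating energy at harmonics of the rotation frequency in the sensor spectrum and feeding it to a regression, can be sketched as follows; the synthetic signal, the rates, and the regression coefficients are all invented for illustration.

        import numpy as np

        fs, f_rot = 10_000.0, 50.0    # sampling and rotation rates, Hz (assumed)
        t = np.arange(0.0, 1.0, 1.0 / fs)
        rng = np.random.default_rng(1)
        signal = (0.8 * np.sin(2 * np.pi * f_rot * t)        # eccentricity harmonic
                  + 0.3 * np.sin(2 * np.pi * 2 * f_rot * t)  # second harmonic
                  + 0.05 * rng.standard_normal(t.size))      # sensor noise

        spectrum = np.abs(np.fft.rfft(signal)) / t.size
        freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

        # Sum spectral amplitude over the first few rotation harmonics.
        harmonic_energy = sum(spectrum[np.argmin(np.abs(freqs - k * f_rot))]
                              for k in range(1, 6))

        # Invented linear regression mapping harmonic energy to eccentricity.
        eccentricity_um = 12.3 * harmonic_energy + 0.4
        print(round(eccentricity_um, 2))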

  19. Recent computational advances in the identification of allosteric sites in proteins.

    PubMed

    Lu, Shaoyong; Huang, Wenkang; Zhang, Jian

    2014-10-01

    Allosteric modulators have the potential to fine-tune protein functional activity. Therefore, the targeting of allosteric sites, as a strategy in drug design, is gaining increasing attention. Currently, it is not trivial to find and characterize new allosteric sites by experimental approaches. Alternatively, computational approaches are useful in helping researchers analyze and select potential allosteric sites for drug discovery. Here, we review state-of-the-art computational approaches directed at predicting putative allosteric sites in proteins, along with examples of successes in identifying allosteric sites utilizing these methods. We also discuss the challenges in developing reliable methods for predicting allosteric sites and tactics to resolve demanding tasks. PMID:25107670

  1. Detecting Nano-Scale Vibrations in Rotating Devices by Using Advanced Computational Methods

    PubMed Central

    del Toro, Raúl M.; Haber, Rodolfo E.; Schmittdiel, Michael C.

    2010-01-01

    This paper presents a computational method for detecting vibrations related to eccentricity in ultra precision rotation devices used for nano-scale manufacturing. The vibration is indirectly measured via a frequency domain analysis of the signal from a piezoelectric sensor attached to the stationary component of the rotating device. The algorithm searches for particular harmonic sequences associated with the eccentricity of the device rotation axis. The detected sequence is quantified and serves as input to a regression model that estimates the eccentricity. A case study presents the application of the computational algorithm during precision manufacturing processes. PMID:22399918

  2. Development of Computational Approaches for Simulation and Advanced Controls for Hybrid Combustion-Gasification Chemical Looping

    SciTech Connect

    Joshi, Abhinaya; Lou, Xinsheng; Neuschaefer, Carl; Chaudry, Majid; Quinn, Joseph

    2012-07-31

    This document provides the results of the project through September 2009. The Phase I project has recently been extended from September 2009 to March 2011. The project extension will begin work on Chemical Looping (CL) Prototype modeling and advanced control design exploration in preparation for a scale-up phase. The results to date include: successful development of dual-loop chemical looping process models and dynamic simulation software tools; development and testing of several advanced control concepts and applications for chemical looping transport control; and investigation of several sensor concepts, with two feasible sensor candidates recommended for further prototype development and controls integration. There are three sections in this summary and conclusions. Section 1 presents the project scope and objectives. Section 2 highlights the detailed accomplishments by project task area. Section 3 provides conclusions to date and recommendations for future work.

  3. Structural analysis of advanced polymeric foams by means of high resolution X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Nacucchi, M.; De Pascalis, F.; Scatto, M.; Capodieci, L.; Albertoni, R.

    2016-06-01

    Advanced polymeric foams with enhanced thermal insulation and mechanical properties are used in a wide range of industrial applications. The properties of a foam strongly depend upon its cell structure. Traditionally, their microstructure has been studied using 2D imaging systems based on optical or electron microscopy, with the obvious disadvantage that only the surface of the sample can be analysed. To overcome this shortcoming, the adoption of X-ray micro-tomography imaging is here suggested to allow for a complete 3D, non-destructive analysis of advanced polymeric foams. Unlike the case of metallic foams, the resolution of the reconstructed structural features is hampered by the low contrast in the images due to weak X-ray absorption in the polymer. In this work an advanced methodology based on high-resolution and low-contrast techniques is used to perform quantitative analyses on both closed- and open-cell foams. Local structural features of individual cells such as equivalent diameter, sphericity, anisotropy and orientation are statistically evaluated. In addition, thickness and length of the struts are determined, underlining the key role played by the achieved resolution. In perspective, the quantitative description of these structural features will be used to evaluate the results of in situ mechanical and thermal tests on foam samples.
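
    Two of the per-cell descriptors named above follow directly from each segmented cell's volume V and surface area A: the equivalent diameter d_eq = (6V/pi)^(1/3) and the sphericity pi^(1/3) (6V)^(2/3) / A, which equals 1 for a perfect sphere. A voxel-based sketch with invented inputs:

        import numpy as np

        def cell_metrics(volume_voxels, surface_voxels, voxel_size_um=1.0):
            # Equivalent diameter and sphericity from voxel counts.
            V = volume_voxels * voxel_size_um**3
            A = surface_voxels * voxel_size_um**2
            d_eq = (6.0 * V / np.pi) ** (1.0 / 3.0)
            sphericity = np.pi ** (1.0 / 3.0) * (6.0 * V) ** (2.0 / 3.0) / A
            return d_eq, sphericity

        print(cell_metrics(volume_voxels=5_000, surface_voxels=1_500))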

  4. Institute for Scientific Computing Research Annual Report: Fiscal Year 2004

    SciTech Connect

    Keyes, D E

    2005-02-07

    Large-scale scientific computation and all of the disciplines that support and help to validate it have been placed at the focus of Lawrence Livermore National Laboratory (LLNL) by the Advanced Simulation and Computing (ASC) program of the National Nuclear Security Administration (NNSA) and the Scientific Discovery through Advanced Computing (SciDAC) initiative of the Office of Science of the Department of Energy (DOE). The maturation of computational simulation as a tool of scientific and engineering research is underscored in the November 2004 statement of the Secretary of Energy that "high performance computing is the backbone of the nation's science and technology enterprise". LLNL operates several of the world's most powerful computers--including today's single most powerful--and has undertaken some of the largest and most compute-intensive simulations ever performed. Ultrascale simulation has been identified as one of the highest priorities in DOE's facilities planning for the next two decades. However, computers at architectural extremes are notoriously difficult to use efficiently. Furthermore, each successful terascale simulation only points out the need for much better ways of interacting with the resulting avalanche of data. Advances in scientific computing research have, therefore, never been more vital to LLNL's core missions than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, LLNL must engage researchers at many academic centers of excellence. In Fiscal Year 2004, the Institute for Scientific Computing Research (ISCR) served as one of LLNL's main bridges to the academic community with a program of collaborative subcontracts, visiting faculty, student internships, workshops, and an active seminar series. The ISCR identifies researchers from the academic community for computer science and computational science collaborations with LLNL and hosts them for short- and

  5. Dynamic Docking Test System (DDTS) active table computer program NASA Advanced Docking System (NADS)

    NASA Technical Reports Server (NTRS)

    Gates, R. M.; Jantz, R. E.

    1974-01-01

    A computer program was developed to describe the three-dimensional motion of the Dynamic Docking Test System active table. The input consists of inertia and geometry data, actuator structural data, forcing function data, hydraulics data, servo electronics data, and integration control data. The output consists of table responses, actuator bending responses, and actuator responses.

  6. 78 FR 68058 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... Systems Biology [External Review Draft] AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of... Systems Biology" (EPA/600/R-13/214A). The original Federal Register notice announcing the public comment..., computational, and systems biology data can better inform risk assessment. This draft document is available...

  7. Exploring Interactive and Dynamic Simulations Using a Computer Algebra System in an Advanced Placement Chemistry Course

    ERIC Educational Resources Information Center

    Matsumoto, Paul S.

    2014-01-01

    The article describes the use of Mathematica, a computer algebra system (CAS), in a high school chemistry course. Mathematica was used to generate a graph, where a slider controls the value of parameter(s) in the equation; thus, students can visualize the effect of the parameter(s) on the behavior of the system. Also, Mathematica can show the…
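
    The pattern described, a slider bound to a parameter that redraws a curve, is not specific to Mathematica; an equivalent sketch in Python/matplotlib (with a made-up saturation curve standing in for the chemistry examples) follows.

        import numpy as np
        import matplotlib.pyplot as plt
        from matplotlib.widgets import Slider

        fig, ax = plt.subplots()
        fig.subplots_adjust(bottom=0.25)        # leave room for the slider
        x = np.linspace(0.0, 0.1, 500)
        line, = ax.plot(x, x / (1e-3 + x))      # toy saturation curve

        slider_ax = fig.add_axes([0.2, 0.1, 0.6, 0.03])
        k_slider = Slider(slider_ax, "K", 1e-4, 1e-2, valinit=1e-3)

        def update(val):
            line.set_ydata(x / (k_slider.val + x))  # redraw with the new parameter
            fig.canvas.draw_idle()

        k_slider.on_changed(update)
        plt.show()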

  8. Advanced manned space flight simulation and training: An investigation of simulation host computer system concepts

    NASA Technical Reports Server (NTRS)

    Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.

    1989-01-01

    The findings of a preliminary investigation by Southwest Research Institute (SwRI) into simulation host computer concepts are presented. The investigation is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe-based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time-critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.

  9. Computational support of the X-29A Advanced Technology Demonstrator flight experiment

    NASA Technical Reports Server (NTRS)

    Waggoner, E. G.; Bates, B. L.

    1989-01-01

    Issues and questions associated with the forward swept wing and closely coupled canard are addressed. The primary focus will be on research questions which must be addressed to obtain high quality ground and flight test data. These data will be used in conjunction with computational predictions to complement the analyses required to comprehensively understand the interacting technologies.

  10. The coupling of fluids, dynamics, and controls on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher

    1995-01-01

    This grant provided for the demonstration of coupled controls, body dynamics, and fluids computations in a workstation cluster environment, and an investigation of the impact of peer-to-peer communication on flow solver performance and robustness. The findings of these investigations were documented in the conference articles. The attached publication, 'Towards Distributed Fluids/Controls Simulations', documents the solution and scaling of the coupled Navier-Stokes, Euler rigid-body dynamics, and state feedback control equations for a two-dimensional canard-wing. The poor scaling shown was due to serialized grid connectivity computation and Ethernet bandwidth limits. The scaling of a peer-to-peer communication flow code on an IBM SP-2 was also shown. The scaling of the code on the switched fabric-linked nodes was good, with a 2.4 percent loss due to communication of intergrid boundary point information. The code performance on 30 worker nodes was 1.7 μs/point/iteration, or a factor of three over a Cray C-90 head. The attached paper, 'Nonlinear Fluid Computations in a Distributed Environment', documents the effect of several computational rate enhancing methods on convergence. For the cases shown, the highest throughput was achieved using boundary updates at each step, with the manager process performing communication tasks only. Constrained domain decomposition of the implicit fluid equations did not degrade the convergence rate or final solution. The scaling of a coupled body/fluid dynamics problem on an Ethernet-linked cluster was also shown.

  11. Collaborative Learning: Cognitive and Computational Approaches. Advances in Learning and Instruction Series.

    ERIC Educational Resources Information Center

    Dillenbourg, Pierre, Ed.

    Intended to illustrate the benefits of collaboration between scientists from psychology and computer science, namely machine learning, this book contains the following chapters, most of which are co-authored by scholars from both sides: (1) "Introduction: What Do You Mean by 'Collaborative Learning'?" (Pierre Dillenbourg); (2) "Learning Together:…

  12. CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.

    ERIC Educational Resources Information Center

    Skowronski, Steven D.; Tatum, Kenneth

    This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…

  13. Some research advances in computer graphics that will enhance applications to engineering design

    NASA Technical Reports Server (NTRS)

    Allan, J. J., III

    1975-01-01

    Research in man/machine interactions and graphics hardware/software that will enhance applications to engineering design was described. Research aspects of executive systems, command languages, and networking used in the computer applications laboratory are mentioned. Finally, a few areas where little or no research is being done were identified.

  14. Novel genotype-phenotype associations in human cancers enabled by advanced molecular platforms and computational analysis of whole slide images

    PubMed Central

    Cooper, Lee A.D.; Kong, Jun; Gutman, David A.; Dunn, William D.; Nalisnik, Michael; Brat, Daniel J.

    2014-01-01

    Technological advances in computing, imaging and genomics have created new opportunities for exploring relationships between histology, molecular events and clinical outcomes using quantitative methods. Slide scanning devices are now capable of rapidly producing massive digital image archives that capture histological details at high resolution. Commensurate advances in computing and image analysis algorithms enable mining of archives to extract descriptions of histology, ranging from basic human annotations to automatic and precisely quantitative morphometric characterization of hundreds of millions of cells. These imaging capabilities represent a new dimension in tissue-based studies, and when combined with genomic and clinical endpoints, can be used to explore biologic characteristics of the tumor microenvironment and to discover new morphologic biomarkers of genetic alterations and patient outcomes. In this paper we review developments in quantitative imaging technology and illustrate how image features can be integrated with clinical and genomic data to investigate fundamental problems in cancer. Using motivating examples from the study of glioblastomas (GBMs), we demonstrate how public data from The Cancer Genome Atlas (TCGA) can serve as an open platform to conduct in silico tissue based studies that integrate existing data resources. We show how these approaches can be used to explore the relation of the tumor microenvironment to genomic alterations and gene expression patterns and to define nuclear morphometric features that are predictive of genetic alterations and clinical outcomes. Challenges, limitations and emerging opportunities in the area of quantitative imaging and integrative analyses are also discussed. PMID:25599536

  15. Advances in Computational Radiation Biophysics for Cancer Therapy: Simulating Nano-Scale Damage by Low-Energy Electrons

    NASA Astrophysics Data System (ADS)

    Kuncic, Zdenka

    2015-10-01

    Computational radiation biophysics is a rapidly growing area that is contributing, alongside new hardware technologies, to ongoing developments in cancer imaging and therapy. Recent advances in theoretical and computational modeling have enabled the simulation of discrete, event-by-event interactions of very low energy (≪ 100 eV) electrons with water in its liquid thermodynamic phase. This represents a significant advance in our ability to investigate the initial stages of radiation induced biological damage at the molecular level. Such studies are important for the development of novel cancer treatment strategies, an example of which is given by microbeam radiation therapy (MRT). Here, new results are shown demonstrating that when excitations and ionizations are resolved down to nano-scales, their distribution extends well outside the primary microbeam path, into regions that are not directly irradiated. This suggests that radiation dose alone is insufficient to fully quantify biological damage. These results also suggest that the radiation cross-fire may be an important clue to understanding the different observed responses of healthy cells and tumor cells to MRT.

  17. Use of Respiratory-Correlated Four-Dimensional Computed Tomography to Determine Acceptable Treatment Margins for Locally Advanced Pancreatic Adenocarcinoma

    SciTech Connect

    Goldstein, Seth D.; Ford, Eric C.; Duhon, Mario; McNutt, Todd; Wong, John; Herman, Joseph M.

    2010-02-01

    Purpose: Respiratory-induced excursions of locally advanced pancreatic adenocarcinoma could affect dose delivery. This study quantified tumor motion and evaluated standard treatment margins. Methods and Materials: Respiratory-correlated four-dimensional computed tomography images were obtained on 30 patients with locally advanced pancreatic adenocarcinoma, 15 of whom underwent repeat scanning before cone-down treatment. Treatment planning software was used to contour the gross tumor volume (GTV), bilateral kidneys, and biliary stent. Excursions were calculated according to the centroid of the contoured volumes. Results: The mean ± standard deviation GTV excursion in the superoinferior (SI) direction was 0.55 ± 0.23 cm; an expansion of 1.0 cm adequately accounted for the GTV motion in 97% of locally advanced pancreatic adenocarcinoma patients. Motion GTVs were generated and resulted in a 25% average volume increase compared with the static GTV. Of the 30 patients, 17 had biliary stents. The mean SI stent excursion was 0.84 ± 0.32 cm, significantly greater than the GTV motion. The xiphoid process moved an average of 0.35 ± 0.12 cm, significantly less than the GTV. The mean SI motion of the left and right kidneys was 0.65 ± 0.27 cm and 0.77 ± 0.30 cm, respectively. At repeat scanning, no significant changes were seen in the mean GTV size (p = .8) or excursion (p = .3). Conclusion: These data suggest that an asymmetric expansion of 1.0, 0.7, and 0.6 cm along the respective SI, anteroposterior, and medial-lateral directions is recommended if a respiratory-correlated four-dimensional computed tomography scan is not available to evaluate tumor motion during treatment planning. Surrogates of tumor motion, such as biliary stents or external markers, should be used with caution.
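
    The excursion metric used here reduces, per structure, to the range of the centroid position across the respiratory phases. A minimal sketch of that computation, assuming each phase's contour is available as an array of voxel coordinates (function and variable names are illustrative):

      import numpy as np

      def si_excursion(phase_contours):
          """Centroid excursion (max - min) along the SI axis, in cm.

          phase_contours: list of (N_i, 3) arrays of structure voxel
          coordinates in cm, one array per respiratory phase.
          """
          centroids = np.array([pts.mean(axis=0) for pts in phase_contours])
          si = centroids[:, 2]          # assume axis 2 is superoinferior
          return si.max() - si.min()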

  18. Recent advances in renal hypoxia: insights from bench experiments and computer simulations.

    PubMed

    Layton, Anita T

    2016-07-01

    The availability of oxygen in renal tissue is determined by the complex interactions among a host of processes, including renal blood flow, glomerular filtration, arterial-to-venous oxygen shunting, medullary architecture, Na(+) transport, and oxygen consumption. When this delicate balance is disrupted, the kidney may become susceptible to hypoxic injury. Indeed, renal hypoxia has been implicated as one of the major causes of acute kidney injury and chronic kidney diseases. This review highlights recent advances in our understanding of renal hypoxia; some of these studies were published in response to a recent Call for Papers of this journal: Renal Hypoxia. PMID:27147670

  19. Application of computational aeroacoustic methodologies to advanced propeller configurations - A review

    NASA Technical Reports Server (NTRS)

    Korkan, Kenneth D.; Eagleson, Lisa A.; Griffiths, Robert C.

    1991-01-01

    Current research in the area of advanced propeller configurations for performance and acoustics is briefly reviewed. Particular attention is given to the techniques of Lock and Theodorsen modified for use in the design of counterrotating propeller configurations; a numerical method known as SSTAGE, which is an Euler solver for the unducted fan concept; the NASPROP-E numerical analysis, also based on an Euler solver and used to study the near acoustic fields for the SR series propfan configurations; and a counterrotating propeller test rig designed to obtain an experimental performance/acoustic database for various propeller configurations.

  20. Advanced earth observation spacecraft computer-aided design software: Technical, user and programmer guide

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.; Krauze, L. D.

    1983-01-01

    NASA's IDEAS computer program is a tool for interactive preliminary design and analysis of LSS (Large Space Systems). Nine analysis modules were either modified or created. These modules include the capabilities of automatic model generation, model mass properties calculation, model area calculation, nonkinematic deployment modeling, rigid-body controls analysis, RF performance prediction, subsystem properties definition, and EOS science sensor selection. For each module, a section is provided that contains technical information, user instructions, and programmer documentation.

  1. Research in advanced formal theorem-proving techniques. [design and implementation of computer languages

    NASA Technical Reports Server (NTRS)

    Raphael, B.; Fikes, R.; Waldinger, R.

    1973-01-01

    The results are summarised of a project aimed at the design and implementation of computer languages to aid in expressing problem solving procedures in several areas of artificial intelligence including automatic programming, theorem proving, and robot planning. The principal results of the project were the design and implementation of two complete systems, QA4 and QLISP, and their preliminary experimental use. The various applications of both QA4 and QLISP are given.

  2. High-Throughput Computational Design of Advanced Functional Materials: Topological Insulators and Two-Dimensional Electron Gas Systems

    NASA Astrophysics Data System (ADS)

    Yang, Kesong

    As a rapidly growing area of materials science, high-throughput (HT) computational materials design is playing a crucial role in accelerating the discovery and development of novel functional materials. In this presentation, I will first introduce the strategy of HT computational materials design, and take the HT discovery of topological insulators (TIs) as a practical example to illustrate the use of such an approach. Topological insulators are one of the most studied classes of novel materials because of their great potential for applications ranging from spintronics to quantum computers. Here I will show that, by defining a reliable and accessible descriptor, which represents the topological robustness or feasibility of the candidate, and by searching the quantum materials repository aflowlib.org, we have automatically discovered 28 TIs (some of them already known) in five different symmetry families. Next, I will talk about our recent research work on the HT computational design of the perovskite-based two-dimensional electron gas (2DEG) systems. The 2DEG formed on the perovskite oxide heterostructure (HS) has potential applications in next-generation nanoelectronic devices. In order to achieve practical implementation of the 2DEG in the device design, desired physical properties such as high charge carrier density and mobility are necessary. Here I show that, using the same strategy as in the HT discovery of TIs, by introducing a series of combinatorial descriptors, we have successfully identified a series of candidate 2DEG systems based on the perovskite oxides. This work provides another example of applying the HT computational design approach for the discovery of advanced functional materials.
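
    The HT workflow described here (compute a descriptor for every entry in a repository, then keep the candidates that pass a threshold) is conceptually simple. A schematic sketch follows; the descriptor function and data source are placeholders, not the actual aflowlib.org query interface:

      # Schematic high-throughput screen: rank repository entries by a
      # scalar descriptor and keep those above a robustness threshold.
      def screen(entries, descriptor, threshold):
          """entries: iterable of candidate-material records;
          descriptor: callable mapping an entry to a scalar score."""
          scored = [(descriptor(e), e) for e in entries]
          hits = [(s, e) for s, e in scored if s > threshold]
          return sorted(hits, reverse=True, key=lambda pair: pair[0])

      # Toy usage: entries as dicts with a precomputed (hypothetical) score.
      entries = [{"id": "A", "band_inversion": 0.8},
                 {"id": "B", "band_inversion": 0.1}]
      print(screen(entries, lambda e: e["band_inversion"], 0.5))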

  3. Towards Precision Medicine: Advances in Computational Approaches for the Analysis of Human Variants

    PubMed Central

    Peterson, Thomas A; Doughty, Emily; Kann, Maricel G

    2013-01-01

    Variations and similarities in our individual genomes are part of our history, our heritage, and our identity. Some human genomic variants are associated with common traits such as hair and eye color, while others are associated with susceptibility to disease or response to drug treatment. Identifying the human variations producing clinically relevant phenotypic changes is critical for providing accurate and personalized diagnosis, prognosis, and treatment for diseases. Furthermore, a better understanding of the molecular underpinning of disease can lead to development of new drug targets for precision medicine. Several resources have been designed for collecting and storing human genomic variations in highly structured, easily accessible databases. Unfortunately, a vast amount of information about these genetic variants and their functional and phenotypic associations is currently buried in the literature, only accessible by manual curation or sophisticated text mining technology to extract the relevant information. In addition, the low cost of sequencing technologies coupled with increasing computational power has enabled the development of numerous computational methodologies to predict the pathogenicity of human variants. This review provides a detailed comparison of current human variant resources, including HGMD, OMIM, ClinVar, and UniProt/Swiss-Prot, followed by an overview of the computational methods and techniques used to leverage the available data to predict novel deleterious variants. We expect these resources and tools to become the foundation for understanding the molecular details of genomic variants leading to disease, which in turn will enable the promise of precision medicine. PMID:23962656

  4. Application of advanced grid generation techniques for flow field computations about complex configurations

    NASA Technical Reports Server (NTRS)

    Kathong, Monchai; Tiwari, Surendra N.

    1988-01-01

    In the computation of flowfields about complex configurations, it is very difficult to construct a boundary-fitted coordinate system. An alternative approach is to use several grids at once, each of which is generated independently. This procedure is called the multiple grids or zonal grids approach; its applications are investigated. The method is conservative, providing conservation of fluxes at grid interfaces. The Euler equations are solved numerically on such grids for various configurations. The numerical scheme used is the finite-volume technique with a three-stage Runge-Kutta time integration. The code is vectorized and programmed to run on the CDC VPS-32 computer. Steady state solutions of the Euler equations are presented and discussed. The solutions include: low speed flow over a sphere, high speed flow over a slender body, supersonic flow through a duct, and supersonic internal/external flow interaction for an aircraft configuration at various angles of attack. The results demonstrate that the multiple grids approach along with the conservative interfacing is capable of computing the flows about the complex configurations where the use of a single grid system is not possible.
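
    As one concrete piece of such a scheme, a three-stage Runge-Kutta time integration advances the semi-discrete finite-volume system du/dt = R(u) through three sub-steps per time step. A minimal sketch follows; the low-storage stage coefficients shown are a common choice for steady-state solvers, not necessarily the ones used in this work:

      import numpy as np

      def rk3_step(u, residual, dt):
          """One three-stage Runge-Kutta step for du/dt = R(u).

          u: state vector; residual: callable R(u); dt: time step.
          Uses the low-storage form u_k = u0 + a_k * dt * R(u_{k-1}).
          """
          u0 = u.copy()
          for a in (1.0 / 3.0, 1.0 / 2.0, 1.0):   # stage coefficients
              u = u0 + a * dt * residual(u)
          return u

      # Toy usage: scalar decay du/dt = -u starting from u = 1.
      u = np.array([1.0])
      for _ in range(10):
          u = rk3_step(u, lambda v: -v, dt=0.1)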

  5. Artificial intelligence in medicine and cardiac imaging: harnessing big data and advanced computing to provide personalized medical diagnosis and treatment.

    PubMed

    Dilsizian, Steven E; Siegel, Eliot L

    2014-01-01

    Although advances in information technology in the past decade have come in quantum leaps in nearly every aspect of our lives, they seem to be coming at a slower pace in the field of medicine. However, the implementation of electronic health records (EHR) in hospitals is increasing rapidly, accelerated by the meaningful use initiatives associated with the Center for Medicare & Medicaid Services EHR Incentive Programs. The transition to electronic medical records and availability of patient data has been associated with increases in the volume and complexity of patient information, as well as an increase in medical alerts, with resulting "alert fatigue" and increased expectations for rapid and accurate diagnosis and treatment. Unfortunately, these increased demands on health care providers create greater risk for diagnostic and therapeutic errors. In the near future, artificial intelligence (AI)/machine learning will likely assist physicians with differential diagnosis of disease, with suggestions and recommendations for treatment options, and, in the case of medical imaging, with cues in image interpretation. Mining and advanced analysis of "big data" in health care provide the potential not only to perform "in silico" research but also to provide "real time" diagnostic and (potentially) therapeutic recommendations based on empirical data. "On demand" access to high-performance computing and large health care databases will support and sustain our ability to achieve personalized medicine. The IBM Jeopardy! Challenge, which pitted the best all-time human players against the Watson computer, captured the imagination of millions of people across the world and demonstrated the potential to apply AI approaches to a wide variety of subject matter, including medicine. The combination of AI, big data, and massively parallel computing offers the potential to create a revolutionary way of practicing evidence-based, personalized medicine.

  6. Advances in the Development and Application of Computational Methodologies for Structural Modeling of G-Protein Coupled Receptors

    PubMed Central

    Mobarec, Juan Carlos

    2009-01-01

    Background: Despite the large amount of experimental data accumulated in the past decade on G-protein coupled receptor (GPCR) structure and function, understanding of the molecular mechanisms underlying GPCR signaling is still far from complete, thus impairing the design of effective and selective pharmaceuticals. Objective: Understanding of GPCR function has been challenged even further by more recent experimental evidence that several of these receptors are organized in the cell membrane as homo- or hetero-oligomers, and that they may exhibit unique pharmacological properties. Given the complexity of these new signaling systems, researchers' efforts are turning increasingly to molecular modeling, bioinformatics and computational simulations for mechanistic insights into GPCR functional plasticity. Methods: We review here current advances in the development and application of computational approaches to improve prediction of GPCR structure and dynamics, thus enhancing current understanding of GPCR signaling. Results/Conclusions: Models resulting from the use of these computational approaches, further supported by experiments, are expected to help elucidate the complex allosterism that propagates through GPCR complexes, ultimately aiming at successful structure-based rational drug design. PMID:19672320

  7. Computational Fluid Dynamic Analysis of the Posterior Airway Space After Maxillomandibular Advancement For Obstructive Sleep Apnea Syndrome

    PubMed Central

    Sittitavornwong, Somsak; Waite, Peter D.; Shih, Alan M.; Cheng, Gary C.; Koomullil, Roy; Ito, Yasushi; Cure, Joel K; Harding, Susan M.; Litaker, Mark

    2013-01-01

    Purpose: To evaluate the soft tissue change of the upper airway after maxillomandibular advancement (MMA) by computational fluid dynamics (CFD). Materials and Methods: Eight OSAS patients who required MMA were recruited into this study. All participants had pre- and post-operative computed tomography (CT) and underwent MMA by a single oral and maxillofacial surgeon. Upper airway CT data sets for these 8 participants were used to create high-fidelity 3-D numerical models for CFD. The 3-D models were simulated and analyzed to study how changes in airway anatomy affect the pressure effort required for normal breathing. Airway dimensions, skeletal changes, Apnea-Hypopnea Index (AHI), and pressure efforts of pre- and post-operative 3-D models were compared and correlations interpreted. Results: After MMA, laminar and turbulent air flow was significantly decreased at every level of the airway. The cross-sectional areas at the soft palate and tongue base were significantly increased. Conclusions: This study shows that MMA increases airway dimensions by increasing the occipital base (Base)-pogonion (Pg) distance. An increase of the Base-Pg distance showed a significant correlation with an AHI improvement and a decreased pressure effort of the upper airway. Decreasing the pressure effort will decrease the breathing workload, which improves the condition of OSAS. PMID:23642544

  8. National Computational Infrastructure for Lattice Gauge Theory

    SciTech Connect

    Brower, Richard C.

    2014-04-15

    SciDAC-2 Project The Secret Life of Quarks: National Computational Infrastructure for Lattice Gauge Theory, from March 15, 2011 through March 14, 2012. The objective of this project is to construct the software needed to study quantum chromodynamics (QCD), the theory of the strong interactions of sub-atomic physics, and other strongly coupled gauge field theories anticipated to be of importance in the energy regime made accessible by the Large Hadron Collider (LHC). It builds upon the successful efforts of the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory, in which a QCD Applications Programming Interface (QCD API) was developed that enables lattice gauge theorists to make effective use of a wide variety of massively parallel computers. This project serves the entire USQCD Collaboration, which consists of nearly all the high energy and nuclear physicists in the United States engaged in the numerical study of QCD and related strongly interacting quantum field theories. All software developed in it is publicly available, and can be downloaded from a link on the USQCD Collaboration web site, or directly from the github repositories via the entry link http://usqcd-software.github.io

  9. Computational Advances in the Arctic Terrestrial Simulator: Modeling Permafrost Degradation in a Warming Arctic

    NASA Astrophysics Data System (ADS)

    Coon, E.; Berndt, M.; Garimella, R.; Moulton, J. D.; Manzini, G.; Painter, S. L.

    2013-12-01

    The terrestrial Arctic has been a net sink of carbon for thousands of years, but warming trends suggest this may change. As the terrestrial Arctic warms, degradation of the permafrost results in significant melting of the ice wedges that support low-centered polygonal ground. This leads to subsidence of the topography, inversion of the polygonal ground, and restructuring of drainage networks. The changes in hydrology and vegetation that result from these processes are poorly understood. Predictive simulation of the fate of this carbon is critical for understanding feedback effects between the terrestrial Arctic and climate change. Simulation of this system at fine scales presents many challenges. Flow and energy equations are solved on both the surface and subsurface domains, and deformation of the soil subsurface must couple with both. Additional processes such as snow, evapo-transpiration, and biogeochemistry supplement this THMC model. While globally implicit coupling methods enable conservation of mass and energy on the combined domain, care must be taken to ensure conservation as the soil subsides and the mesh deforms. Uncertainty in both the critical physics of each process model and in the coupling needed to maintain accuracy between processes suggests the need for a versatile many-physics framework. This framework should allow swapping of both processes and constitutive relations, and enable easy numerical experimentation with coupling strategies. Deformation dictates the need for advanced discretizations that maintain accuracy, and for a mesh framework capable of calculating smooth deformation with remapped fields. Latent heat introduces strong nonlinearities, requiring robust solvers and an efficient globalization strategy. Here we discuss advances as implemented in the Arctic Terrestrial Simulator (ATS), a many-physics framework and collection of physics kernels based upon Amanzi. We demonstrate the deformation capability, conserving mass and energy while simulating soil

  10. The Individual Virtual Eye: a Computer Model for Advanced Intraocular Lens Calculation

    PubMed Central

    Einighammer, Jens; Oltrup, Theo; Bende, Thomas; Jean, Benedikt

    2010-01-01

    Purpose: To describe the individual virtual eye, a computer model of a human eye with respect to its optical properties. It is based on measurements of an individual person, and one of its major applications is calculating intraocular lenses (IOLs) for cataract surgery. Methods: The model is constructed from an eye's geometry, including axial length and topographic measurements of the anterior corneal surface. All optical components of a pseudophakic eye are modeled with computer scientific methods. A spline-based interpolation method efficiently includes data from corneal topographic measurements. The geometrical optical properties, such as the wavefront aberration, are simulated with real ray-tracing using Snell's law. Optical components can be calculated using computer scientific optimization procedures. The geometry of customized aspheric IOLs was calculated for 32 eyes and the resulting wavefront aberration was investigated. Results: The more complex the calculated IOL is, the lower the residual wavefront error is. Spherical IOLs are only able to correct for the defocus, while toric IOLs also eliminate astigmatism. Spherical aberration is additionally reduced by aspheric and toric aspheric IOLs. The efficient implementation of time-critical numerical ray-tracing and optimization procedures allows for short calculation times, which may lead to a practicable method integrated into some device. Conclusions: The individual virtual eye allows for simulations and calculations regarding geometrical optics for individual persons. This leads to clinical applications such as IOL calculation, with the potential to overcome the limitations of current calculation methods based on paraxial optics, as exemplified by calculating customized aspheric IOLs.
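
    The real ray-tracing step relies on the vector form of Snell's law at each refracting surface. A self-contained sketch of that single operation is shown below; it is a generic illustration, not the authors' implementation, and the refractive indices and geometry are assumed values:

      import numpy as np

      def refract(d, n, n1, n2):
          """Refract unit direction d at a surface with unit normal n
          (n points against the incident ray), via vector Snell's law."""
          eta = n1 / n2
          c1 = -np.dot(n, d)                    # cos(theta_incident)
          s2 = eta**2 * (1.0 - c1**2)           # sin^2(theta_transmitted)
          if s2 > 1.0:
              return None                       # total internal reflection
          c2 = np.sqrt(1.0 - s2)
          return eta * d + (eta * c1 - c2) * n  # unit transmitted direction

      # Toy usage: ray entering a cornea-like medium (n ~ 1.376) from air.
      d = np.array([0.0, np.sin(0.2), -np.cos(0.2)])
      n = np.array([0.0, 0.0, 1.0])
      print(refract(d, n, 1.0, 1.376))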

  11. Advances in Human-Computer Interaction: Graphics and Animation Components for Interface Design

    NASA Astrophysics Data System (ADS)

    Cipolla Ficarra, Francisco V.; Nicol, Emma; Cipolla-Ficarra, Miguel; Richardson, Lucy

    We present an analysis of a communicability methodology in graphics and animation components for interface design, called CAN (Communicability, Acceptability and Novelty). This methodology was developed between 2005 and 2010, obtaining excellent results in cultural heritage, education and microcomputing contexts, in studies where there is a bi-directional interrelation among ergonomics, usability, user-centered design, software quality and human-computer interaction. We also present heuristic results about iconography and layout design in blogs and websites of the following countries: Spain, Italy, Portugal and France.

  12. Computer Aided Diagnosis for Confocal Laser Endomicroscopy in Advanced Colorectal Adenocarcinoma

    PubMed Central

    Ştefănescu, Daniela; Streba, Costin; Cârţână, Elena Tatiana; Săftoiu, Adrian; Gruionu, Gabriel; Gruionu, Lucian Gheorghe

    2016-01-01

    Introduction: Confocal laser endomicroscopy (CLE) is becoming a popular method for optical biopsy of digestive mucosa for both diagnostic and therapeutic procedures. Computer aided diagnosis of CLE images, using image processing and fractal analysis, can be used to quantify the histological structures in the CLE generated images. The aim of this study is to develop an automatic diagnosis algorithm for colorectal cancer (CRC), based on fractal analysis and neural network modeling of the CLE-generated colon mucosa images. Materials and Methods: We retrospectively analyzed a series of 1035 artifact-free endomicroscopy images, obtained during CLE examinations from normal mucosa (356 images) and tumor regions (679 images). The images were processed using a computer aided diagnosis (CAD) medical imaging system in order to obtain an automatic diagnosis. The CAD application includes image reading and processing functions, a module for fractal analysis, a grey-level co-occurrence matrix (GLCM) computation module, and a feature identification module based on the Marching Squares and linear interpolation methods. A two-layer neural network was trained to automatically interpret the imaging data and diagnose the pathological samples based on the fractal dimension and the characteristic features of the biological tissues. Results: Normal colon mucosa is characterized by regular polyhedral crypt structures, whereas malignant colon mucosa is characterized by irregular and interrupted crypts, which can be diagnosed by CAD. For this purpose, seven geometric parameters were defined for each image: fractal dimension, lacunarity, contrast, correlation, energy, homogeneity, and feature number. Of the seven parameters, only contrast, homogeneity and feature number were significantly different between normal and cancer samples. Next, a two-layer feed-forward neural network was used to train and automatically diagnose the malignant samples, based on the seven parameters tested. The neural network
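
    Several of the texture parameters named here (contrast, correlation, energy, homogeneity) are standard grey-level co-occurrence matrix statistics. A hedged sketch of how such a GLCM module might be reproduced with scikit-image follows; this is a generic illustration under assumed settings, not the study's CAD code:

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_features(image):
          """GLCM texture features for an 8-bit grayscale image."""
          glcm = graycomatrix(image, distances=[1], angles=[0],
                              levels=256, symmetric=True, normed=True)
          return {prop: float(graycoprops(glcm, prop)[0, 0])
                  for prop in ("contrast", "correlation",
                               "energy", "homogeneity")}

      # Toy usage with a random stand-in for a CLE frame.
      frame = (np.random.rand(64, 64) * 255).astype(np.uint8)
      print(glcm_features(frame))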

  13. Advanced Computational Thermal Fluid Physics (CTFP) and Its Assessment for Light Water Reactors and Supercritical Reactors

    SciTech Connect

    D.M. McEligot; K. G. Condie; G. E. McCreery; H. M. McIlroy; R. J. Pink; L.E. Hochreiter; J.D. Jackson; R.H. Pletcher; B.L. Smith; P. Vukoslavcevic; J.M. Wallace; J.Y. Yoo; J.S. Lee; S.T. Ro; S.O. Park

    2005-10-01

    Background: The ultimate goal of the study is the improvement of predictive methods for safety analyses and design of Generation IV reactor systems such as supercritical water reactors (SCWR) for higher efficiency, improved performance and operation, design simplification, enhanced safety and reduced waste and cost. The objective of this Korean / US / laboratory / university collaboration of coupled fundamental computational and experimental studies is to develop the supporting knowledge needed for improved predictive techniques for use in the technology development of Generation IV reactor concepts and their passive safety systems. The present study emphasizes SCWR concepts in the Generation IV program.

  14. Advanced feedback control of indoor air quality using real-time computational fluid dynamics

    SciTech Connect

    Ratnam, E.; Campbell, T.; Bradley, R.

    1998-10-01

    This paper describes the partial implementation of a novel method of controlling indoor air quality (IAQ) for critical applications. The proposed method uses a numerical modeling technique known as computational fluid dynamics (CFD) to model the effect of variable ventilation rates, enabling intelligent and rapid control of air contamination in a space. This paper describes how a CFD model is made to run in real time linked to a feedback control loop. The technique was simulated in a graphical programming language. The simulation results indicate that a quasi-transient potential flow CFD model is a viable technique for feedback control of IAQ, and it is currently being implemented in an experimental validation.
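
    At its core, such a controller closes a loop between a predicted contaminant concentration and the ventilation rate. The toy sketch below substitutes a well-mixed single-zone mass balance for the real-time CFD model described in the paper; all parameter values are illustrative assumptions:

      # Toy feedback loop: a well-mixed zone model stands in for the
      # quasi-transient CFD model described in the paper.
      V, S = 50.0, 5.0            # zone volume (m^3), source (mg/s)
      C, C_set = 0.0, 2.0         # concentration and setpoint (mg/m^3)
      Q_min, Q_max, Kp = 0.5, 10.0, 4.0   # ventilation limits, gain
      dt = 1.0                    # control interval (s)

      for step in range(600):
          # Proportional feedback on the concentration error.
          Q = min(Q_max, max(Q_min, Q_min + Kp * (C - C_set)))
          # Mass balance V dC/dt = S - Q*C, integrated with forward Euler.
          C += dt * (S - Q * C) / V

      print(f"steady-state concentration ~ {C:.2f} mg/m^3")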

  15. Computer Aided Design of Advanced Turbine Airfoil Alloys for Industrial Gas Turbines in Coal Fired Environments

    SciTech Connect

    G.E. Fuchs

    2007-12-31

    Recent initiatives for fuel flexibility, increased efficiency and decreased emissions in power generating industrial gas turbines (IGT's) have highlighted the need for the development of techniques to produce large single crystal or columnar grained, directionally solidified Ni-base superalloy turbine blades and vanes. In order to address the technical difficulties of producing large single crystal components, a program has been initiated to use computational materials science to better understand how alloy composition in potential IGT alloys and solidification conditions during processing affect castability, defect formation and environmental resistance. This program will help to identify potential routes for the development of high strength, corrosion resistant airfoil/vane alloys, which would be a benefit to all IGT's, including small IGT's and even aerospace gas turbines. During the first year, collaboration with Siemens Power Corporation (SPC), Rolls-Royce, Howmet and Solar Turbines identified and evaluated about 50 alloy compositions that are of interest for this potential application. In addition, alloy modifications to an existing alloy (CMSX-4) were also evaluated. Collaborating with SPC and using computational software at SPC to evaluate about 50 alloy compositions identified 5 candidate alloys for experimental evaluation. The results obtained from the experimentally determined phase transformation temperatures did not compare well to the calculated values in many cases. The effects of small additions of boundary strengtheners (i.e., C, B and N) to CMSX-4 were also examined. The calculated phase transformation temperatures were somewhat closer to the experimentally determined values than for the 5 candidate alloys discussed above. The calculated partitioning coefficients were similar for all of the CMSX-4 alloys, similar to the experimentally determined segregation behavior. In general, it appears that computational materials science has become a

  16. TRAC-PF1: an advanced best-estimate computer program for pressurized water reactor analysis

    SciTech Connect

    Liles, D.R.; Mahaffy, J.H.

    1984-02-01

    The Transient Reactor Analysis Code (TRAC) is being developed at the Los Alamos National Laboratory to provide advanced best-estimate predictions of postulated accidents in light water reactors. The TRAC-PF1 program provides this capability for pressurized water reactors and for many thermal-hydraulic experimental facilities. The code features either a one-dimensional or a three-dimensional treatment of the pressure vessel and its associated internals; a two-phase, two-fluid nonequilibrium hydrodynamics model with a noncondensable gas field; flow-regime-dependent constitutive equation treatment; optional reflood tracking capability for both bottom flood and falling-film quench fronts; and consistent treatment of entire accident sequences including the generation of consistent initial conditions. This report describes the thermal-hydraulic models and the numerical solution methods used in the code. Detailed programming and user information also are provided.

  17. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with an ultimate goal of numerically modeling complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies are classified as adaptive methods: error estimation techniques that approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme which attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
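
    The refine/unrefine decision at the heart of an h-adaptive method can be shown in one dimension: cells whose error indicator exceeds a tolerance are split in half. The bare-bones sketch below covers the refinement half; the error indicator is a hypothetical stand-in, not the paper's estimator:

      def refine(cells, error_indicator, tol):
          """One h-adaptive pass over 1-D cells [(a, b), ...]: split any
          cell whose estimated local error exceeds tol."""
          out = []
          for a, b in cells:
              if error_indicator(a, b) > tol:
                  mid = 0.5 * (a + b)
                  out.extend([(a, mid), (mid, b)])   # h-refinement
              else:
                  out.append((a, b))
          return out

      # Toy usage: concentrate cells near a shock-like feature at x = 0.5.
      cells = [(0.0, 0.5), (0.5, 1.0)]
      indicator = lambda a, b: (b - a) / (1e-3 + abs(0.5 * (a + b) - 0.5))
      for _ in range(4):
          cells = refine(cells, indicator, tol=1.0)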

  18. Development of Computational Capabilities to Predict the Corrosion Wastage of Boiler Tubes in Advanced Combustion Systems

    SciTech Connect

    Kung, Steven; Rapp, Robert

    2014-08-31

    A comprehensive corrosion research project consisting of pilot-scale combustion testing and long-term laboratory corrosion study has been successfully performed. A pilot-scale combustion facility available at Brigham Young University was selected and modified to enable burning of pulverized coals under the operating conditions typical for advanced coal-fired utility boilers. Eight United States (U.S.) coals were selected for this investigation, with the test conditions for all coals set to have the same heat input to the combustor. In addition, the air/fuel stoichiometric ratio was controlled so that staged combustion was established, with the stoichiometric ratio maintained at 0.85 in the burner zone and 1.15 in the burnout zone. The burner zone represented the lower furnace of utility boilers, while the burnout zone mimicked the upper furnace areas adjacent to the superheaters and reheaters. From this staged combustion, approximately 3% excess oxygen was attained in the combustion gas at the furnace outlet. During each of the pilot-scale combustion tests, extensive online measurements of the flue gas compositions were performed. In addition, deposit samples were collected at the same location for chemical analyses. Such extensive gas and deposit analyses enabled detailed characterization of the actual combustion environments existing at the lower furnace walls under reducing conditions and those adjacent to the superheaters and reheaters under oxidizing conditions in advanced U.S. coal-fired utility boilers. The gas and deposit compositions were then carefully simulated in a series of 1000-hour laboratory corrosion tests, in which the corrosion performances of different commercial candidate alloys and weld overlays were evaluated at various temperatures for advanced boiler systems. Results of this laboratory study led to significant improvement in understanding of the corrosion mechanisms operating on the furnace walls as well as superheaters and reheaters in

  19. Recent Advances in Computational Models for the Study of Protein-Peptide Interactions.

    PubMed

    Kilburg, D; Gallicchio, E

    2016-01-01

    We review computational models and software tools in current use for the study of protein-peptide interactions. Peptides and peptide derivatives are growing in interest as therapeutic agents to target protein-protein interactions. Protein-protein interactions are pervasive in biological systems and are responsible for the regulation of critical functions within the cell. Mutations or dysregulation of expression can alter the network of interactions among proteins and cause diseases such as cancer. Protein-protein binding interfaces, which are often large, shallow, and relatively featureless, are difficult to target with small-molecule inhibitors. Peptide derivatives based on the binding motifs present in the target protein complex are increasingly drawing interest as superior alternatives to conventional small-molecule inhibitors. However, the design of peptide-based inhibitors also presents novel challenges. Peptides are more complex and more flexible than standard medicinal compounds. They also tend to form more extended and more complex interactions with their protein targets. Computational modeling is increasingly being employed to supplement synthetic and biochemical work to offer guidance and energetic and structural insights. In this review, we discuss recent in silico structure-based and physics-based approaches currently employed to model protein-peptide interactions with a few examples of their applications. PMID:27567483

  20. Advanced retrieval method in satellite remote sensing atmosphere: the technique of computed tomography

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Xun, Yulong

    1998-08-01

    Computed Tomography (CT) is a modern medical diagnostic technique in which x-ray transmission measurements at numerous angles through the human body are processed by computer to produce cross-sectional pictures of the body. This technique has also found applications in such diverse fields as materials testing, astronomy, microscopy, image processing and oceanography. In this paper, a modification of this technique, using emitted IR or microwave radiation instead of transmitted x-ray radiation, is applied to satellite radiance measurements taken along the orbital track at various angles. The channels of the IR sensors for the CT retrieval are selected from the HITRAN database and analyzed by eigenvalue analysis. We discuss in detail the effect of the projection angles on the CT retrieval results. Finally, using balloon sounding data, the CT results are compared with those of the conventional method. The advantage over conventional remote sensing methods is the additional information acquired by viewing a given point in the atmosphere at several angles as well as several frequencies; the results show that the temperature profiles retrieved by CT are better than those from the conventional method.
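
    Retrievals of this kind are commonly posed as a discrete linear inversion: the weighting-function rows for all viewing angles and channels are stacked into a matrix A, and the atmospheric state x is recovered iteratively with an algebraic reconstruction technique (ART, the Kaczmarz method). A generic sketch, not the authors' retrieval code:

      import numpy as np

      def art(A, b, sweeps=50, relax=0.5):
          """Kaczmarz/ART solver for A @ x ~= b (rows = radiance
          measurements at different angles/frequencies)."""
          x = np.zeros(A.shape[1])
          for _ in range(sweeps):
              for i in range(A.shape[0]):
                  ai = A[i]
                  # Project x onto the hyperplane of measurement i.
                  x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
          return x

      # Toy usage: recover a 3-level profile from 5 noisy projections.
      rng = np.random.default_rng(0)
      A = rng.random((5, 3))
      x_true = np.array([250.0, 230.0, 210.0])   # e.g. temperatures (K)
      b = A @ x_true + rng.normal(0, 0.1, size=5)
      print(art(A, b))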

  1. Using a Phenomenological Computer Model to Investigate Advanced Combustion Trajectories in a CIDI Engine

    SciTech Connect

    Gao, Zhiming; Wagner, Robert M; Sluder, Scott; Daw, C Stuart; Green Jr, Johney Boyd

    2011-01-01

    This paper summarizes results from simulations of conventional, high-dilution, and high-efficiency clean combustion in a diesel engine based on a two-zone phenomenological model. The two-zone combustion model is derived from a previously published multi-zone model, but it has been further simplified to increase computational speed by a factor of over 100. The results demonstrate that this simplified model is still able to track key aspects of the combustion trajectory responsible for NOx and soot production. In particular, the two-zone model in combination with highly simplified global kinetics correctly predicts the importance of including oxygen mass fraction (in addition to equivalence ratio and temperature) in lowering emissions from high-efficiency clean combustion. The methodology also provides a convenient framework for extracting information directly from in-cylinder pressure measurements. This feature is likely to be useful for on-board combustion diagnostics and controls. Because of the possibility for simulating large numbers of engine cycles in a short time, models of this type can provide insight into multi-cycle and transient combustion behavior not readily accessible to more computationally intensive models. Also the representation of the combustion trajectory in 3D space corresponding to equivalence ratio, flame temperature, and oxygen fraction provides new insight into optimal combustion management.

  2. Large Advanced Space Systems (LASS) computer-aided design program additions

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.

    1982-01-01

    The LSS preliminary and conceptual design requires extensive iterative analysis because of the effects of structural, thermal, and control intercoupling. A computer-aided design program that permits integrating and interfacing of the required large space system (LSS) analyses is discussed. The primary objective of this program is the implementation of modeling techniques and analysis algorithms that permit interactive design and tradeoff studies of LSS concepts. Eight software modules were added to the program. The existing rigid body controls module was modified to include solar pressure effects. The new model generator modules and appendage synthesizer module are integrated (interfaced) to permit interactive definition and generation of LSS concepts. The mass properties module permits interactive specification of discrete masses and their locations. The other modules permit interactive analysis of orbital transfer requirements, the antenna primary beam, and attitude control requirements.

  3. Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.

  4. An advanced combustion research facility for validating computational fluid dynamics codes

    NASA Astrophysics Data System (ADS)

    Bullard, J. B.; Hurley, C. D.; Eccles, N. C.

    1991-12-01

    The Sector Combustion Rig (SCR), built to obtain experimental data which could be used to verify computational fluid dynamic programs and to investigate the formation and consumption of combustion products through a combustor, is described. This rig was designed to accommodate sectors of full size engine combustion chambers and to test them at real or simulated engine operating conditions. Changes made to improve the operating, measurement, and data handling capabilities of the rig as a result of experience from several years of operations are described together with some of the features which contribute to the uniqueness of the SCR. The SCR gas analysis system and instrumentation are described. Extracts from some results obtained during a recent program of tests on a Rolls-Royce RB211 combustor are given.

  5. Application of advanced computing techniques to the analysis and display of space science measurements

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Lapolla, M. V.; Horblit, B.

    1995-01-01

    A prototype system has been developed to aid the experimental space scientist in the display and analysis of spaceborne data acquired from direct measurement sensors in orbit. We explored the implementation of a rule-based environment for semi-automatic generation of visualizations that assist the domain scientist in exploring their data. The goal has been to enable rapid generation of visualizations that enhance the scientist's ability to thoroughly mine the data. Transferring the task of visualization generation from the human programmer to the computer produced a rapid prototyping environment for visualizations. The visualization and analysis environment has been tested against a set of data obtained from the Hot Plasma Composition Experiment on the AMPTE/CCE satellite, creating new visualizations that provided new insight into the data.

  6. Improved NASA-ANOPP Noise Prediction Computer Code for Advanced Subsonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Kontos, K. B.; Janardan, B. A.; Gliebe, P. R.

    1996-01-01

    Recent experience using ANOPP to predict turbofan engine flyover noise suggests that it over-predicts overall EPNL by a significant amount. An improvement in this prediction method is desired for system optimization and assessment studies of advanced UHB engines. An assessment of the ANOPP fan inlet, fan exhaust, jet, combustor, and turbine noise prediction methods is made using static engine component noise data from the CF6-80C2, E(3), and QCSEE turbofan engines. It is shown that the ANOPP prediction results are generally higher than the measured GE data, and that the inlet noise prediction method (Heidmann method) is the most significant source of this overprediction. Fan noise spectral comparisons show that improvements to the fan tone, broadband, and combination tone noise models are required to yield results that more closely simulate the GE data. Suggested changes that yield improved fan noise predictions but preserve the Heidmann model structure are identified and described. These changes are based on the sets of engine data mentioned, as well as some CFM56 engine data that were used to expand the combination tone noise database. It should be noted that the recommended changes are based on an analysis of engines limited to single-stage fans with design tip relative Mach numbers greater than one.

  7. An advanced synthetic eddy method for the computation of aerofoil-turbulence interaction noise

    NASA Astrophysics Data System (ADS)

    Kim, Jae Wook; Haeri, Sina

    2015-04-01

    This paper presents an advanced method to synthetically generate flow turbulence via an inflow boundary condition particularly designed for three-dimensional aeroacoustic simulations. The proposed method is virtually free of spurious noise that might arise from the synthetic turbulence, which enables a direct calculation of propagated sound waves from the source mechanism. The present work stemmed from one of the latest outcomes of synthetic eddy method (SEM) derived from a well-defined vector potential function creating a divergence-free velocity field with correct convection speeds of eddies, which in theory suppresses pressure fluctuations. In this paper, a substantial extension of the SEM is introduced and systematically optimised to create a realistic turbulence field based on von Kármán velocity spectra. The optimised SEM is then combined with a well-established sponge-layer technique to quietly inject the turbulent eddies into the domain from the upstream boundary, which results in a sufficiently clean acoustic field. Major advantages in the present approach are: a) that genuinely three-dimensional turbulence is generated; b) that various ways of parametrisation can be created to control/characterise the randomly distributed eddies; and, c) that its numerical implementation is efficient as the size of domain section through which the turbulent eddies should be passing can be adjusted and minimised. The performance and reliability of the proposed SEM are demonstrated by a three-dimensional simulation of aerofoil-turbulence interaction noise.
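
    The target statistics for the synthesized eddies come from the von Kármán model spectrum, whose characteristic shape rises as k^4 at low wavenumber and decays as k^(-5/3) in the inertial range. A minimal sketch of that target is given below; the normalisation constants are omitted and the variable names are illustrative, so this is the spectrum shape only, not the paper's full parametrisation:

      import numpy as np

      def von_karman_shape(k, k_e=1.0):
          """Unnormalised von Karman energy-spectrum shape E(k):
          ~k^4 for k << k_e, ~k^(-5/3) for k >> k_e."""
          kh = k / k_e
          return kh**4 / (1.0 + kh**2) ** (17.0 / 6.0)

      # Eddy intensities for a discrete set of wavenumber shells would be
      # weighted so that their summed energy follows this curve.
      k = np.logspace(-2, 2, 200)
      E = von_karman_shape(k)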

  8. Advances in physiologic lung assessment via electron beam computed tomography (EBCT)

    NASA Astrophysics Data System (ADS)

    Hoffman, Eric A.

    1999-09-01

    Lung function has been evaluated in both health and disease states by techniques, such as pulmonary function tests, which generally study aggregate function. These decades-old modalities have yielded a valuable understanding of global physiologic and pathophysiologic structure-to-function relationships. However, such approaches have reached their limits. They cannot meet the current and anticipated needs of new surgical and pharmaceutical treatments. 4-D CT can provide insights into regional lung function (ventilation and blood flow) and thus can provide information at an early stage of disease when intervention will have the greatest impact. Lung CT over the last decade has helped with further defining anatomic features in disease, but has lagged behind advances on the cellular and molecular front, largely because of the failure to account for functional correlates to structural pathology. Commercially available CT scanners are now capable of volumetric data acquisition in a breath-hold and of multi-level slice acquisitions of the heart and lungs with a per-slice scan aperture of 50-300 msec, allowing for regional blood flow measurements. Static, volumetric imaging of the lung is inadequate in that much of lung pathology is a dynamic phenomenon and, thus, is only detectable if the lung is imaged as air and blood are flowing. This paper reviews the methodologies and early physiologic findings associated with our measures of lung tissue properties coupled with regional ventilation and perfusion.

  9. Recent advances in computational predictions of NMR parameters for the structure elucidation of carbohydrates: methods and limitations.

    PubMed

    Toukach, Filip V; Ananikov, Valentine P

    2013-11-01

    All living systems are comprised of four fundamental classes of macromolecules--nucleic acids, proteins, lipids, and carbohydrates (glycans). Glycans play a unique role of joining three principal hierarchical levels of the living world: (1) the molecular level (pathogenic agents and vaccine recognition by the immune system, metabolic pathways involving saccharides that provide cells with energy, and energy accumulation via photosynthesis); (2) the nanoscale level (cell membrane mechanics, structural support of biomolecules, and the glycosylation of macromolecules); (3) the microscale and macroscale levels (polymeric materials, such as cellulose, starch, glycogen, and biomass). NMR spectroscopy is the most powerful research approach for getting insight into the solution structure and function of carbohydrates at all hierarchical levels, from monosaccharides to oligo- and polysaccharides. Recent progress in computational procedures has opened up novel opportunities to reveal the structural information available in the NMR spectra of saccharides and to advance our understanding of the corresponding biochemical processes. The ability to predict the molecular geometry and NMR parameters is crucial for the elucidation of carbohydrate structures. In the present paper, we review the major NMR spectrum simulation techniques with regard to chemical shifts, coupling constants, relaxation rates and nuclear Overhauser effect prediction applied to the three levels of glycomics. Outstanding development in the related fields of genomics and proteomics has clearly shown that it is the advancement of research tools (automated spectrum analysis, structure elucidation, synthesis, sequencing and amplification) that drives the large challenges in modern science. Combining NMR spectroscopy and the computational analysis of structural information encoded in the NMR spectra reveals a way to the automated elucidation of the structure of carbohydrates.

  10. Advanced Analytic Treatment and Efficient Computation of the Diffraction Integrals in the Extended Nijboer-Zernike Theory

    NASA Astrophysics Data System (ADS)

    van Haver, S.; Janssen, A. J. E. M.

    2013-07-01

    The computational methods for the diffraction integrals that occur in the Extended Nijboer-Zernike (ENZ-) approach to circular, aberrated, defocused optical systems are reviewed and updated. In the ENZ-approach, the Debye approximation of Rayleigh's integral for the through-focus, complex, point-spread function is evaluated in semi-analytic form. To this end, the generalized pupil function, comprising phase aberrations as well as amplitude non-uniformities, is assumed to be expanded into a series of Zernike circle polynomials, and the contribution of each of these Zernike terms to the diffraction integral is expressed in the form of a rapidly converging series (containing power functions and/or Bessel functions of various kinds). The procedure of expressing the through-focus point-spread function in terms of Zernike expansion coefficients of the pupil function can be reversed and has led to the ENZ-method of retrieval of pupil functions from measured through-focus (intensity) point-spread functions. The review and update concern the computation for systems ranging from basic ones, with low NA and a small defocus parameter, to high-NA systems with vector fields and polarization, meant for imaging of extended objects into a multi-layered focal region. In the period 2002-2010, the evolution of the form of the diffraction integral (DI) was dictated by the agenda of the ENZ-team, in which each next instance of the DI was handled by amending the computation scheme of the previous one. This resulted in a variety of ad hoc measures, a lack of transparency of the schemes, and sometimes prohibitively slow computer codes. It is the aim of the present paper to reconstruct the whole edifice of computation methods using consistently more advanced mathematical tools. These tools are: explicit Zernike expansion of the focal factor in the DI; Clebsch-Gordan coefficients for the omnipresent problem of linearizing products of Zernike circle polynomials; recursions for Bessel
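
    The Zernike circle polynomials at the heart of the ENZ expansion are simple to evaluate directly. As a minimal illustration (not the ENZ team's code), the following sketch computes the radial part R_n^m of a circle polynomial via the standard explicit finite-sum formula:

```python
from math import factorial

def zernike_radial(n: int, m: int, rho: float) -> float:
    """Radial part R_n^m(rho) of the Zernike circle polynomial, via the
    standard explicit finite sum (valid for 0 <= m <= n, n - m even)."""
    if (n - m) % 2 != 0:
        return 0.0
    total = 0.0
    for k in range((n - m) // 2 + 1):
        coeff = ((-1) ** k * factorial(n - k)
                 / (factorial(k)
                    * factorial((n + m) // 2 - k)
                    * factorial((n - m) // 2 - k)))
        total += coeff * rho ** (n - 2 * k)
    return total

# Example: R_2^0(rho) = 2*rho^2 - 1, so R_2^0(1.0) = 1.0
print(zernike_radial(2, 0, 1.0))
```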

  11. Application of advanced computational codes in the design of an experiment for a supersonic throughflow fan rotor

    NASA Technical Reports Server (NTRS)

    Wood, Jerry R.; Schmidt, James F.; Steinke, Ronald J.; Chima, Rodrick V.; Kunik, William G.

    1987-01-01

    Increased emphasis on sustained supersonic or hypersonic cruise has revived interest in the supersonic throughflow fan as a possible component in advanced propulsion systems. Use of a fan that can operate with a supersonic inlet axial Mach number is attractive from the standpoint of reducing the inlet losses incurred in diffusing the flow from a supersonic flight Mach number to a subsonic one at the fan face. The design of the experiment, using advanced computational codes to calculate the required components, is described. The rotor was designed using existing turbomachinery design and analysis codes modified to handle fully supersonic axial flow through the rotor. A two-dimensional axisymmetric throughflow design code plus a blade element code were used to generate fan rotor velocity diagrams and blade shapes. A quasi-three-dimensional, thin shear layer Navier-Stokes code was used to assess the performance of the fan rotor blade shapes. The final design was stacked and checked for three-dimensional effects using a three-dimensional Euler code interactively coupled with a two-dimensional boundary layer code. The nozzle design in the expansion region was analyzed with a three-dimensional parabolized viscous code, which corroborated the results from the Euler code. A translating supersonic diffuser was designed using these same codes.

  12. Recent Advances in Computational Studies of Charge Exchange X-ray Emission

    NASA Astrophysics Data System (ADS)

    Cumbee, Renata

    2016-06-01

    Interest in astrophysical sources of charge exchange (CX) has grown since X-ray emission from comet Hyakutake was first observed, which originates primarily from CX processes between neutral species in the comet’s atmosphere and highly charged ions from the solar wind. More recent observations have shown that CX may contribute significantly to the X-ray emission spectra of a wide variety of environments within our solar system, including solar wind charge exchange (SWCX) with neutral gases in the heliosphere and in planetary atmospheres, as well as beyond the solar system in galaxy clusters, supernova remnants, and star forming galaxies. While the basic process of CX has been studied for many decades, the reliability of the existing data is not uniform, and the coverage of the astrophysically important projectile and target combinations and collisional velocities is insufficient. The need for reliable and robust CX X-ray emission models will only be amplified by the high-resolution X-ray spectra expected from the soft X-ray imaging calorimeter spectrometer (SXS) onboard the Hitomi X-ray observatory. In this talk, I will discuss recent advances in theoretical CX cross sections and X-ray modeling with a focus on CX diagnostics. The need for experimental X-ray spectra and cross sections for benchmarking current theory will also be highlighted. This work was performed in collaboration with David Lyons, Patrick Mullen, David Schultz, Phillip Stancil, and Robin Shelton. Work at UGA was partially supported by NASA grant NNX09AC46G.

  13. The Spin Torque Lego - from spin torque nano-devices to advanced computing architectures

    NASA Astrophysics Data System (ADS)

    Grollier, Julie

    2013-03-01

    Spin transfer torque (STT), predicted in 1996 and first observed around 2000, brought spintronic devices to the realm of active elements. A whole class of new devices, based on the combined effects of STT for writing and Giant Magneto-Resistance or Tunnel Magneto-Resistance for reading, has emerged. The second generation of MRAMs, based on spin torque writing, the STT-RAM, is under industrial development and should be on the market in three years. But spin torque devices are not limited to binary memories. We will briefly present how the spin torque effect also makes it possible to implement non-linear nano-oscillators, spin-wave emitters, controlled stochastic devices and microwave nano-detectors. What is extremely interesting is that all these functionalities can be obtained using the same materials, the exact same stack, simply by changing the device geometry and its bias conditions. So these different devices can be seen as Lego bricks, each brick with its own functionality. During this talk, I will show how spin torque can be engineered to build new bricks, such as the Spintronic Memristor, an artificial magnetic nano-synapse. I will then give hints on how to assemble these bricks in order to build novel types of computing architectures, with a special focus on neuromorphic circuits. Financial support by the European Research Council Starting Grant NanoBrain (ERC 2010 Stg 259068) is acknowledged.

  14. Computation of wake/exhaust mixing downstream of advanced transport aircraft

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Teske, Milton E.; Bilanin, Alan J.

    1993-01-01

    The mixing of engine exhaust with the vortical wake of high speed aircraft operating in the stratosphere can play an important role in the formation of chemical products that deplete atmospheric ozone. An accurate analysis of this type of interaction is therefore necessary as a part of the assessment of the impact of proposed High Speed Civil Transport (HSCT) designs on atmospheric chemistry. This paper describes modifications to the parabolic Navier-Stokes flow field analysis in the UNIWAKE unified aircraft wake model to accommodate the computation of wake/exhaust mixing and the simulation of reacting flow. The present implementation uses a passive chemistry model in which the reacting species are convected and diffused by the fluid dynamic solution but in which the evolution of the species does not affect the flow field. The resulting analysis, UNIWAKE/PCHEM (Passive CHEMistry) has been applied to the analysis of wake/exhaust flows downstream of representative HSCT configurations. The major elements of the flow field model are described, as are the results of sample calculations illustrating the behavior of the thermal exhaust plume and the production of species important to the modeling of condensation in the wake. Appropriate steps for further development of the UNIWAKE/PCHEM model are also outlined.
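
    To make the "passive chemistry" idea concrete: each species is convected and diffused by a prescribed flow field without feeding back on it. The following minimal 1-D sketch, with made-up parameters and a simple upwind/explicit scheme rather than the parabolic Navier-Stokes machinery of UNIWAKE, illustrates the principle:

```python
import numpy as np

# Minimal 1-D analogue of a "passive" species: the concentration c is
# advected and diffused by a prescribed flow, with no feedback on the flow.
nx, L = 200, 1.0
dx = L / nx
u, D = 1.0, 1e-3                          # prescribed velocity and diffusivity
dt = 0.4 * min(dx / u, dx**2 / (2 * D))   # stable step for upwind + diffusion

x = np.linspace(0.0, L, nx)
c = np.exp(-((x - 0.2) / 0.05) ** 2)      # initial exhaust-like pulse

for _ in range(400):
    adv = -u * (c - np.roll(c, 1)) / dx                         # upwind (u > 0)
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2  # central diffusion
    c = c + dt * (adv + dif)                                    # explicit Euler

print(f"peak concentration after transport: {c.max():.3f}")
```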

  15. Advanced computational methods for nodal diffusion, Monte Carlo, and S(sub N) problems

    NASA Astrophysics Data System (ADS)

    Martin, W. R.

    1993-01-01

    This document describes progress on five efforts for improving the effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometries that are better than those of conventional Monte Carlo with splitting and Russian Roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.
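
    For readers unfamiliar with the baseline that the exponential-transform results are compared against, the sketch below shows generic splitting and Russian roulette on particle weights. It is a schematic weight-window step with illustrative parameters, not the report's method:

```python
import random

def weight_window(particles, w_low=0.2, w_high=2.0, w_survive=1.0):
    """One generic splitting / Russian-roulette pass over (weight, state)
    pairs. Heavy particles are split into lighter copies; light particles
    play roulette. Both branches preserve the expected total weight, which
    is what keeps the estimator unbiased. Illustrative parameters only."""
    out = []
    for w, state in particles:
        if w > w_high:
            n = int(w / w_survive)              # split into n equal-weight copies
            out.extend([(w / n, state)] * n)
        elif w < w_low:
            if random.random() < w / w_survive:
                out.append((w_survive, state))  # survives with boosted weight
            # otherwise the particle is killed outright
        else:
            out.append((w, state))
    return out

random.seed(0)
print(weight_window([(3.0, "a"), (0.05, "b"), (1.0, "c")]))
```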

  16. Advanced computational methods for nodal diffusion, Monte Carlo, and S[sub N] problems

    SciTech Connect

    Martin, W.R.

    1993-01-01

    This document describes progress on five efforts for improving the effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometries that are better than those of conventional Monte Carlo with splitting and Russian Roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.

  17. Computer-assisted generation of individual training concepts for advanced education in manufacturing metrology

    NASA Astrophysics Data System (ADS)

    Werner, Teresa; Weckenmann, Albert

    2010-05-01

    Due to increasing requirements on the accuracy and reproducibility of measurement results, together with the rapid development of novel measurement technologies, there is a high demand for adequately qualified metrologists. Accordingly, a variety of training offers are provided by machine manufacturers, universities and other institutions. Yet, for an interested learner it is very difficult to define an optimal training schedule for his/her individual demands. Therefore, a computer-based assistance tool is developed to support a demand-responsive scheduling of training. Based on the difference between the actual and intended competence profiles, and taking changing requirements into account, an optimally customized qualification concept is derived. For this, available training offers are categorized along several dimensions: course content, intended target groups, the focus of the imparted competences, the learning and teaching methods used, expected constraints on learning, and necessary prior knowledge. After completing a course, the achieved competences and the transferability of gathered knowledge are evaluated. Based on the results, recommendations for supplementary learning measures are provided. Thus, a customized qualification for manufacturing metrology is facilitated, adapted to the specific needs and constraints of each individual learner.

  18. Advances in computed tomography and magnetic resonance imaging of hepatocellular carcinoma

    PubMed Central

    Hennedige, Tiffany; Venkatesh, Sudhakar K

    2016-01-01

    Hepatocellular carcinoma (HCC) is the most common primary liver cancer. Imaging is important for establishing a diagnosis of HCC, and early diagnosis is imperative as several potentially curative treatments are available when HCC is small. Hepatocarcinogenesis occurs in a stepwise manner on a background of chronic liver disease or cirrhosis wherein multiple genes are altered, resulting in a range of cirrhosis-associated nodules. This progression is related to increased cellularity, neovascularity and size of the nodule. An understanding of the stepwise progression may aid in early diagnosis. Dynamic and multiphase contrast-enhanced computed tomography and magnetic resonance imaging still form the cornerstone in the diagnosis of HCC. An overview of the current diagnostic standards of HCC in accordance with the more common practice guidelines, and their differences, will be reviewed. Ancillary features contribute to diagnostic confidence and have been incorporated into the more recent Liver Imaging Reporting and Data System. The use of hepatocyte-specific contrast agents is increasing and gradually changing the standard of diagnosis of HCC; the most significant benefit being the lack of uptake in the hepatocyte phase in the earlier stages of HCC progression. An outline of supplementary techniques in the imaging of HCC will also be reviewed. PMID:26755871

  19. Advancing global marine biogeography research with open-source GIS software and cloud-computing

    USGS Publications Warehouse

    Fujioka, Ei; Vanden Berghe, Edward; Donnelly, Ben; Castillo, Julio; Cleary, Jesse; Holmes, Chris; McKnight, Sean; Halpin, patrick

    2012-01-01

    Across many scientific domains, the ability to aggregate disparate datasets enables more meaningful global analyses. Within marine biology, the Census of Marine Life served as the catalyst for such a global data aggregation effort. Under the Census framework, the Ocean Biogeographic Information System (OBIS) was established to coordinate an unprecedented aggregation of global marine biogeography data. The OBIS data system now contains 31.3 million observations, freely accessible through a geospatial portal. The challenges of storing, querying, disseminating, and mapping a global data collection of this complexity and magnitude are significant. In the face of declining performance and expanding feature requests, a redevelopment of the OBIS data system was undertaken. Following an Open Source philosophy, the OBIS technology stack was rebuilt using PostgreSQL, PostGIS, GeoServer and OpenLayers. This approach has markedly improved the performance and online user experience while maintaining a standards-compliant and interoperable framework. Due to the distributed nature of the project and increasing needs for storage, scalability and deployment flexibility, the entire hardware and software stack was built on a Cloud Computing environment. The flexibility of the platform, combined with the power of the application stack, enabled rapid re-development of the OBIS infrastructure, and ensured complete standards-compliance.
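
    As a hedged illustration of the kind of spatial query such a PostgreSQL/PostGIS-backed portal serves (the table and column names below are hypothetical, not OBIS's actual schema):

```python
import psycopg2  # assumes a reachable PostgreSQL instance with PostGIS enabled

# Hypothetical schema: an "occurrences" table with a geometry column "geom".
QUERY = """
    SELECT species_name, COUNT(*) AS n_obs
    FROM occurrences
    WHERE ST_Within(geom, ST_MakeEnvelope(%s, %s, %s, %s, 4326))
    GROUP BY species_name
    ORDER BY n_obs DESC
    LIMIT 10;
"""

def top_species_in_bbox(conn_str, xmin, ymin, xmax, ymax):
    """Ten most-observed species inside a lon/lat bounding box (WGS 84)."""
    with psycopg2.connect(conn_str) as conn, conn.cursor() as cur:
        cur.execute(QUERY, (xmin, ymin, xmax, ymax))
        return cur.fetchall()

# e.g. top_species_in_bbox("dbname=obis", -75.0, 30.0, -60.0, 45.0)
```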

  20. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX).

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-06-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 - Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning. PMID:26217710
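
    A minimal sketch of the blocking idea, assuming a generic per-tile operation; Python/numpy can only show the loop reorganization, since the speedups reported in the article come from compiled, AVX-vectorized code:

```python
import numpy as np

def attenuate(tile):
    tile *= 0.5   # in-place work; the tile's data stays resident in cache

def blocked_apply(slice2d, block, fn):
    """Traverse a 2-D slice in cache-sized tiles, finishing all work on one
    tile before moving to the next. This is the structural idea behind cache
    blocking; real implementations tune `block` to the L1/L2 cache sizes."""
    ny, nx = slice2d.shape
    for j0 in range(0, ny, block):
        for i0 in range(0, nx, block):
            fn(slice2d[j0:j0 + block, i0:i0 + block])

data = np.random.rand(1024, 1024)
blocked_apply(data, 128, attenuate)   # 128x128 doubles fits comfortably in L2
```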

  1. Computer vision-based technologies and commercial best practices for the advancement of the motion imagery tradecraft

    NASA Astrophysics Data System (ADS)

    Phipps, Marja; Capel, David; Srinivasan, James

    2014-06-01

    Motion imagery capabilities within the Department of Defense/Intelligence Community (DoD/IC) have advanced significantly over the last decade, attempting to meet continuously growing data collection, video processing and analytical demands in operationally challenging environments. The motion imagery tradecraft has evolved accordingly, enabling teams of analysts to effectively exploit data and generate intelligence reports across multiple phases in structured Full Motion Video (FMV) Processing Exploitation and Dissemination (PED) cells. Yet now the operational requirements are drastically changing. The exponential growth in motion imagery data continues, but to this the community adds multi-INT data, interoperability with existing and emerging systems, expanded data access, nontraditional users, collaboration, automation, and support for ad hoc configurations beyond the current FMV PED cells. To break from the legacy system lifecycle, we look toward a technology-application and commercial-adoption model that will meet these future Intelligence, Surveillance and Reconnaissance (ISR) challenges. In this paper, we explore the application of cutting edge computer vision technology to meet existing FMV PED shortfalls and address future capability gaps. For example, real-time georegistration services developed from computer-vision-based feature tracking, multiple-view geometry, and statistical methods allow the fusion of motion imagery with other georeferenced information sources - providing unparalleled situational awareness. We then describe how these motion imagery capabilities may be readily deployed in a dynamically integrated analytical environment; employing an extensible framework, leveraging scalable enterprise-wide infrastructure and following commercial best practices.
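
    A minimal sketch of such a feature-based registration step, using standard OpenCV calls (ORB features plus RANSAC homography estimation) as a stand-in for the production georegistration services described above:

```python
import cv2
import numpy as np

def register_frame(frame, reference):
    """Estimate a homography aligning a video frame to a georeferenced
    reference image, via ORB feature matching and RANSAC. Expects 8-bit
    grayscale images; a schematic stand-in for a full georegistration
    pipeline, which would add tracking, filtering and error modeling."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(frame, None)
    k2, d2 = orb.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # maps frame pixels into the reference (georeferenced) grid
```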

  2. National Computational Infrastructure for Lattice Gauge Theory

    SciTech Connect

    Reed, Daniel, A

    2008-05-30

    In this document we describe work done under the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory. The objective of this project was to construct the computational infrastructure needed to study quantum chromodynamics (QCD). Nearly all high energy and nuclear physicists in the United States working on the numerical study of QCD are involved in the project, as are Brookhaven National Laboratory (BNL), Fermi National Accelerator Laboratory (FNAL), and Thomas Jefferson National Accelerator Facility (JLab). A list of the senior participants is given in Appendix A.2. The project includes the development of community software for the effective use of terascale computers, and the research and development of commodity clusters optimized for the study of QCD. The software developed as part of this effort is publicly available, and is being widely used by physicists in the United States and abroad. The prototype clusters built with SciDAC-1 funds have been used to test the software, and are available to lattice gauge theorists in the United States on a peer-reviewed basis.

  3. Advancing the detection of steady-state visual evoked potentials in brain–computer interfaces

    NASA Astrophysics Data System (ADS)

    Abu-Alqumsan, Mohammad; Peer, Angelika

    2016-06-01

    Objective. Spatial filtering has proved to be a powerful pre-processing step in detection of steady-state visual evoked potentials and boosted typical detection rates both in offline analysis and online SSVEP-based brain–computer interface applications. State-of-the-art detection methods and the spatial filters used thereby share many common foundations as they all build upon the second order statistics of the acquired Electroencephalographic (EEG) data, that is, its spatial autocovariance and cross-covariance with what is assumed to be a pure SSVEP response. The present study aims at highlighting the similarities and differences between these methods. Approach. We consider the canonical correlation analysis (CCA) method as a basis for the theoretical and empirical (with real EEG data) analysis of the state-of-the-art detection methods and the spatial filters used thereby. We build upon the findings of this analysis and prior research and propose a new detection method (CVARS) that combines the power of the canonical variates and that of the autoregressive spectral analysis in estimating the signal and noise power levels. Main results. We found that the multivariate synchronization index method and the maximum contrast combination method are variations of the CCA method. All three methods were found to provide relatively unreliable detections in low signal-to-noise ratio (SNR) regimes. CVARS and the minimum energy combination methods were found to provide better estimates for different SNR levels. Significance. Our theoretical and empirical results demonstrate that the proposed CVARS method outperforms other state-of-the-art detection methods when used in an unsupervised fashion. Furthermore, when used in a supervised fashion, a linear classifier learned from a short training session is able to estimate the hidden user intention, including the idle state (when the user is not attending to any stimulus), rapidly, accurately and reliably.
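
    To make the CCA baseline concrete, here is a minimal sketch (using scikit-learn, with hypothetical trial data) of scoring one stimulus frequency against multichannel EEG; the proposed CVARS method adds autoregressive spectral estimation of signal and noise power on top of such canonical variates and is not reproduced here:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_ssvep_score(eeg, stim_freq, fs, n_harmonics=2):
    """Canonical correlation between multichannel EEG (n_samples x n_channels)
    and sine/cosine references at the stimulus frequency and its harmonics.
    The attended stimulus is expected to yield the largest correlation."""
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack(
        [f(2 * np.pi * h * stim_freq * t)
         for h in range(1, n_harmonics + 1)
         for f in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit_transform(eeg, refs)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# Detection: score a trial against each candidate frequency, pick the maximum.
# scores = {f: cca_ssvep_score(trial, f, fs=250) for f in (8.0, 10.0, 12.0)}
```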

  4. Advanced computational techniques for incompressible/compressible fluid-structure interactions

    NASA Astrophysics Data System (ADS)

    Kumar, Vinod

    2005-07-01

    Fluid-Structure Interaction (FSI) problems are of great importance to many fields of engineering and pose tremendous challenges to numerical analysts. This thesis addresses some of the hurdles faced in both 2D and 3D real-life time-dependent FSI problems, with particular emphasis on parachute systems. The techniques developed here would help improve the design of parachutes and are of direct relevance to several other FSI problems. The fluid system is solved using the Deforming-Spatial-Domain/Stabilized Space-Time (DSD/SST) finite element formulation for the Navier-Stokes equations of incompressible and compressible flows. The structural dynamics solver is based on a total Lagrangian finite element formulation. The Newton-Raphson method is employed to linearize the otherwise nonlinear system resulting from the fluid and structure formulations. The fluid and structural systems are solved in a decoupled fashion at each nonlinear iteration. While rigorous coupling methods are desirable for FSI simulations, the decoupled solution techniques provide sufficient convergence in the time-dependent problems considered here. In this thesis, common problems in the FSI simulations of parachutes are discussed and possible remedies for a few of them are presented. Further, the effects of the porosity model on the aerodynamic forces of round parachutes are analyzed. Techniques for solving compressible FSI problems are also discussed. Subsequently, a better stabilization technique is proposed to efficiently capture and accurately predict the shocks in supersonic flows. The numerical examples simulated here require high performance computing. Therefore, numerical tools using distributed memory supercomputers with message passing interface (MPI) libraries were developed.
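
    A toy single-degree-of-freedom illustration of the decoupled (partitioned) strategy, with made-up "fluid" and "structure" solves standing in for the DSD/SST and Lagrangian formulations; all numbers are illustrative:

```python
def fluid_load(d, p0=1.0, c=0.4):
    """Stand-in 'fluid solve': interface pressure for a given deflection d."""
    return p0 - c * d

def structure_deflection(p, k=2.0):
    """Stand-in 'structure solve': deflection of a linear spring under load p."""
    return p / k

# Decoupled iteration: solve fluid and structure in turn and iterate the
# interface state to convergence, instead of assembling one monolithic system.
d = 0.0
for it in range(50):
    p = fluid_load(d)                 # 1) fluid solve with the current geometry
    d_new = structure_deflection(p)   # 2) structure solve under the new load
    converged = abs(d_new - d) < 1e-10
    d = d_new
    if converged:                     # 3) interface convergence check
        break

print(f"converged deflection d = {d:.6f} after {it + 1} iterations")
# exact coupled solution: p0 / (k + c) = 1.0 / 2.4 ≈ 0.416667
```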

  5. Developing advanced X-ray scattering methods combined with crystallography and computation.

    PubMed

    Perry, J Jefferson P; Tainer, John A

    2013-03-01

    The extensive use of small angle X-ray scattering (SAXS) over the last few years is rapidly providing new insights into protein interactions, complex formation and conformational states in solution. This SAXS methodology allows for detailed biophysical quantification of samples of interest. Initial analyses provide a judgment of sample quality, revealing the potential presence of aggregation, the overall extent of folding or disorder, the radius of gyration, maximum particle dimensions and oligomerization state. Structural characterizations include ab initio approaches from SAXS data alone, and when combined with previously determined crystal/NMR structures, atomistic modeling can further enhance structural solutions and assess validity. This combination can provide definitions of architectures, spatial organizations of protein domains within a complex, including those not determined by crystallography or NMR, as well as defining key conformational states of a protein interaction. SAXS is not generally constrained by macromolecule size, and the rapid collection of data in a 96-well plate format provides methods to screen sample conditions. This includes screening for co-factors, substrates, differing protein or nucleotide partners or small molecule inhibitors, to more fully characterize the variations within assembly states and key conformational changes. Such analyses may be useful for screening constructs and conditions to determine those most likely to promote crystal growth of a complex under study. Moreover, these high throughput structural determinations can be leveraged to define how polymorphisms affect assembly formations and activities. This is in addition to potentially providing architectural characterizations of complexes and interactions for systems biology-based research, and distinctions in assemblies and interactions in comparative genomics. Thus, SAXS combined with crystallography/NMR and computation provides a unique set of tools that should be considered
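
    As one concrete example of the initial analyses mentioned, the radius of gyration follows from a Guinier fit to the low-q region, ln I(q) = ln I0 - (Rg^2/3) q^2. A minimal sketch on synthetic data; production SAXS pipelines add many quality checks omitted here:

```python
import numpy as np

def guinier_rg(q, intensity, qmax_rg=1.3):
    """Radius of gyration from the Guinier region of a SAXS curve, by a
    linear fit of ln I(q) against q^2."""
    slope, _intercept = np.polyfit(q**2, np.log(intensity), 1)
    rg = np.sqrt(-3.0 * slope)
    # Guinier fits are only trusted where q * Rg stays small (roughly < 1.3).
    assert q.max() * rg < qmax_rg, "fit range extends beyond the Guinier regime"
    return rg

# Synthetic test: a particle with Rg = 20 Angstrom.
q = np.linspace(0.005, 0.06, 40)              # 1/Angstrom
I = 100.0 * np.exp(-(20.0**2 / 3.0) * q**2)
print(f"recovered Rg = {guinier_rg(q, I):.2f} A")   # ~20.00
```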

  6. Advancing the detection of steady-state visual evoked potentials in brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Abu-Alqumsan, Mohammad; Peer, Angelika

    2016-06-01

    Objective. Spatial filtering has proved to be a powerful pre-processing step in detection of steady-state visual evoked potentials and boosted typical detection rates both in offline analysis and online SSVEP-based brain-computer interface applications. State-of-the-art detection methods and the spatial filters used thereby share many common foundations as they all build upon the second order statistics of the acquired Electroencephalographic (EEG) data, that is, its spatial autocovariance and cross-covariance with what is assumed to be a pure SSVEP response. The present study aims at highlighting the similarities and differences between these methods. Approach. We consider the canonical correlation analysis (CCA) method as a basis for the theoretical and empirical (with real EEG data) analysis of the state-of-the-art detection methods and the spatial filters used thereby. We build upon the findings of this analysis and prior research and propose a new detection method (CVARS) that combines the power of the canonical variates and that of the autoregressive spectral analysis in estimating the signal and noise power levels. Main results. We found that the multivariate synchronization index method and the maximum contrast combination method are variations of the CCA method. All three methods were found to provide relatively unreliable detections in low signal-to-noise ratio (SNR) regimes. CVARS and the minimum energy combination methods were found to provide better estimates for different SNR levels. Significance. Our theoretical and empirical results demonstrate that the proposed CVARS method outperforms other state-of-the-art detection methods when used in an unsupervised fashion. Furthermore, when used in a supervised fashion, a linear classifier learned from a short training session is able to estimate the hidden user intention, including the idle state (when the user is not attending to any stimulus), rapidly, accurately and reliably.

  7. Developing advanced x-ray scattering methods combined with crystallography and computation

    PubMed Central

    Perry, J. Jefferson P.; Tainer, John A.

    2013-01-01

    The extensive use of small angle x-ray scattering (SAXS) over the last few years is rapidly providing new insights into protein interactions, complex formation and conformational states in solution. This SAXS methodology allows for detailed biophysical quantification of samples of interest. Initial analyses provide a judgment of sample quality, revealing the potential presence of aggregation, the overall extent of folding or disorder, the radius of gyration, maximum particle dimensions and oligomerization state. Structural characterizations include ab initio approaches from SAXS data alone, and when combined with previously determined crystal/NMR structures, atomistic modeling can further enhance structural solutions and assess validity. This combination can provide definitions of architectures, spatial organizations of protein domains within a complex, including those not determined by crystallography or NMR, as well as defining key conformational states of a protein interaction. SAXS is not generally constrained by macromolecule size, and the rapid collection of data in a 96-well plate format provides methods to screen sample conditions. This includes screening for co-factors, substrates, differing protein or nucleotide partners or small molecule inhibitors, to more fully characterize the variations within assembly states and key conformational changes. Such analyses may be useful for screening constructs and conditions to determine those most likely to promote crystal growth of a complex under study. Moreover, these high throughput structural determinations can be leveraged to define how polymorphisms affect assembly formations and activities. This is in addition to potentially providing architectural characterizations of complexes and interactions for systems biology-based research, and distinctions in assemblies and interactions in comparative genomics. Thus, SAXS combined with crystallography/NMR and computation provides a unique set of tools that should be considered

  8. Advanced Computational Approaches for Characterizing Stochastic Cellular Responses to Low Dose, Low Dose Rate Exposures

    SciTech Connect

    Scott, Bobby, R., Ph.D.

    2003-06-27

    OAK - B135 This project final report summarizes modeling research conducted in the U.S. Department of Energy (DOE), Low Dose Radiation Research Program at the Lovelace Respiratory Research Institute from October 1998 through June 2003. The modeling research described involves critically evaluating the validity of the linear nonthreshold (LNT) risk model as it relates to stochastic effects induced in cells by low doses of ionizing radiation and genotoxic chemicals. The LNT model plays a central role in low-dose risk assessment for humans. With the LNT model, any radiation (or genotoxic chemical) exposure is assumed to increase one's risk of cancer. Based on the LNT model, others have predicted tens of thousands of cancer deaths related to environmental exposure to radioactive material from nuclear accidents (e.g., Chernobyl) and fallout from nuclear weapons testing. Our research has focused on developing biologically based models that explain the shape of dose-response curves for low-dose radiation and genotoxic chemical-induced stochastic effects in cells. Understanding the shape of the dose-response curve for radiation and genotoxic chemical-induced stochastic effects in cells helps to better understand the shape of the dose-response curve for cancer induction in humans. We have used a modeling approach that facilitated model revisions over time, allowing for timely incorporation of new knowledge gained related to the biological basis for low-dose-induced stochastic effects in cells. Both deleterious (e.g., genomic instability, mutations, and neoplastic transformation) and protective (e.g., DNA repair and apoptosis) effects have been included in our modeling. Our most advanced model, NEOTRANS2, involves differing levels of genomic instability. Persistent genomic instability is presumed to be associated with nonspecific, nonlethal mutations and to increase both the risk for neoplastic transformation and for cancer occurrence. Our research results, based on

  9. Sensing with Advanced Computing Technology: Fin Field-Effect Transistors with High-k Gate Stack on Bulk Silicon.

    PubMed

    Rigante, Sara; Scarbolo, Paolo; Wipf, Mathias; Stoop, Ralph L; Bedner, Kristine; Buitrago, Elizabeth; Bazigos, Antonios; Bouvet, Didier; Calame, Michel; Schönenberger, Christian; Ionescu, Adrian M

    2015-05-26

    Field-effect transistors (FETs) form an established technology for sensing applications. However, recent advancements and use of high-performance multigate metal-oxide semiconductor FETs (double-gate, FinFET, trigate, gate-all-around) in computing technology, instead of bulk MOSFETs, raise new opportunities and questions about the most suitable device architectures for sensing integrated circuits. In this work, we propose pH and ion sensors exploiting FinFETs fabricated on bulk silicon by a fully CMOS compatible approach, as an alternative to the widely investigated silicon nanowires on silicon-on-insulator substrates. We also provide an analytical insight of the concept of sensitivity for the electronic integration of sensors. N-channel fully depleted FinFETs with critical dimensions on the order of 20 nm and HfO2 as a high-k gate insulator have been developed and characterized, showing excellent electrical properties, subthreshold swing, SS ∼ 70 mV/dec, and on-to-off current ratio, Ion/Ioff ∼ 10^6, at room temperature. The same FinFET architecture is validated as a highly sensitive, stable, and reproducible pH sensor. An intrinsic sensitivity close to the Nernst limit, S = 57 mV/pH, is achieved. The pH response in terms of output current reaches Sout = 60%. Long-term measurements have been performed over 4.5 days with a resulting drift in time δVth/δt = 0.10 mV/h. Finally, we show the capability to reproduce experimental data with an extended three-dimensional commercial finite element analysis simulator, in both dry and wet environments, which is useful for future advanced sensor design and optimization.
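
    The Nernst limit that the reported 57 mV/pH approaches follows from fundamental constants, S = ln(10) kT/q per pH unit; a two-line check assuming room temperature:

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
q   = 1.602176634e-19  # elementary charge, C
T   = 298.15           # room temperature, K

S = math.log(10) * k_B * T / q
print(f"Nernst limit at {T} K: {S * 1e3:.1f} mV/pH")  # ~59.2 mV/pH
```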

  10. Sensing with Advanced Computing Technology: Fin Field-Effect Transistors with High-k Gate Stack on Bulk Silicon.

    PubMed

    Rigante, Sara; Scarbolo, Paolo; Wipf, Mathias; Stoop, Ralph L; Bedner, Kristine; Buitrago, Elizabeth; Bazigos, Antonios; Bouvet, Didier; Calame, Michel; Schönenberger, Christian; Ionescu, Adrian M

    2015-05-26

    Field-effect transistors (FETs) form an established technology for sensing applications. However, recent advancements and use of high-performance multigate metal-oxide semiconductor FETs (double-gate, FinFET, trigate, gate-all-around) in computing technology, instead of bulk MOSFETs, raise new opportunities and questions about the most suitable device architectures for sensing integrated circuits. In this work, we propose pH and ion sensors exploiting FinFETs fabricated on bulk silicon by a fully CMOS compatible approach, as an alternative to the widely investigated silicon nanowires on silicon-on-insulator substrates. We also provide an analytical insight of the concept of sensitivity for the electronic integration of sensors. N-channel fully depleted FinFETs with critical dimensions on the order of 20 nm and HfO2 as a high-k gate insulator have been developed and characterized, showing excellent electrical properties, subthreshold swing, SS ∼ 70 mV/dec, and on-to-off current ratio, Ion/Ioff ∼ 10(6), at room temperature. The same FinFET architecture is validated as a highly sensitive, stable, and reproducible pH sensor. An intrinsic sensitivity close to the Nernst limit, S = 57 mV/pH, is achieved. The pH response in terms of output current reaches Sout = 60%. Long-term measurements have been performed over 4.5 days with a resulting drift in time δVth/δt = 0.10 mV/h. Finally, we show the capability to reproduce experimental data with an extended three-dimensional commercial finite element analysis simulator, in both dry and wet environments, which is useful for future advanced sensor design and optimization. PMID:25817336

  11. Fluid/Structure Interaction Computational Investigation of Blast-Wave Mitigation Efficacy of the Advanced Combat Helmet

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Bell, W. C.; Pandurangan, B.; Glomski, P. S.

    2011-08-01

    To combat the problem of traumatic brain injury (TBI), a signature injury of the current military conflicts, there is an urgent need to design head protection systems with superior blast/ballistic impact mitigation capabilities. Toward that end, the blast impact mitigation performance of an advanced combat helmet (ACH) head protection system equipped with polyurea suspension pads and subjected to two different blast peak pressure loadings has been investigated computationally. A fairly detailed (Lagrangian) finite-element model of a helmet/skull/brain assembly is first constructed and placed into an Eulerian air domain through which a single planar blast wave propagates. A combined Eulerian/Lagrangian transient nonlinear dynamics computational fluid/solid interaction analysis is next conducted in order to assess the extent of reduction in intra-cranial shock-wave ingress (responsible for TBI). This was done by comparing temporal evolutions of intra-cranial normal and shear stresses for the cases of an unprotected head and the helmet-protected head and by correlating these quantities with the three most common types of mild traumatic brain injury (mTBI), i.e., axonal damage, contusion, and subdural hemorrhage. The results obtained show that the ACH provides some level of protection against all investigated types of mTBI and that the level of protection increases somewhat with an increase in blast peak pressure. In order to rationalize the aforementioned findings, a shockwave propagation/reflection analysis is carried out for the unprotected head and helmet-protected head cases. The analysis qualitatively corroborated the results pertaining to the blast-mitigation efficacy of an ACH, but also suggested that there are additional shockwave energy dissipation phenomena which play an important role in the mechanical response of the unprotected/protected head to blast impact.

  12. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

    SciTech Connect

    Saffer, Shelley I.

    2014-12-01

    This is a final report of the DOE award DE-SC0001132, Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievements of the goals, and resulting research made possible by this award.

  13. TAIGA: Twente Advanced Interactive Graphic Authoring System. A New Concept in Computer Assisted Learning (CAL) and Educational Research. Doc 88-18.

    ERIC Educational Resources Information Center

    Pilot, A.

    TAIGA (Twente Advanced Interactive Graphic Authoring system) is a system which can be used to develop instructional software. It is written in MS-PASCAL, and runs on computers that support MS-DOS. Designed to support the production of structured software, TAIGA has a hierarchical structure of three layers, each with a specific function, and each…

  14. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan : ASC software quality engineering practices Version 3.0.

    SciTech Connect

    Turgeon, Jennifer L.; Minana, Molly A.; Hackney, Patricia; Pilch, Martin M.

    2009-01-01

    The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in the US Department of Energy/National Nuclear Security Agency (DOE/NNSA) Quality Criteria, Revision 10 (QC-1) as 'conformance to customer requirements and expectations'. This quality plan defines the SNL ASC Program software quality engineering (SQE) practices and provides a mapping of these practices to the SNL Corporate Process Requirement (CPR) 001.3.6, 'Corporate Software Engineering Excellence'. This plan also identifies ASC management's and the software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals. This SNL ASC Software Quality Plan establishes the signatories' commitments to improving software products by applying cost-effective SQE practices. This plan enumerates the SQE practices that comprise the development of SNL ASC's software products and explains the project teams' opportunities for tailoring and implementing the practices.

  15. Advanced Computing for Manufacturing.

    ERIC Educational Resources Information Center

    Erisman, Albert M.; Neves, Kenneth W.

    1987-01-01

    Discusses ways that supercomputers are being used in the manufacturing industry, including the design and production of airplanes and automobiles. Describes problems that need to be solved in the next few years for supercomputers to assume a major role in industry. (TW)

  16. Development and validation of burnup dependent computational schemes for the analysis of assemblies with advanced lattice codes

    NASA Astrophysics Data System (ADS)

    Ramamoorthy, Karthikeyan

    The main aim of this research is the development and validation of computational schemes for advanced lattice codes. The advanced lattice code which forms the primary part of this research is "DRAGON Version4". The code has unique features like self-shielding calculation with capabilities to represent distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effect, availability of the method of characteristics (MOC), burnup calculation with reaction-detailed energy production, etc. Qualified reactor physics codes are essential for the study of all existing and envisaged designs of nuclear reactors. Any new design would require a thorough analysis of all the safety parameters and burnup dependent behaviour. Any reactor physics calculation requires the estimation of neutron fluxes in various regions of the problem domain. The calculation goes through several levels before the desired solution is obtained. Each level of the lattice calculation has its own significance, and any compromise at any step will lead to a poor final result. The various levels include: choice of nuclear data library and the energy group boundaries into which the multigroup library is cast; self shielding of nuclear data depending on the heterogeneous geometry and composition; tracking of geometry, keeping errors in volume and surface to an acceptable minimum; generation of regionwise and groupwise collision probabilities or MOC-related information and their subsequent normalization; solution of the transport equation using the previously generated groupwise information to obtain the fluxes and reaction rates in various regions of the lattice; and depletion of fuel and of other materials based on normalization with constant power or constant flux. Of the above mentioned levels, the present research will mainly focus on two aspects, namely self shielding and depletion. The behaviour of the system is determined by composition of resonant
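
    To make the depletion level concrete: over a burnup step at constant flux, the nuclide densities obey the linear Bateman equations dN/dt = A N, which can be advanced with a matrix exponential. The sketch below uses a made-up two-nuclide chain with illustrative numbers; it is not DRAGON's depletion module:

```python
import numpy as np
from scipy.linalg import expm

# Schematic depletion step for a two-nuclide chain under constant flux:
# nuclide 0 is destroyed by neutron absorption, feeding nuclide 1, which decays.
# dN/dt = A N  =>  N(t + dt) = expm(A * dt) @ N(t)   (Bateman equations)
phi     = 3e14      # neutron flux, n/cm^2/s           (illustrative)
sigma_a = 100e-24   # absorption cross section of nuclide 0, cm^2
lam     = 1e-5      # decay constant of nuclide 1, 1/s

A = np.array([[-sigma_a * phi, 0.0],
              [ sigma_a * phi, -lam]])

N = np.array([1.0e21, 0.0])   # initial number densities, 1/cm^3
dt = 30 * 86400.0             # one 30-day burnup step, in seconds
print(expm(A * dt) @ N)       # densities at the end of the step
```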

  17. Final Report DOE Grant No. DE-FG03-01ER54617 Computer Modeling of Microturbulence and Macrostability Properties of Magnetically Confined Plasmas

    SciTech Connect

    Jean-Noel Leboeuf

    2004-03-04

    OAK-B135 We have made significant progress during the past grant period in several key areas of the UCLA and national Fusion Theory Program. This impressive body of work includes both fundamental and applied contributions to MHD and turbulence in DIII-D and Electric Tokamak plasmas, and also to Z-pinches, particularly with respect to the effect of flows on these phenomena. We have successfully carried out interpretive and predictive global gyrokinetic particle-in-cell calculations of DIII-D discharges. We have cemented our participation in the gyrokinetic PIC effort of the SciDAC Plasma Microturbulence Project through working membership in the Summit Gyrokinetic PIC Team. We have continued to teach advanced courses at UCLA pertaining to computational plasma physics and to foster interaction with students and junior researchers. We have in fact graduated two Ph.D. students during the past grant period. The research carried out during that time has resulted in many publications in the premier plasma physics and fusion energy sciences journals and in several invited oral communications at major conferences such as Sherwood, Transport Task Force (TTF), the annual meetings of the Division of Plasma Physics of the American Physical Society, of the European Physical Society, and the 2002 IAEA Fusion Energy Conference, FEC 2002. Many of these have been authored and co-authored with experimentalists at DIII-D.

  18. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    SciTech Connect

    Brown, D L

    2009-05-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems
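
    As a small illustration of the "understanding network structure" theme, the sketch below probes a synthetic scale-free network's robustness to targeted node removal (networkx, illustrative parameters only):

```python
import networkx as nx

def robustness_curve(G, fraction=0.2):
    """Size of the largest connected component as the highest-degree nodes
    are removed one by one: a basic probe of robustness to targeted attack."""
    G = G.copy()
    sizes = []
    for _ in range(int(fraction * G.number_of_nodes())):
        hub = max(G.degree, key=lambda nd: nd[1])[0]  # current highest-degree node
        G.remove_node(hub)
        sizes.append(max(len(c) for c in nx.connected_components(G)))
    return sizes

G = nx.barabasi_albert_graph(500, 3, seed=1)  # scale-free test network
print(robustness_curve(G)[:5])                # shrinking giant component
```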

  19. Integrating Remote Sensing Data, Hybrid-Cloud Computing, and Event Notifications for Advanced Rapid Imaging & Analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Fielding, E. J.; Agram, P.; Manipon, G.; Stough, T. M.; Simons, M.; Rosen, P. A.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.

    2013-12-01

    Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Continuous Global Positioning System (CGPS) are now important elements in our toolset for monitoring earthquake-generating faults, volcanic eruptions, hurricane damage, landslides, reservoir subsidence, and other natural and man-made hazards. Geodetic imaging's unique ability to capture surface deformation with high spatial and temporal resolution has revolutionized both earthquake science and volcanology. Continuous monitoring of surface deformation and surface change before, during, and after natural hazards improves decision-making from better forecasts, increased situational awareness, and more informed recovery. However, analyses of InSAR and GPS data sets are currently handcrafted following events and are not generated rapidly and reliably enough for use in operational response to natural disasters. Additionally, the sheer volume of data in a continuous stream of InSAR data sets presents a bottleneck. It has been estimated that continuous processing of InSAR coverage of California alone over 3 years would reach PB-scale data volumes. Our Advanced Rapid Imaging and Analysis for Monitoring Hazards (ARIA-MH) science data system enables both science and decision-making communities to monitor areas of interest with derived geodetic data products via seamless data preparation, processing, discovery, and access. We will present our findings on the use of hybrid-cloud computing to improve the timely processing and delivery of geodetic data products, integrating event notifications from USGS to improve the timely processing for response, as well as providing browse results for quick looks with other tools for integrative analysis.

  20. Tritium Dose Assessments with Regulatory and Advanced Computer Models for the Potential European ITER Site Vandellos (Spain)

    SciTech Connect

    Raskob, W.; Velarde, M.; Perlado, J.M

    2005-07-15

    Deterministic and probabilistic dose assessments for releases of tritium have been performed for the potential European ITER Site of Vandellos (Spain). Besides national regulatory models, internationally accepted computer codes such as NORMTRI (for normal conditions) and UFOTRI (for incidental/accidental conditions) were used for the calculations. The paper concentrates on releases of tritium in either HT or HTO form. Source terms from the ITER documentation (GSSR vol. IV and VII) have been used for the HT/HTO releases. The database of NORMTRI/UFOTRI was adapted to the national regulatory prescriptions. This comprised in particular ingestion habits and dose conversion factors. The selection of meteorological, demographic, nutritional and agricultural data was also important for the calculations. Meteorological data over a period of one year was used for the probabilistic calculations. Deterministic scenarios were selected to be as close as possible to other studies performed in the framework of ITER. The results of the assessments were early and chronic doses, evaluated for the Most Exposed Individual at particular distance bands from the release point. Of particular importance was the comparison between the regulatory and the advanced assessment models. Regulatory models for tritium are sometimes simplistic and are either too conservative or do not consider important processes, which might lead to underestimation of the dose. This is for example the case with organically bound tritium, which is often not considered in regulatory models but may dominate the dose from ingestion pathways. Therefore, this comparison provided the opportunity to evaluate the appropriateness of a nationally accepted tool. As the site of ITER was still to be defined, such a comparison was vital and might also be necessary for any other site to assure public confidence in the licensing procedure.

  1. Development and validation of burnup dependent computational schemes for the analysis of assemblies with advanced lattice codes

    NASA Astrophysics Data System (ADS)

    Ramamoorthy, Karthikeyan

    The main aim of this research is the development and validation of computational schemes for advanced lattice codes. The advanced lattice code which forms the primary part of this research is "DRAGON Version4". The code has unique features like self-shielding calculation with capabilities to represent distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effect, availability of the method of characteristics (MOC), burnup calculation with reaction-detailed energy production, etc. Qualified reactor physics codes are essential for the study of all existing and envisaged designs of nuclear reactors. Any new design would require a thorough analysis of all the safety parameters and burnup dependent behaviour. Any reactor physics calculation requires the estimation of neutron fluxes in various regions of the problem domain. The calculation goes through several levels before the desired solution is obtained. Each level of the lattice calculation has its own significance, and any compromise at any step will lead to a poor final result. The various levels include: choice of nuclear data library and the energy group boundaries into which the multigroup library is cast; self shielding of nuclear data depending on the heterogeneous geometry and composition; tracking of geometry, keeping errors in volume and surface to an acceptable minimum; generation of regionwise and groupwise collision probabilities or MOC-related information and their subsequent normalization; solution of the transport equation using the previously generated groupwise information to obtain the fluxes and reaction rates in various regions of the lattice; and depletion of fuel and of other materials based on normalization with constant power or constant flux. Of the above mentioned levels, the present research will mainly focus on two aspects, namely self shielding and depletion. The behaviour of the system is determined by composition of resonant

  2. Using an Advance Organizer to Improve Knowledge Application by Medical Students in Computer-Based Clinical Simulations.

    ERIC Educational Resources Information Center

    Krahn, Corrie G.; Blanchaer, Marcel C.

    1986-01-01

    This study investigated the efficacy of using the advance organizer as a device to improve medical students' understanding of a clinical case simulation on the microcomputer and to enhance performance on a posttest. Advance organizers were found to be effective and most consistent with Mayer's assimilation theory. (MBR)

  3. Advancing a distributed multi-scale computing framework for large-scale high-throughput discovery in materials science.

    PubMed

    Knap, J; Spear, C E; Borodin, O; Leiter, K W

    2015-10-30

    We describe the development of a large-scale high-throughput application for discovery in materials science. Our point of departure is a computational framework for distributed multi-scale computation. We augment the original framework with a specialized module whose role is to route evaluation requests needed by the high-throughput application to a collection of available computational resources. We evaluate the feasibility and performance of the resulting high-throughput computational framework by carrying out a high-throughput study of battery solvents. Our results indicate that distributed multi-scale computing, by virtue of its adaptive nature, is particularly well-suited for building high-throughput applications.
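
    A minimal sketch of the routing idea described here, assuming worker threads stand in for the distributed computational resources and a queue stands in for the specialized routing module; the names and the fake "evaluation" are illustrative, not part of the authors' framework.

        import queue
        import threading

        def route(requests, n_resources=4):
            work = queue.Queue()
            results = {}

            def worker():
                while True:
                    item = work.get()
                    if item is None:              # shutdown sentinel
                        break
                    req_id, solvent = item
                    # Stand-in for a multi-scale property evaluation.
                    results[req_id] = f"evaluated {solvent}"
                    work.task_done()

            threads = [threading.Thread(target=worker) for _ in range(n_resources)]
            for t in threads:
                t.start()
            for req in requests:                  # route requests to free resources
                work.put(req)
            work.join()
            for _ in threads:
                work.put(None)
            for t in threads:
                t.join()
            return results

        print(route([(i, f"solvent-{i}") for i in range(10)]))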

  4. Computational Aerodynamic Simulations of an 840 ft/sec Tip Speed Advanced Ducted Propulsor Fan System Model for Acoustic Methods Assessment and Development

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.

    2014-01-01

    Computational aerodynamic simulations of an 840 ft/sec tip speed Advanced Ducted Propulsor fan system were performed at five different operating points on the fan operating line in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center, resulting in high-quality, detailed aerodynamic and acoustic measurement data. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating conditions simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, excluding a long core duct section downstream of the core inlet guide vane. As a result, only the fan rotational speed and the system bypass ratio, set by specifying the static pressure downstream of the core inlet guide vane row, were adjusted to set the fan operating point, leading to operating points that lie on a fan operating line and making the mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measured values. The computed blade row flow fields for all five fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive boundary layer separations or related secondary-flow problems. A few spanwise comparisons between

  5. Advances in three-dimensional coronary imaging and computational fluid dynamics: is virtual fractional flow reserve more than just a pretty picture?

    PubMed

    Poon, Eric K W; Hayat, Umair; Thondapu, Vikas; Ooi, Andrew S H; Ul Haq, Muhammad Asrar; Moore, Stephen; Foin, Nicolas; Tu, Shengxian; Chin, Cheng; Monty, Jason P; Marusic, Ivan; Barlis, Peter

    2015-08-01

    Percutaneous coronary intervention (PCI) has shown a high success rate in the treatment of coronary artery disease. The decision to perform PCI often relies on the cardiologist's visual interpretation of coronary lesions during angiography. This has inherent limitations, particularly due to the low resolution and two-dimensional nature of angiography. State-of-the-art modalities such as three-dimensional quantitative coronary angiography, optical coherence tomography and invasive fractional flow reserve (FFR) may improve clinicians' understanding of both the anatomical and physiological importance of coronary lesions. While invasive FFR is the gold-standard technique for assessing the haemodynamic significance of coronary lesions, recent studies have explored a surrogate for FFR derived solely from three-dimensional reconstruction of the invasive angiogram, thereby eliminating the need for a pressure wire. Built on advanced computational fluid dynamics research, this virtual fractional flow reserve (vFFR) has demonstrated reasonable correlation with invasive measurements and remains an intense area of ongoing study. However, at present, several limitations and computational fluid dynamics assumptions may preclude vFFR from widespread clinical use. This review demonstrates the tight integration of advanced three-dimensional imaging techniques and vFFR in assessing coronary artery disease, reviews the advantages and disadvantages of such techniques, and attempts to provide a glimpse of how such advances may benefit future clinical decision-making during PCI. PMID:26247271
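
    For readers unfamiliar with the index itself: FFR is conventionally the ratio of mean distal coronary pressure to mean aortic pressure under hyperaemia, with values at or below about 0.80 commonly treated as haemodynamically significant; in a vFFR workflow the distal pressure comes from a CFD solution rather than from a pressure wire. A trivial Python sketch with placeholder pressures:

        # FFR = Pd / Pa; in vFFR, Pd is CFD-computed. Values are placeholders.
        def ffr(p_distal_mmhg: float, p_aortic_mmhg: float) -> float:
            return p_distal_mmhg / p_aortic_mmhg

        pa = 93.0        # mean aortic pressure from the catheter, mmHg
        pd_cfd = 71.0    # CFD-computed mean pressure distal to the lesion, mmHg
        value = ffr(pd_cfd, pa)
        verdict = "significant" if value <= 0.80 else "non-significant"
        print(f"vFFR = {value:.2f} -> {verdict}")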

  6. SciDAC - Center for Simulation of Wave Interactions with MHD -- General Atomics Support of ORNL Collaboration

    SciTech Connect

    Abla, G

    2012-11-09

    The Center for Simulation of Wave Interactions with Magnetohydrodynamics (SWIM) project is dedicated to research on integrated multi-physics simulations. The Integrated Plasma Simulator (IPS) is a framework created by the SWIM team. It provides an integration infrastructure for loosely coupled component-based simulations by offering services for code execution coordination, computational resource management, data management, and inter-component communication. The IPS framework features improved resource utilization, application-level fault tolerance, and support for a concurrent multi-tasking execution model. The General Atomics (GA) team worked closely with other team members on this contract and conducted research in the areas of computational code monitoring, metadata management, interactive visualization, and user interfaces. The original website for monitoring SWIM activity was developed at the beginning of the project. Owing to amended requirements, the software was redesigned and a revised website was deployed into production in April 2010. Throughout the duration of this project, the SWIM Monitoring Portal (http://swim.gat.com:8080/) has been a critical production tool supporting the project's physics goals.

  7. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1: ASC software quality engineering practices, Version 2.0.

    SciTech Connect

    Sturtevant, Judith E.; Heaphy, Robert; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Minana, Molly A.; Hackney, Patricia; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2006-09-01

    The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, 'ASCI Software Quality Engineering: Goals, Principles, and Guidelines'. This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

  8. Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan. Part 2, Mappings for the ASC software quality engineering practices. Version 1.0.

    SciTech Connect

    Ellis, Molly A.; Heaphy, Robert; Sturtevant, Judith E.; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2005-01-01

    The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, 'ASCI Software Quality Engineering: Goals, Principles, and Guidelines'. This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

  9. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan part 2 mappings for the ASC software quality engineering practices, version 2.0.

    SciTech Connect

    Heaphy, Robert; Sturtevant, Judith E.; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Minana, Molly A.; Hackney, Patricia; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2006-09-01

    The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR001.3.2 and CPR001.3.6 and to a Department of Energy document, ''ASCI Software Quality Engineering: Goals, Principles, and Guidelines''. This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

  10. Advances in time-domain electromagnetic simulation capabilities through the use of overset grids and massively parallel computing

    NASA Astrophysics Data System (ADS)

    Blake, Douglas Clifton

    A new methodology is presented for conducting numerical simulations of electromagnetic scattering and wave-propagation phenomena on massively parallel computing platforms. A process is constructed which is rooted in the Finite-Volume Time-Domain (FVTD) technique to create a simulation capability that is both versatile and practical. In terms of versatility, the method is platform independent, is easily modifiable, and is capable of solving a large number of problems with no alterations. In terms of practicality, the method is sophisticated enough to solve problems of engineering significance and is not limited to mere academic exercises. In order to achieve this capability, techniques are integrated from several scientific disciplines including computational fluid dynamics, computational electromagnetics, and parallel computing. The end result is the first FVTD solver capable of utilizing the highly flexible overset-gridding process in a distributed-memory computing environment. In the process of creating this capability, work is accomplished to conduct the first study designed to quantify the effects of domain-decomposition dimensionality on the parallel performance of hyperbolic partial differential equation solvers; to develop a new method of partitioning a computational domain composed of overset grids; and to provide the first detailed assessment of the applicability of overset grids to the field of computational electromagnetics. Using these new methods and capabilities, results from a large number of wave propagation and scattering simulations are presented. The overset-grid FVTD algorithm is demonstrated to produce results of comparable accuracy to single-grid simulations while simultaneously shortening the grid-generation process and increasing the flexibility and utility of the FVTD technique. Furthermore, the new domain-decomposition approaches developed for overset grids are shown to be capable of producing partitions that are better load balanced and
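
    The decomposition-dimensionality question studied here comes down to a surface-to-volume argument: per-process work scales with partition volume while halo communication scales with partition surface. A small Python sketch (illustrative only, not the dissertation's model) comparing halo sizes for slab, pencil and block partitions of an N^3 grid:

        import math

        # Halo-exchange cells per process for 1D/2D/3D decompositions of an
        # N^3 grid over P processes; assumes P is a perfect square/cube
        # where needed.
        def halo_cells(n: int, p: int, dims: int) -> int:
            if dims == 1:                        # slabs: two N x N faces
                return 2 * n * n
            if dims == 2:                        # pencils: four N x (N/sqrt(P)) faces
                side = n / math.isqrt(p)
                return int(4 * n * side)
            if dims == 3:                        # blocks: six (N/P^(1/3))^2 faces
                side = n / round(p ** (1 / 3))
                return int(6 * side * side)
            raise ValueError("dims must be 1, 2, or 3")

        n, p = 512, 64
        for d in (1, 2, 3):
            print(f"{d}D decomposition: ~{halo_cells(n, p, d):,} halo cells/process")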

  11. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  12. Advanced Technology Airfoil Research, volume 1, part 1. [conference on development of computational codes and test facilities

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A comprehensive review of all NASA airfoil research, conducted both in-house and under grant and contract, as well as a broad spectrum of airfoil research outside of NASA is presented. Emphasis is placed on the development of computational aerodynamic codes for airfoil analysis and design, the development of experimental facilities and test techniques, and all types of airfoil applications.

  13. The Efficacy of Computer-Based Supplementary Phonics Programs for Advancing Reading Skills in At-Risk Elementary Students

    ERIC Educational Resources Information Center

    Macaruso, Paul; Hook, Pamela E.; McCabe, Robert

    2006-01-01

    In this study we examined the benefits of computer programs designed to supplement regular reading instruction in an urban public school system. The programs provide systematic exercises for mastering word-attack strategies. Our findings indicate that first graders who participated in the programs made significant reading gains over the school…

  14. Community-Service Learning and Computer-Mediated Advanced Composition: The Going to Class, Getting Online, and Giving Back Project.

    ERIC Educational Resources Information Center

    Regan, Alison E.; Zuern, John D.

    2000-01-01

    Notes that the Center for English Studies Technology (CEST) at the University of Hawai'i at Manoa has sponsored several projects that bring together literacy, computer-mediated communication, and community-service learning. Concludes that community-service learning projects can be used to further pedagogical goals for technology-intensive writing…

  15. Employing a Structured Interface to Advance Primary Students' Communicative Competence in a Text-Based Computer Mediated Environment

    ERIC Educational Resources Information Center

    Chiu, Chiung-Hui; Wu, Chiu-Yi; Hsieh, Sheng-Jieh; Cheng, Hsiao-Wei; Huang, Chung-Kai

    2013-01-01

    This study investigated whether a structured communication interface fosters primary students' communicative competence in a synchronous typewritten computer-mediated collaborative learning environment. The structured interface provided a set of predetermined utterance patterns for elementary students to use or imitate to develop communicative…

  16. Surgical orthodontic treatment for a patient with advanced periodontal disease: evaluation with electromyography and 3-dimensional cone-beam computed tomography.

    PubMed

    Nakajima, Kan; Yamaguchi, Tetsutaro; Maki, Koutaro

    2009-09-01

    We report here the case of a woman with Class III malocclusion and advanced periodontal disease who was treated with surgical orthodontic correction. Functional recovery after orthodontic treatment is often monitored by serial electromyography of the masticatory muscles, whereas 3-dimensional cone-beam computed tomography can provide detailed structural information about, for example, periodontal bone defects. However, it is unclear whether the information obtained via these methods is sufficient to determine the treatment goal. It might be useful to address this issue for patients with advanced periodontal disease because of much variability between patients in the determination of treatment goals. We used detailed information obtained by 3-dimensional cone-beam computed tomography to identify periodontal bone defects and set appropriate treatment goals for inclination of the incisors and mandibular surgery. Results for this patient included stable occlusion and improved facial esthetics. This case report illustrates the benefits of establishing treatment goals acceptable to the patient, based on precise 3-dimensional assessment of dentoalveolar bone, and by using masticatory muscle activity to monitor the stability of occlusion.

  17. Computer-based written emotional disclosure: the effects of advance or real-time guidance and moderation by Big 5 personality traits.

    PubMed

    Beyer, Jonathan A; Lumley, Mark A; Latsch, Deborah V; Oberleitner, Lindsay M S; Carty, Jennifer N; Radcliffe, Alison M

    2014-01-01

    Standard written emotional disclosure (WED) about stress, which is private and unguided, yields small health benefits. Providing individualized guidance to writers may enhance WED but has not been tested. This trial of computer-based WED compared two novel therapist-guided forms of WED - advance guidance (before sessions) and real-time guidance (during sessions, through instant messaging) - to both standard WED and control writing; it also tested Big 5 personality traits as moderators of guided WED. Young adult participants (n = 163) with unresolved stressful experiences were randomized to conditions, completed three 30-min computer-based writing sessions, and were reassessed six weeks later. Contrary to hypotheses, real-time guidance WED had poorer outcomes than the other conditions on several measures, and advance guidance WED also showed some poorer outcomes. Moderator analyses revealed that participants with low baseline agreeableness, low extraversion, or high conscientiousness had relatively poor responses to guidance. We conclude that providing guidance for WED, especially in real time, may interfere with emotional processing of unresolved stress, particularly for people whose personalities fit poorly with this interactive form of WED.

  18. Research Advances: DNA Computing Targets West Nile Virus, Other Deadly Diseases, and Tic-Tac-Toe; Marijuana Component May Offer Hope for Alzheimer's Disease Treatment; New Wound Dressing May Lead to Maggot Therapy--Without the Maggots

    ERIC Educational Resources Information Center

    King, Angela G.

    2007-01-01

    This article presents three reports of research advances. The first report describes a deoxyribonucleic acid (DNA)-based computer that could lead to faster, more accurate tests for diagnosing West Nile Virus and bird flu. Representing the first "medium-scale integrated molecular circuit," it is the most powerful computing device of its type to…

  19. Three-dimensional registration of synchrotron radiation-based micro-computed tomography images with advanced laboratory micro-computed tomography data from murine kidney casts

    NASA Astrophysics Data System (ADS)

    Thalmann, Peter; Hieber, Simone E.; Schulz, Georg; Deyhle, Hans; Khimchenko, Anna; Kurtcuoglu, Vartan; Olgac, Ufuk; Marmaras, Anastasios; Kuo, Willy; Meyer, Eric P.; Beckmann, Felix; Herzen, Julia; Ehrbar, Stefanie; Müller, Bert

    2014-09-01

    Malfunction of oxygen regulation in the kidney and liver may lead to the pathogenesis of chronic diseases. The underlying mechanisms are poorly understood. In the kidney, it is hypothesized that renal gas shunting from arteries to veins eliminates excess oxygen. Such shunting is highly dependent on the structure of the renal vascular network. The vascular tree has so far not been quantified while maintaining its connectivity, as three-dimensional imaging of the vessel tree down to the smallest capillaries, which in the mouse model are smaller than 5 μm in diameter, is a challenging task. An established protocol uses corrosion casts and applies synchrotron radiation-based micro-computed tomography (SRμCT), which provides the desired spatial resolution with the necessary contrast. However, SRμCT is expensive and beamtime access is limited. We show here that measurements with a phoenix nanotom® m (General Electric, Wunstorf, Germany) can provide results comparable to those obtained with SRμCT, except for regions with small vessel structures, where the signal-to-noise level was significantly reduced. For this purpose the nanotom® m measurement was compared with its corresponding measurement acquired at beamline P05 at PETRA III at DESY, Hamburg, Germany.

  20. Current status and future prospects of using advanced computer-based methods to study bacterial colonial morphology.

    PubMed

    Bae, Euiwon; Kim, Huisung; Rajwa, Bartek; Thomas, John G; Robinson, J Paul

    2016-01-01

    Despite the advancement of recent molecular technologies, culturing is still considered the gold standard for microbial sample analysis. Here we review three different bacterial colony-based screening modalities that provide significant information beyond the simple shape and color of the colony. The plate imaging technique provides numeration and quantitative spectral reflectance information for each colony, while Raman spectroscopic analysis of bacteria colonies relates the Raman-shifted peaks to specific chemical bonding. Finally, the elastic-light-scatter technique provides a volumetric interaction of the whole colony through laser-bacteria interactions, instantly capturing the morphological traits of the colony and allowing quantitative classifications.

  1. FINAL REPORT DE-FG02-04ER41317 Advanced Computation and Chaotic Dynamics for Beams and Accelerators

    SciTech Connect

    Cary, John R

    2014-09-08

    During the year ending in August 2013, we continued to investigate the potential of photonic crystal (PhC) materials for acceleration purposes. We worked to characterize the acceleration ability of simple PhC accelerator structures, as well as to characterize PhC materials to determine whether current fabrication techniques can meet the needs of future accelerating structures. We have also continued to design and optimize PhC accelerator structures, with the ultimate goal of finding a new kind of accelerator structure that could offer significant advantages over current RF acceleration technology. The design and optimization of these structures require high-performance computation, and we continue to work on methods to make such computation faster and more efficient.

  2. Leveraging On-premise and Public Cloud Computing to Enable Advanced Rapid Imaging & Analysis for Monitoring Hazards (Invited)

    NASA Astrophysics Data System (ADS)

    Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Moore, A. W.; Fielding, E. J.; Agram, P.; Manipon, G.; Simons, M.; Rosen, P. A.; Stough, T. M.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.

    2013-12-01

    Many of the fundamental processes underlying hazards such as earthquakes and volcanoes are poorly understood. Hazard systems are difficult to replicate in lab environments, and so we need to observe them in 'natural laboratories'. The global coverage offered by satellite-based SAR missions and rapidly expanding GPS networks can provide orders of magnitude more observations. These combined geodetic data products will enable greater understanding of processes leading up to, during, and after natural and man-made disasters. However, a science data system is needed that can efficiently monitor and analyze the voluminous data and provide users the tools to access the data products. In the interpretation process from observations to decision-making, data from observations are first used to improve understanding of the physical processes, which then leads to more informed knowledge. However, the need to handle high data volumes and the required processing expertise are often bottlenecks to providing the data product streams needed for improved decision-making. To help address the lower-latency and high-data-volume needs of monitoring and responding to globally distributed hazards, we leveraged a hybrid-cloud computing approach that seamlessly mixes an on-premise Eucalyptus-based cloud computing environment with public Amazon Web Services (AWS) cloud computing resources. We will present findings on the automation of geodetic processing, the use of hybrid-cloud computing to address on-premise resource constraints, scalability issues in processing latency and data movement, as well as data discovery, access, and integration with other tools for location analytics.
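
    A minimal sketch of the burst-to-cloud dispatch decision described here, assuming a fixed on-premise slot count; the submit_* functions are placeholders, not Eucalyptus or AWS API calls.

        # Keep work on the on-premise cluster until it is saturated, then burst
        # the overflow to public cloud capacity. Placeholder functions only.

        ON_PREM_SLOTS = 8

        def submit_on_premise(job):    # placeholder for the local cloud API
            return f"on-prem:{job}"

        def submit_public_cloud(job):  # placeholder for the public-cloud path
            return f"aws:{job}"

        def dispatch(jobs, busy_slots=0):
            placements = []
            for job in jobs:
                if busy_slots < ON_PREM_SLOTS:
                    placements.append(submit_on_premise(job))
                    busy_slots += 1
                else:
                    placements.append(submit_public_cloud(job))
            return placements

        # e.g. 12 interferogram-processing jobs with 5 slots already busy
        print(dispatch([f"insar-{i}" for i in range(12)], busy_slots=5))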

  3. @neurIST: infrastructure for advanced disease management through integration of heterogeneous data, computing, and complex processing services.

    PubMed

    Benkner, Siegfried; Arbona, Antonio; Berti, Guntram; Chiarini, Alessandro; Dunlop, Robert; Engelbrecht, Gerhard; Frangi, Alejandro F; Friedrich, Christoph M; Hanser, Susanne; Hasselmeyer, Peer; Hose, Rod D; Iavindrasana, Jimison; Köhler, Martin; Iacono, Luigi Lo; Lonsdale, Guy; Meyer, Rodolphe; Moore, Bob; Rajasekaran, Hariharan; Summers, Paul E; Wöhrer, Alexander; Wood, Steven

    2010-11-01

    The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.

  4. Applications of Computer Vision for Assessing Quality of Agri-food Products: A Review of Recent Research Advances.

    PubMed

    Ma, Ji; Sun, Da-Wen; Qu, Jia-Huan; Liu, Dan; Pu, Hongbin; Gao, Wen-Hong; Zeng, Xin-An

    2016-01-01

    With consumer concerns increasing over food quality and safety, the food industry has begun to pay much more attention to the development of rapid and reliable food-evaluation systems over the years. As a result, there is a great need for manufacturers and retailers to operate effective real-time assessments of food quality and safety during food production and processing. Computer vision, a nondestructive assessment approach, can estimate the characteristics of food products with the advantages of high speed, ease of use, and minimal sample preparation. Specifically, computer vision systems are feasible for classifying food products into specific grades, detecting defects, and estimating properties such as color, shape, size, surface defects, and contamination. Therefore, in order to track the latest research developments of this technology in the agri-food industry, this review presents the fundamentals and instrumentation of computer vision systems, with details of applications in the quality assessment of agri-food products from 2007 to 2013, and also discusses future trends for the technology in combination with spectroscopy.

  5. TRAC-BF1/MOD1: An advanced best-estimate computer program for BWR accident analysis: User's guide

    SciTech Connect

    Rettig, W.H.; Wade, N.L. )

    1992-06-01

    The TRAC-BWR code development program at the Idaho National Engineering Laboratory has developed versions of the Transient Reactor Analysis Code (TRAC) for the US Nuclear Regulatory Commission and the public. The TRAC-BF1/MOD1 version of the computer code provides a best-estimate analysis capability for analyzing the full range of postulated accidents in boiling water reactor (BWR) systems and related facilities. This version provides a consistent and unified analysis capability for analyzing all areas of a large- or small-break loss-of-coolant accident (LOCA), beginning with the blowdown phase and continuing through heatup, reflood with quenching, and, finally, the refill phase of the accident. Also provided is a basic capability for the analysis of operational transients up to and including anticipated transients without scram (ATWS). The TRAC-BF1/MOD1 version produces results consistent with previous versions. Assessment calculations using the two TRAC-BF1 versions show overall improvements in agreement with data and computation times as compared to earlier versions of the TRAC-BWR series of computer codes.

  6. TRAC-BF1/MOD1: An advanced best-estimate computer program for BWR accident analysis, Model description

    SciTech Connect

    Borkowski, J.A.; Wade, N.L.; Giles, M.M.; Rouhani, S.Z.; Shumway, R.W.; Singer, G.L.; Taylor, D.D.; Weaver, W.L. )

    1992-08-01

    The TRAC-BWR code development program at the Idaho National Engineering Laboratory has developed versions of the Transient Reactor Analysis Code (TRAC) for the US Nuclear Regulatory Commission and the public. The TRAC-BF1/MOD1 version of the computer code provides a best-estimate analysis capability for analyzing the full range of postulated accidents in boiling water reactor (BWR) systems and related facilities. This version provides a consistent and unified analysis capability for analyzing all areas of a large- or small-break loss-of-coolant accident (LOCA), beginning with the blowdown phase and continuing through heatup, reflood with quenching, and, finally, the refill phase of the accident. Also provided is a basic capability for the analysis of operational transients up to and including anticipated transients without scram (ATWS). The TRAC-BF1/MOD1 version produces results consistent with previous versions. Assessment calculations using the two TRAC-BF1 versions show overall improvements in agreement with data and computation times as compared to earlier versions of the TRAC-BWR series of computer codes.

  7. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

    SciTech Connect

    Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

    2011-02-01

    Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, owing to the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high-performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to three tasks: (1) high-fidelity, large-scale modeling of power system dynamics; (2) statistical assessment of grid security via Monte Carlo simulations of cyber attacks; and (3) development of models to predict the variability of solar resources at locations where little or no ground-based measurement is available.
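
    Task (2) is a classic Monte Carlo estimate. The Python toy below samples random line compromises on a small meshed network and estimates the probability that the network disconnects; the 6-bus topology, the 5% compromise probability, and the connectivity failure criterion are all illustrative, far simpler than the report's actual grid-security models.

        import random

        LINES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]
        P_COMPROMISE = 0.05      # assumed per-line compromise probability

        def connected(lines, n_buses=6):
            # Depth-first search from bus 0 over the surviving lines.
            adj = {b: set() for b in range(n_buses)}
            for a, b in lines:
                adj[a].add(b)
                adj[b].add(a)
            seen, stack = {0}, [0]
            while stack:
                for nxt in adj[stack.pop()]:
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            return len(seen) == n_buses

        random.seed(1)
        trials = 100_000
        failures = sum(
            not connected([l for l in LINES if random.random() > P_COMPROMISE])
            for _ in range(trials))
        print(f"estimated disconnection probability: {failures / trials:.4f}")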

  8. Simulating Subsurface Flow and Transport on Ultrascale Computers using PFLOTRAN

    SciTech Connect

    Mills, Richard T; Lu, Chuan; Hammond, Glenn; Lichtner, Peter

    2007-01-01

    We describe PFLOTRAN, a recently developed code for modeling multi-phase, multicomponent subsurface flow and reactive transport using massively parallel computers. PFLOTRAN is built on top of PETSc, the Portable, Extensible Toolkit for Scientific Computation. Leveraging PETSc has allowed us to develop--with a relatively modest investment in development effort--a code that exhibits excellent performance on the largest-scale supercomputers. Very significant enhancements to the code are planned during our SciDAC-2 project. Here we describe the current state of the code, present an example of its use on Jaguar, the Cray XT3/4 system at Oak Ridge National Laboratory consisting of 11706 dual-core Opteron processor nodes, and briefly outline our future plans for the code.

  9. Simulating subsurface flow and transport on ultrascale computers using PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Tran Mills, Richard; Lu, Chuan; Lichtner, Peter C.; Hammond, Glenn E.

    2007-07-01

    We describe PFLOTRAN, a recently developed code for modeling multi-phase, multi-component subsurface flow and reactive transport using massively parallel computers. PFLOTRAN is built on top of PETSc, the Portable, Extensible Toolkit for Scientific Computation. Leveraging PETSc has allowed us to develop—with a relatively modest investment in development effort—a code that exhibits excellent performance on the largest-scale supercomputers. Very significant enhancements to the code are planned during our SciDAC-2 project. Here we describe the current state of the code, present an example of its use on Jaguar, the Cray XT3/4 system at Oak Ridge National Laboratory consisting of 11706 dual-core Opteron processor nodes, and briefly outline our future plans for the code.

  10. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance

    NASA Astrophysics Data System (ADS)

    McCray, Wilmon Wil L., Jr.

    The research was prompted by a need to assess the process improvement, quality management, and analytical techniques taught to undergraduate and graduate students in U.S. systems engineering and computing science degree programs (e.g., software engineering, computer science, and information technology) that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, and process improvement methods, and with how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High-maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques, and of process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The study identifies and discusses in detail the gaps between the process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, the gaps that exist in the literature, and a comparison analysis identifying the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization

  11. An integrated computer-program-system for the preliminary design of advanced hypersonic aircraft (PrADO-Hy)

    NASA Astrophysics Data System (ADS)

    Kossira, H.; Bardenhagen, A.; Heinze, W.

    The design program system PrADO-Hy (Preliminary Aircraft Design and Optimization - Hypersonic) for computer-aided conceptual hypersonic aircraft design, developed by the Institute of Aircraft Design and Structural Mechanics (IFL, TU Braunschweig), is introduced. Controlled by a data management system, the modular program simulates in its kernel the design process with the interactions between the different disciplines (aerodynamics, propulsion, structures, flight mechanics, etc.). The design process is overlaid with a multivariable optimization loop. This paper describes the organization of the PrADO system and the data management technique and, as an example from the program library, the weight-and-balance module for the estimation of structural mass. The practical application and capabilities of the program system are demonstrated by a design study of a TSTO (two-stage-to-orbit) vehicle intended to transfer a space payload of 3.3 tons to a low Earth orbit (80 km/450 km). The computational results of some investigations are presented.

  12. Hybrid deterministic and stochastic x-ray transport simulation for transmission computed tomography with advanced detector noise model

    NASA Astrophysics Data System (ADS)

    Popescu, Lucretiu M.

    2016-03-01

    We present a model for simulating noisy X-ray computed tomography data sets. The model has two main components: a photon transport simulation component that generates the noiseless photon field incident on the detector, and a detector response model that takes the incident photon field parameters as input and, given the X-ray source intensity and exposure time, generates noisy data sets accordingly. The photon transport simulation component combines direct ray tracing of polychromatic X-rays for the calculation of transmitted data with Monte Carlo simulation for the calculation of scattered-photon data. The Monte Carlo scatter simulation is accelerated by particle splitting and importance sampling variance reduction techniques. The detector-incident photon field data are stored as energy expansion coefficients on a refined grid covering the detector area. From these data the detector response model is able to generate noisy detector data realizations by reconstituting, in statistical terms, the main parameters that describe each detector element's response, including spatial correlations. The model can generate CT data sets very quickly, on the fly, for different radiation doses as well as detector response characteristics, facilitating data management in extensive optimization studies by reducing computation time and storage space demands.
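
    The two-component structure described here can be illustrated with a drastically simplified stand-in: a noiseless transmission profile plus a detector response that draws Poisson-distributed counts scaled by source intensity and exposure time. The paper's model additionally carries energy expansion coefficients and inter-element correlations, which this Python sketch omits; the object and all numbers are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        detector_elements = 256
        path_length = np.abs(np.linspace(-1.0, 1.0, detector_elements))  # fake object
        noiseless_transmission = np.exp(-3.0 * path_length)              # Beer-Lambert

        def noisy_realization(i0_per_s: float, exposure_s: float) -> np.ndarray:
            # Detector response stand-in: Poisson counting noise on the
            # expected counts implied by source intensity and exposure time.
            expected_counts = i0_per_s * exposure_s * noiseless_transmission
            return rng.poisson(expected_counts)

        low_dose  = noisy_realization(i0_per_s=1e4, exposure_s=0.01)
        high_dose = noisy_realization(i0_per_s=1e4, exposure_s=1.0)
        print("relative noise (1/sqrt(mean counts)):",
              1 / np.sqrt(low_dose.mean()), "vs", 1 / np.sqrt(high_dose.mean()))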

  13. Commodity-Based Computing Clusters at PPPL.

    NASA Astrophysics Data System (ADS)

    Wah, Darren; Davis, Steven L.; Johansson, Marques; Klasky, Scott; Tang, William; Valeo, Ernest

    2002-11-01

    In order to cost-effectively facilitate mid-scale serial and parallel computations and code development, a number of commodity-based clusters have been built at PPPL. A recent addition is the PETREL cluster, consisting of 100 dual-processor machines, both Intel and AMD, interconnected by a 100 Mbit switch. Sixteen machines have an additional Myrinet 2000 interconnect. Also underway is the implementation of a Prototype Topical Computing Facility, which will explore the effectiveness and scaling of cluster computing for larger-scale fusion codes, specifically including those being developed under SciDAC auspices. This facility will consist of two parts: a 64 dual-processor-node cluster with high-speed interconnect, and a 16 dual-processor-node cluster utilizing gigabit networking, built for the purpose of exploring grid-enabled computing. The initial grid explorations will be in collaboration with the Princeton University Institute for Computational Science and Engineering (PICSciE), where a 16-processor cluster dedicated to the investigation of grid computing is being built. The initial objectives are to (1) grid-enable the GTC code and an MHD code, making use of MPICH-G2, and (2) implement grid-enabled interactive visualization using DXMPI and the Chromium API.

  14. Advances in Modeling the Generation of the Geomagnetic Field by the Using of Massively Parallel Computers and Profound Optimization

    NASA Technical Reports Server (NTRS)

    Clune, Thomas; Katz, Daniel S.; Glatzmaier, Gary A.

    2000-01-01

    At the Earth's surface, the observed magnetic field is similar to that which would be generated by a simple bar magnet running through the Earth's axis. This idea (permanent magnetism) was commonly believed a century ago. Because the temperature of the core is so high, however, permanent magnetism is not possible; the magnetic field should therefore decay over tens of thousands of years. Since it does not, the field must be regenerating. Since the turn of the century, the idea has developed that the core is molten iron which, by moving, generates a magnetic field. The set of equations describing this is extremely non-linear and complex. Only in the last five to ten years have computers been able to solve these equations.

  15. Recent advances, and unresolved issues, in the application of computational modelling to the prediction of the biological effects of nanomaterials.

    PubMed

    Winkler, David A

    2016-05-15

    Nanomaterials research is one of the fastest growing contemporary research areas. The unprecedented properties of these materials have meant that they are being incorporated into products very quickly. Regulatory agencies are concerned that they cannot assess the potential hazards of these materials adequately, as data on the biological properties of nanomaterials are still relatively limited and expensive to acquire. Computational modelling methods have much to offer in helping understand the mechanisms by which toxicity may occur, and in predicting the likelihood of adverse biological impacts of materials not yet tested experimentally. This paper reviews the progress these methods, particularly QSAR-based ones, have made in understanding and predicting potentially adverse biological effects of nanomaterials, as well as the limitations and pitfalls of the methods.

  16. A survey of advancements in nucleic acid-based logic gates and computing for applications in biotechnology and biomedicine.

    PubMed

    Wu, Cuichen; Wan, Shuo; Hou, Weijia; Zhang, Liqin; Xu, Jiehua; Cui, Cheng; Wang, Yanyue; Hu, Jun; Tan, Weihong

    2015-03-01

    Nucleic acid-based logic devices were first introduced in 1994. Since then, science has seen the emergence of new logic systems for mimicking mathematical functions, diagnosing disease and even imitating biological systems. The unique features of nucleic acids, such as facile and high-throughput synthesis, Watson-Crick complementary base pairing, and predictable structures, together with the aid of programming design, have led to the widespread application of nucleic acids (NA) in logic gates and computing in biotechnology and biomedicine. In this feature article, the development of in vitro NA logic systems will be discussed, as well as the expansion of such systems using various input molecules for potential cellular, or even in vivo, applications.

  17. A Survey of Advancements in Nucleic Acid-based Logic Gates and Computing for Applications in Biotechnology and biomedicine

    PubMed Central

    Wu, Cuichen; Wan, Shuo; Hou, Weijia; Zhang, Liqin; Xu, Jiehua; Cui, Cheng; Wang, Yanyue; Hu, Jun

    2015-01-01

    Nucleic acid-based logic devices were first introduced in 1994. Since then, science has seen the emergence of new logic systems for mimicking mathematical functions, diagnosing disease and even imitating biological systems. The unique features of nucleic acids, such as facile and high-throughput synthesis, Watson-Crick complementary base pairing, and predictable structures, together with the aid of programming design, have led to the widespread applications of nucleic acids (NA) for logic gating and computing in biotechnology and biomedicine. In this feature article, the development of in vitro NA logic systems will be discussed, as well as the expansion of such systems using various input molecules for potential cellular, or even in vivo, applications. PMID:25597946

  18. Computing and monitoring potential of public spaces by shading analysis using 3d lidar data and advanced image analysis

    NASA Astrophysics Data System (ADS)

    Zwolinski, A.; Jarzemski, M.

    2015-04-01

    The paper addresses the specific context of public spaces in the "shadow" of tall buildings in European cities. The majority of tall buildings in European cities were built in the last 15 years. Tall buildings appear mainly in city centres, directly at important public spaces that form a viable environment for inhabitants, with a variety of public functions (open spaces, green areas, recreation places, shops, services, etc.). All these amenities and services are directly affected by the extensive shading cast by the tall buildings. The paper focuses on analysing and representing the impact of shading from tall buildings on various public spaces in cities using 3D city models. The computing environment of 3D city models in the CityGML standard uses 3D LiDAR data as one of the data types for the definition of 3D cities. The structure of CityGML allows analytic applications using existing computer tools, as well as the development of new techniques to estimate the extent of shading cast by high-rises, which affects life in public spaces. These measurable shading parameters at specific times are crucial for the proper functioning, viability and attractiveness of public spaces, and ultimately extremely important for the location of tall buildings at main public spaces in cities. The paper explores the impact of shading from tall buildings in different spatial contexts, against the background of using CityGML models based on core LiDAR data to support controlled urban development in the sense of viable public spaces. The article was prepared within the research project 2TaLL: Application of 3D Virtual City Models in Urban Analyses of Tall Buildings, realized as part of the Polish-Norway Grants.
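
    The geometric core of such shading analyses is simple: a building of height h casts a ground shadow of length h / tan(solar altitude) along the anti-solar azimuth. A minimal Python sketch with an assumed 150 m tower and a few sun positions (not data from the 2TaLL project):

        import math

        def shadow_length(height_m: float, solar_altitude_deg: float) -> float:
            # Ground shadow of a vertical obstruction for a given sun altitude.
            return height_m / math.tan(math.radians(solar_altitude_deg))

        tower_height = 150.0                    # hypothetical tall building, m
        for altitude in (10, 25, 45, 60):       # e.g. seasonal/diurnal sun positions
            length = shadow_length(tower_height, altitude)
            print(f"sun at {altitude:2d} deg -> shadow {length:7.1f} m")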

  19. Airline return-on-investment model for technology evaluation. [computer program to measure economic value of advanced technology applied to passenger aircraft

    NASA Technical Reports Server (NTRS)

    1974-01-01

    This report presents the derivation, description, and operating instructions for a computer program (TEKVAL) which measures the economic value of advanced technology features applied to long-range commercial passenger aircraft. The program consists of three modules: an airplane sizing routine, a direct operating cost routine, and an airline return-on-investment routine. These modules are linked such that they may be operated sequentially or individually, with one routine generating the input for the next or with the option of externally specifying the input for either of the economic routines. A very simple airplane sizing technique was previously developed, based on the Breguet range equation. For this program, that sizing technique has been greatly expanded and combined with the formerly separate DOC and ROI programs to produce TEKVAL.
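
    The Breguet range equation mentioned above, R = (V/c)(L/D) ln(W_initial/W_final) for a jet with thrust-specific fuel consumption c, can be inverted to give the fuel fraction required for a design range, which is the kind of relation a simple sizing routine builds on. The Python sketch below uses illustrative numbers, not TEKVAL's data:

        import math

        def fuel_fraction(range_m, speed_m_s, tsfc_per_s, lift_to_drag):
            # Invert R = (V/c)(L/D) ln(Wi/Wf) for the burned-fuel fraction.
            w_ratio = math.exp(range_m / (speed_m_s / tsfc_per_s * lift_to_drag))
            return 1.0 - 1.0 / w_ratio          # fuel burned / initial weight

        frac = fuel_fraction(
            range_m=9_000e3,        # 9,000 km design range
            speed_m_s=250.0,        # cruise speed, ~M0.84
            tsfc_per_s=1.47e-4,     # 0.53 lb/(lbf.h) expressed as weight flow
                                    # per unit thrust, 1/s
            lift_to_drag=17.0,
        )
        print(f"required fuel fraction ~ {frac:.2f}")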

  20. Final Progress Report: Collaborative Research: Decadal-to-Centennial Climate & Climate Change Studies with Enhanced Variable and Uniform Resolution GCMs Using Advanced Numerical Techniques

    SciTech Connect

    Fox-Rabinovitz, M; Cote, J

    2009-06-05

    The joint U.S.-Canadian project has been devoted to: (a) decadal climate studies using developed state-of-the-art GCMs (General Circulation Models) with enhanced variable and uniform resolution; (b) development and implementation of advanced numerical techniques; (c) research in parallel computing and associated numerical methods; (d) atmospheric chemistry experiments related to climate issues; and (e) validation of regional climate modeling strategies for nested- and stretched-grid models. The variable-resolution stretched-grid (SG) GCMs produce accurate and cost-efficient regional climate simulations with mesoscale resolution. The advantage of the stretched-grid approach is that it preserves the high quality of both global and regional circulations while providing consistent interactions between global and regional scales and phenomena. The major accomplishment of the project has been the successful international SGMIP-1 and SGMIP-2 (Stretched-Grid Model Intercomparison Project, phases 1 and 2), based on these research developments and activities. The SGMIP provides unique high-resolution regional and global multi-model ensembles beneficial for the regional climate modeling and broader modeling communities. The U.S. SGMIP simulations have been produced using SciDAC ORNL supercomputers. Collaborations with the other international participants M. Deque (Meteo-France) and J. McGregor (CSIRO, Australia) and their centers and groups have been beneficial for the strong joint effort, especially for the SGMIP activities. The WMO/WCRP/WGNE endorsed the SGMIP activities in 2004-2008. This project reflects a trend in the modeling and broader communities towards regional and sub-regional assessments and applications important for the U.S. and Canadian public, business and policy decision makers, as well as for international collaborations on regional, and especially climate-related, issues.

  1. Validation of a Computational Model for the SLS Core Stage Oxygen Tank Diffuser Concept and the Low Profile Diffuser - An Advanced Development Design for the SLS

    NASA Technical Reports Server (NTRS)

    Brodnick, Jacob; Richardson, Brian; Ramachandran, Narayanan

    2015-01-01

    The Low Profile Diffuser (LPD) project originated as an award from the Marshall Space Flight Center (MSFC) Advanced Development (ADO) office to the Main Propulsion Systems Branch (ER22). The task was created to develop and test an LPD concept that could produce performance comparable to a larger, traditionally designed ullage gas diffuser while occupying a smaller volume envelope. Historically, ullage gas diffusers have been large, bulky devices that occupy a significant portion of the propellant tank, decreasing the tank volume available for propellant. Ullage pressurization of spacecraft propellant tanks is required to prevent boil-off of cryogenic propellants and to provide a positive pressure for propellant extraction. To achieve this, ullage gas diffusers must slow hot, high-pressure gas entering a propellant tank from supersonic speeds to only a few meters per second. Decreasing the incoming gas velocity is typically accomplished through expansion to larger areas within the diffuser, which has traditionally led to large diffuser lengths. The Fluid Dynamics Branch (ER42) developed and applied advanced Computational Fluid Dynamics (CFD) analysis methods in order to mature the LPD design from an initial concept to an optimized test prototype and to provide extremely accurate pre-test predictions of diffuser performance. Additionally, the diffuser concept for the Core Stage of the Space Launch System (SLS) was analyzed in a short amount of time to guide test data collection efforts for the qualification of the device. CFD analysis of the SLS diffuser design provided new insights into the functioning of the device and was qualitatively validated against hot wire anemometry of the exterior flow field. Rigorous data analysis of the measurements was performed on static and dynamic pressure data, data from two microphones, accelerometers, and hot wire anemometry with automated traverse. Feasibility of the LPD concept and validation of the computational model were

  2. COMPASS, the COMmunity Petascale Project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

    SciTech Connect

    J.R. Cary; P. Spentzouris; J. Amundson; L. McInnes; M. Borland; B. Mustapha; B. Norris; P. Ostroumov; Y. Wang; W. Fischer; A. Fedotov; I. Ben-Zvi; R. Ryne; E. Esarey; C. Geddes; J. Qiang; E. Ng; S. Li; C. Ng; R. Lee; L. Merminga; H. Wang; D.L. Bruhwiler; D. Dechow; P. Mullowney; P. Messmer; C. Nieter; S. Ovtchinnikov; K. Paul; P. Stoltz; D. Wade-Stein; W.B. Mori; V. Decyk; C.K. Huang; W. Lu; M. Tzoufras; F. Tsung; M. Zhou; G.R. Werner; T. Antonsen; T. Katsouleas

    2007-06-01

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  3. COMPASS, the COMmunity Petascale project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

    SciTech Connect

    Cary, J.R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; Wang, H.; Bruhwiler, D.L.; Dechow, D.; Mullowney, P.; Messmer, P.; Nieter, C.; Ovtchinnikov, S.; Paul, K.; Stoltz, P.; Wade-Stein, D.; Mori, W.B.; Decyk, V.; Huang, C.K.; Lu, W.; Tzoufras, M.; Tsung, F.; Zhou, M.; Werner, G.R.; Antonsen, T.; Katsouleas, T.; Morris, B.

    2007-07-16

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  4. COMPASS, the COMmunity Petascale Project for Accelerator Science And Simulation, a Broad Computational Accelerator Physics Initiative

    SciTech Connect

    Cary, J.R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Norris, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; /Jefferson Lab /Tech-X, Boulder /UCLA /Colorado U. /Maryland U. /Southern California U.

    2007-11-09

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, to calculate the electromagnetic modes and wake fields of cavities, and to model the cooling induced by comoving beams and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2, these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.

  5. Wanapum Dam Advanced Hydro Turbine Upgrade Project: Part 2 - Evaluation of Fish Passage Test Results Using Computational Fluid Dynamics

    SciTech Connect

    Dresser, Thomas J.; Dotson, Curtis L.; Fisher, Richard K.; Graf, Michael J.; Richmond, Marshall C.; Rakowski, Cynthia L.; Carlson, Thomas J.; Mathur, Dilip; Heisey, Paul G.

    2007-10-10

    This paper, the second part of a two-part paper, discusses the use of Computational Fluid Dynamics (CFD) to gain further insight into the results of fish release testing conducted to evaluate the modifications made to upgrade Unit 8 at Wanapum Dam. Part 1 discusses the testing procedures and fish passage survival. Grant PUD is working with Voith Siemens Hydro (VSH), the Pacific Northwest National Laboratory (PNNL) of DOE, and Normandeau Associates in this evaluation. VSH has prepared the geometry for the CFD analysis corresponding to the four operating conditions tested with Unit 9 and the five operating conditions tested with Unit 8. Both VSH and PNNL have conducted CFD simulations of the turbine intakes, stay vanes, wicket gates, turbine blades, and draft tubes of the units. The primary objectives of the analyses were to:
    • determine estimates of where the inserted fish passed the turbine components;
    • determine the characteristics of the flow field along the calculated paths for pressure, velocity gradients, and acceleration associated with fish-sized bodies;
    • determine the velocity gradients at the structures where fish-to-structure interaction is predicted;
    • correlate the estimated fish location of passage with observed injuries;
    • correlate the calculated pressure and acceleration with the information recorded with the sensor fish;
    • utilize the results of the analysis to further interpret the results of the testing.
    This paper discusses the results of the CFD analyses made to assist the interpretation of the fish test results.
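
    Several of the listed objectives amount to post-processing a sampled trajectory from the CFD solution into exposure metrics (velocity, acceleration, pressure rate of change) that can be compared with sensor-fish records. The sketch below shows one plausible way to do that with finite differences; the path and pressure trace are fabricated placeholders, not Wanapum data.

        import numpy as np

        # Estimate exposure metrics along a sampled particle path by
        # finite differences. All sample data here are hypothetical.
        t = np.linspace(0.0, 2.0, 201)                        # s
        pos = np.column_stack([3.0 * t, np.sin(t), 0.1 * t])  # m
        p = 1.0e5 - 4.0e4 * np.exp(-(t - 1.0)**2 / 0.02)      # Pa

        vel = np.gradient(pos, t, axis=0)    # m/s
        acc = np.gradient(vel, t, axis=0)    # m/s^2
        dpdt = np.gradient(p, t)             # Pa/s

        print("peak |accel| [m/s^2]:", np.linalg.norm(acc, axis=1).max())
        print("steepest pressure drop [Pa/s]:", dpdt.min())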

  6. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve performance. Various Hartree-Fock, density-functional theory, and MP2 calculations are run for benchmarking purposes. It is found that the combination of ifc with the ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance of a single CPU is potentially as good as that of an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 show trends similar to the GAUSSIAN 98 results.
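
    The performance differences reported here are dominated by how well the BLAS routines, above all the dense matrix multiply, are tuned for the CPU. A quick way to probe this on any Linux box is to time DGEMM through NumPy, which dispatches to whatever BLAS it was linked against (a modern stand-in for the ATLAS comparison; the matrix size is arbitrary):

        import time
        import numpy as np

        # Time a double-precision matrix multiply (DGEMM), the kernel whose
        # BLAS tuning dominates many electronic-structure workloads.
        n = 1500
        a = np.random.rand(n, n)
        b = np.random.rand(n, n)

        t0 = time.perf_counter()
        c = a @ b
        dt = time.perf_counter() - t0
        print(f"DGEMM {n}x{n}: {dt:.3f} s, ~{2 * n**3 / dt / 1e9:.2f} GFLOP/s")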

  7. Collective-Intelligence Recommender Systems: Advancing Computer Tailoring for Health Behavior Change Into the 21st Century

    PubMed Central

    Cutrona, Sarah L; Kinney, Rebecca L; Marlin, Benjamin M; Mazor, Kathleen M; Lemon, Stephenie C; Houston, Thomas K

    2016-01-01

    Background What is the next frontier for computer-tailored health communication (CTHC) research? In current CTHC systems, study designers who have expertise in behavioral theory and mapping theory into CTHC systems select the variables and develop the rules that specify how the content should be tailored, based on their knowledge of the targeted population, the literature, and health behavior theories. In collective-intelligence recommender systems (hereafter recommender systems) used by Web 2.0 companies (eg, Netflix and Amazon), machine learning algorithms combine user profiles and continuous feedback ratings of content (from themselves and other users) to empirically tailor content. Augmenting current theory-based CTHC with empirical recommender systems could be evaluated as the next frontier for CTHC. Objective The objective of our study was to uncover barriers and challenges to using recommender systems in health promotion. Methods We conducted a focused literature review, interviewed subject experts (n=8), and synthesized the results. Results We describe (1) limitations of current CTHC systems, (2) advantages of incorporating recommender systems to move CTHC forward, and (3) challenges to incorporating recommender systems into CTHC. Based on the evidence presented, we propose a future research agenda for CTHC systems. Conclusions We promote discussion of ways to move CTHC into the 21st century by incorporation of recommender systems. PMID:26952574
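
    To make the recommender-system idea concrete, the sketch below trains a tiny matrix-factorization model by stochastic gradient descent on explicit ratings, then recommends the highest-predicted unseen item, here standing in for a tailored health message. Dimensions, data, and hyperparameters are all hypothetical; this is the generic technique, not a system from the article.

        import numpy as np

        # Minimal matrix-factorization recommender trained by SGD on
        # explicit ratings. Data and hyperparameters are invented.
        rng = np.random.default_rng(1)
        n_users, n_items, k = 50, 30, 4
        ratings = [(u, i, rng.integers(1, 6))
                   for u in range(n_users)
                   for i in rng.choice(n_items, 5, replace=False)]

        P = 0.1 * rng.standard_normal((n_users, k))  # user factors
        Q = 0.1 * rng.standard_normal((n_items, k))  # item factors
        lr, reg = 0.02, 0.05
        for _ in range(30):
            for u, i, r in ratings:
                pu = P[u].copy()
                err = r - pu @ Q[i]
                P[u] += lr * (err * Q[i] - reg * pu)
                Q[i] += lr * (err * pu - reg * Q[i])

        print("top item for user 0:", int((P[0] @ Q.T).argmax()))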

  8. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theories, and the MP2 calculations are done for benchmarking purposes. It is found that the combination of ifc with ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on one single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 have similar trends to the results of GAUSSIAN 98 package. PMID:12086529

  9. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 5: Unsteady counterrotation ducted propfan analysis. Computer program user's manual

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Delaney, Robert A.; Adamczyk, John J.; Miller, Christopher J.; Arnone, Andrea; Swanson, Charles

    1993-01-01

    The primary objective of this study was the development of a time-marching three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict steady and unsteady compressible transonic flows about ducted and unducted propfan propulsion systems employing multiple blade rows. The computer codes resulting from this study are referred to as ADPAC-AOACR (Advanced Ducted Propfan Analysis Codes-Angle of Attack Coupled Row). This report is intended to serve as a computer program user's manual for the ADPAC-AOACR codes developed under Task 5 of NASA Contract NAS3-25270, Unsteady Counterrotating Ducted Propfan Analysis. The ADPAC-AOACR program is based on a flexible multiple blocked grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. For convenience, several standard mesh block structures are described for turbomachinery applications. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Numerical calculations are compared with experimental data for several test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations employing multiple blade rows.
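
    The four-stage Runge-Kutta time marching named here is a standard workhorse for semi-discrete finite-volume systems du/dt = R(u). A minimal low-storage version on a 1-D linear advection residual looks like the sketch below; the classic 1/4, 1/3, 1/2, 1 stage coefficients are illustrative, not the ADPAC-AOACR implementation.

        import numpy as np

        # Four-stage RK time marching of du/dt = R(u) in low-storage form,
        # with a first-order upwind residual for periodic linear advection.
        def residual(u, dx=0.01, a=1.0):
            return -a * (u - np.roll(u, 1)) / dx

        x = np.linspace(0.0, 1.0, 100, endpoint=False)
        u = np.exp(-200.0 * (x - 0.3)**2)
        dt = 0.005                                  # CFL = 0.5 (assumed)
        for _ in range(40):
            u0 = u.copy()
            for alpha in (0.25, 1.0 / 3.0, 0.5, 1.0):
                u = u0 + alpha * dt * residual(u)

        print("peak after marching:", u.max())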

  10. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 2: Unsteady ducted propfan analysis computer program users manual

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Delaney, Robert A.; Bettner, James L.

    1991-01-01

    The primary objective of this study was the development of a time-dependent three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict unsteady compressible transonic flows about ducted and unducted propfan propulsion systems at angle of attack. The computer codes resulting from this study are referred to as Advanced Ducted Propfan Analysis Codes (ADPAC). This report is intended to serve as a computer program user's manual for the ADPAC developed under Task 2 of NASA Contract NAS3-25270, Unsteady Ducted Propfan Analysis. Aerodynamic calculations were based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. A time-accurate implicit residual smoothing operator was utilized for unsteady flow predictions. For unducted propfans, a single H-type grid was used to discretize each blade passage of the complete propeller. For ducted propfans, a coupled system of five grid blocks utilizing an embedded C-grid about the cowl leading edge was used to discretize each blade passage. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were compared with experimental data for both ducted and unducted propfan flows. The solution scheme demonstrated efficiency and accuracy comparable with other schemes of this class.
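
    Implicit residual smoothing, mentioned here as the enabler of time-accurate unsteady predictions, replaces the explicit residual R by the solution of (I - eps d2/dx2) R_s = R, which damps high-frequency content and permits larger stable time steps. A 1-D tridiagonal version, with an assumed smoothing coefficient, might look like:

        import numpy as np

        # Implicit residual smoothing via a tridiagonal (Thomas) solve of
        # (1 - eps * d2/dx2) R_s = R. eps and the residual are illustrative.
        def smooth_residual(R, eps=0.6):
            n = len(R)
            a = np.full(n, -eps)
            b = np.full(n, 1.0 + 2.0 * eps)
            c = np.full(n, -eps)
            a[0] = c[-1] = 0.0
            for i in range(1, n):                 # forward elimination
                w = a[i] / b[i - 1]
                b[i] -= w * c[i - 1]
                R[i] -= w * R[i - 1]
            Rs = np.empty(n)
            Rs[-1] = R[-1] / b[-1]
            for i in range(n - 2, -1, -1):        # back substitution
                Rs[i] = (R[i] - c[i] * Rs[i + 1]) / b[i]
            return Rs

        raw = np.random.default_rng(2).standard_normal(50)   # noisy residual
        print("rms raw/smoothed:", raw.std(), smooth_residual(raw.copy()).std())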

  11. Computing Life

    ERIC Educational Resources Information Center

    National Institute of General Medical Sciences (NIGMS), 2009

    2009-01-01

    Computer advances now let researchers quickly search through DNA sequences to find gene variations that could lead to disease, simulate how flu might spread through one's school, and design three-dimensional animations of molecules that rival any video game. By teaming computers and biology, scientists can answer new and old questions that could…

  12. Computational physics and applied mathematics capability review June 8-10, 2010 (Advance materials to committee members)

    SciTech Connect

    Lee, Stephen R

    2010-01-01

    Los Alamos National Laboratory will review its Computational Physics and Applied Mathematics (CPAM) capabilities in 2010. The goals of capability reviews are to assess the quality of science, technology, and engineering (STE) performed by the capability, evaluate the integration of this capability across the Laboratory and within the scientific community, examine the relevance of this capability to the Laboratory's programs, and provide advice on the current and future directions of this capability. This is the first such review for CPAM, which has a long and unique history at the laboratory, starting from the inception of the Laboratory in 1943. The CPAM capability covers an extremely broad technical area at Los Alamos, encompassing a wide array of disciplines, research topics, and organizations. A vast array of technical disciplines and activities are included in this capability, from general numerical modeling, to coupled mUlti-physics simulations, to detailed domain science activities in mathematics, methods, and algorithms. The CPAM capability involves over 12 different technical divisions and a majority of our programmatic and scientific activities. To make this large scope tractable, the CPAM capability is broken into the following six technical 'themes.' These themes represent technical slices through the CP AM capability and collect critical core competencies of the Laboratory, each of which contributes to the capability (and each of which is divided into multiple additional elements in the detailed descriptions of the themes in subsequent sections): (1) Computational Fluid Dynamics - This theme speaks to the vast array of scientific capabilities for the simulation of fluids under shocks, low-speed flow, and turbulent conditions - which are key, historical, and fundamental strengths of the laboratory; (2) Partial Differential Equations - The technical scope of this theme is the applied mathematics and numerical solution of partial differential equations

  13. Computation of flows in a turn-around duct and a turbine cascade using advanced turbulence models

    NASA Technical Reports Server (NTRS)

    Lakshminarayana, B.; Luo, J.

    1993-01-01

    Numerical investigation has been carried out to evaluate the capability of the Algebraic Reynolds Stress Model (ARSM) and the Nonlinear Stress Model (NLSM) to predict strongly curved turbulent flow in a turn-around duct (TAD). The ARSM includes the near-wall damping term of pressure-strain correlation phi(sub ij,w), which enables accurate prediction of individual Reynolds stress components in wall flows. The TAD mean flow quantities are reasonably well predicted by various turbulence models. The ARSM yields better predictions for both the mean flow and the turbulence quantities than the NLSM and the k-epsilon (k = turbulent kinetic energy, epsilon = dissipation rate of k) model. The NLSM also shows slight improvement over the k-epsilon model. However, all the models fail to capture the recovery of the flow from strong curvature effects. The formulation for phi(sub ij,w) appears to be incorrect near the concave surface. The hybrid k-epsilon/ARSM, Chien's k-epsilon model, and Coakley's q-omega (q = the square root of k, omega = epsilon/k) model have also been employed to compute the aerodynamics and heat transfer of a transonic turbine cascade. The surface pressure distributions and the wake profiles are predicted well by all the models. The k-epsilon model and the k-epsilon/ARSM model provide better predictions of heat transfer than the q-omega model. The k-epsilon/ARSM solutions show significant differences in the predicted skin friction coefficients, heat transfer rates and the cascade performance parameters, as compared to the k-epsilon model. The k-epsilon/ARSM model appears to capture, qualitatively, the anisotropy associated with by-pass transition.
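
    For reference, the baseline two-equation closure against which the ARSM and NLSM are compared is the standard high-Reynolds-number k-epsilon model; in its common literature form (not necessarily the exact variants used in the paper) the eddy viscosity and transport equations read:

        \nu_t = C_\mu \frac{k^2}{\epsilon}, \qquad C_\mu \approx 0.09,

        \frac{Dk}{Dt} = \nabla \cdot \left[ \left( \nu + \frac{\nu_t}{\sigma_k} \right) \nabla k \right] + P_k - \epsilon,

        \frac{D\epsilon}{Dt} = \nabla \cdot \left[ \left( \nu + \frac{\nu_t}{\sigma_\epsilon} \right) \nabla \epsilon \right] + C_{\epsilon 1} \frac{\epsilon}{k} P_k - C_{\epsilon 2} \frac{\epsilon^2}{k}.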

  14. Computational Models of Human Performance: Validation of Memory and Procedural Representation in Advanced Air/Ground Simulation

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Labacqz, J. Victor (Technical Monitor)

    1997-01-01

    The Man-Machine Integration Design and Analysis System (MIDAS), developed under a joint U.S. Army and NASA cooperative agreement, is intended to assist designers of complex human/automation systems in successfully incorporating human performance capabilities and limitations into decision and action support systems. MIDAS is a computational representation of multiple human operators, selected perceptual, cognitive, and physical functions of those operators, and the physical/functional representation of the equipment with which they operate. MIDAS has been used as an integrated predictive framework for the investigation of human/machine systems, particularly in situations with high demands on the operators. We have extended the human performance models to include representation of both human operators and intelligent aiding systems in flight management and air traffic service. The focus of this development is to predict human performance in response to aiding systems developed to identify aircraft conflicts and to assist in the shared authority for resolution. The demands of this application require representation of many intelligent agents sharing world-models, coordinating action/intention, and cooperatively scheduling goals and action in a somewhat unpredictable world of operations. In recent applications to airborne systems development, MIDAS has demonstrated an ability to predict flight crew decision-making and procedural behavior when interacting with automated flight management systems and Air Traffic Control. In this paper, we describe two enhancements to MIDAS. The first involves the addition of working memory in the form of an articulatory buffer for verbal communication protocols and a visuo-spatial buffer for communications via digital datalink. The second enhancement is a representation of multiple operators working as a team. This enhanced model was used to predict the performance of human flight crews and their level of compliance with commercial aviation communication
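
    The working-memory enhancement described here, an articulatory buffer with limited capacity and decay, can be caricatured in a few lines. The sketch below is a toy model with invented capacity and decay parameters, not the MIDAS implementation.

        # Toy decaying working-memory buffer: items carry an activation that
        # decays each tick, and a capacity bound evicts the weakest trace.
        # Parameters and threshold are illustrative assumptions.
        class WorkingMemoryBuffer:
            def __init__(self, capacity=4, decay=0.3):
                self.capacity, self.decay = capacity, decay
                self.items = {}                      # content -> activation

            def tick(self):
                self.items = {c: a * (1.0 - self.decay)
                              for c, a in self.items.items()
                              if a * (1.0 - self.decay) > 0.05}

            def store(self, content, activation=1.0):
                self.items[content] = activation
                if len(self.items) > self.capacity:  # evict weakest trace
                    del self.items[min(self.items, key=self.items.get)]

        buf = WorkingMemoryBuffer()
        for msg in ["climb FL350", "contact center 128.2", "traffic 2 o'clock"]:
            buf.store(msg)
            buf.tick()
        print(buf.items)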

  15. MO-E-18C-04: Advanced Computer Simulation and Visualization Tools for Enhanced Understanding of Core Medical Physics Concepts

    SciTech Connect

    Naqvi, S

    2014-06-15

    Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and the prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which therefore gradually decay away from memory. For example, few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects, and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system, requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools are developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics; e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays trajectories of a collection of electrons under an arbitrary space/time-dependent Lorentz force using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics. The simulations serve as
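
    The "label and toggle" idea is easy to demonstrate on the simplest possible transport problem. The sketch below pushes photons through a slab, tags each collision by interaction type, and lets any type be switched off; the cross sections are invented, and Compton events are crudely treated as forward scatter so the analog answer exp(-mu_pe * t) is recoverable.

        import math
        import random

        # Minimal slab Monte Carlo with labeled, toggleable interactions.
        # Cross sections are illustrative assumptions, not real data.
        random.seed(0)
        mu = {"photoelectric": 0.02, "compton": 0.15}   # 1/cm, assumed
        enabled = {"photoelectric": True, "compton": True}
        mu_tot = sum(v for k, v in mu.items() if enabled[k])

        thickness, n, transmitted = 10.0, 100_000, 0
        for _ in range(n):
            depth = 0.0
            while True:
                depth += -math.log(random.random()) / mu_tot  # free path
                if depth >= thickness:
                    transmitted += 1
                    break
                r = random.random() * mu_tot                  # pick type
                if enabled["photoelectric"] and r < mu["photoelectric"]:
                    break    # absorbed
                # else: "compton", treated here as pure forward scatter

        print(f"transmitted: {transmitted / n:.4f}"
              f" (analytic exp(-0.2) = {math.exp(-0.2):.4f})")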

  16. Recent advances in ytterbium-based contrast agents for in vivo X-ray computed tomography imaging: promises and prospects.

    PubMed

    Liu, Yanlan; Liu, Jianhua; Ai, Kelong; Yuan, Qinghai; Lu, Lehui

    2014-01-01

    X-ray computed tomography (CT) imaging is one of the most widely used diagnostic imaging techniques in the clinic, and has raised significant interest in recent years both in research and practice owing to its many advantages, such as deep penetration depth, high resolution, and facile image processing. Developing heavy-metal-based CT contrast agents, especially heavy-metal-containing nanoparticulate CT contrast agents, has become a key focus of research to address issues of clinical iodinated agents involving short circulation time, low contrast efficiency, and potential renal toxicity. In this review, we summarize the development of ytterbium (Yb)-based CT contrast agents and highlight the design and applications of Yb-based nanoparticulate CT contrast agents. Yb has a high atomic number and a higher abundance in the earth's crust relative to Au, Ta and Bi, which have received much attention as CT contrast agents. In particular, in contrast to these metal elements, as well as I, Yb has a K-edge energy located just within the higher-intensity region of X-ray spectra, which can induce significant enhancement in contrast efficiency. When encapsulated in nanoparticles, Yb can remain in the circulation for a long time. This long in vivo circulation time, combined with the proper K-edge energy and a large absorption cross-section of Yb in the near-infrared region, makes Yb-based nanoparticles particularly promising in angiography, 'multicolor' spectral CT imaging, and multimodal imaging. Finally, we also discuss the prospects and challenges in the development of Yb-based CT contrast agents.

  17. r.avaflow: An advanced open source computational framework for the GIS-based simulation of two-phase mass flows and process chains

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Fischer, Jan-Thomas; Fellin, Wolfgang; Ostermann, Alexander; Pudasaini, Shiva P.

    2015-04-01

    Geophysical mass flows stand for a broad range of processes and process chains such as flows and avalanches of snow, soil, debris or rock, and their interactions with water bodies resulting in flood waves. Despite considerable efforts put into model development, the simulation, and therefore the appropriate prediction, of these types of events still remains a major challenge owing to the complex material behaviour, strong phase interactions, process transformations and the complex mountain topography. Sophisticated theories exist, but they have hardly been brought into practice yet. We fill this gap by developing a novel and unified high-resolution computational tool, r.avaflow, representing a comprehensive and advanced open source GIS simulation environment for geophysical mass flows. Based on the latest and most advanced two-phase physical-mathematical models, r.avaflow includes the following features: (i) it is suitable for a broad spectrum of mass flows such as rock, rock-ice and snow avalanches, glacial lake outburst floods, debris and hyperconcentrated flows, and even landslide-induced tsunamis and submarine landslides, as well as process chains involving more than one of these phenomena; (ii) it accounts for the real two-phase nature of many flow types: viscous fluids and solid particles are considered separately with advanced mechanics and strong phase interactions; (iii) it is freely available and adoptable along with the GRASS GIS software. In the future, it will include the intrinsic topographic influences on the flow dynamics and morphology as well as an advanced approach to simulate the entrainment and deposition of solid and fluid material. As input, r.avaflow needs information on (a) the mountain topography, (b) the material properties and (c) the spatial distribution of the solid and fluid release masses or one or more hydrographs of fluid and solid material. We demonstrate the functionalities and performance of r.avaflow by using some generic and real

  18. Neutron analysis of spent fuel storage installation using parallel computing and advanced discrete ordinates and Monte Carlo techniques.

    PubMed

    Shedlock, Daniel; Haghighat, Alireza

    2005-01-01

    In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1998. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ~10 y of spent fuel pool storage, >35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinates PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous-energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous-energy cross sections, and the fact that the CASK library is based on the old ENDF
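
    The gap between the multigroup and continuous-energy results noted here traces back to how fine-structure cross sections are collapsed into broad groups with an assumed weighting flux. The arithmetic of a flux-weighted collapse is shown below with invented numbers (not CASK library values):

        import numpy as np

        # Flux-weighted collapse of fine-group cross sections to broad groups:
        # sigma_G = sum(sigma_g * phi_g) / sum(phi_g) over fine groups g in G.
        # All data here are hypothetical.
        sigma_fine = np.array([2.1, 1.8, 1.5, 1.2, 0.9, 0.7, 0.5, 0.4])  # barns
        flux_fine = np.array([0.2, 0.5, 1.0, 1.6, 2.0, 1.4, 0.8, 0.3])   # weights

        for g, (lo, hi) in enumerate([(0, 4), (4, 8)]):
            sig = (sigma_fine[lo:hi] * flux_fine[lo:hi]).sum() / flux_fine[lo:hi].sum()
            print(f"broad group {g}: sigma = {sig:.3f} barns")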

  19. Neutron analysis of spent fuel storage installation using parallel computing and advanced discrete ordinates and Monte Carlo techniques.

    PubMed

    Shedlock, Daniel; Haghighat, Alireza

    2005-01-01

    In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1998. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ~10 y of spent fuel pool storage, >35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinates PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous-energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous-energy cross sections, and the fact that the CASK library is based on the old ENDF

  20. New computing systems and their impact on computational mechanics

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1989-01-01

    Recent advances in computer technology that are likely to impact computational mechanics are reviewed. The technical needs for computational mechanics technology are outlined. The major features of new and projected computing systems, including supersystems, parallel processing machines, special-purpose computing hardware, and small systems are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism on multiprocessor computers with a shared memory.
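
    The partitioning idea, dividing the domain so that independent pieces can be processed concurrently on a shared-memory machine, can be illustrated in a few lines. The sketch below splits a 1-D array "mesh" across threads and reduces the partial results; it is a generic shared-memory partitioning demo, not the specific strategy proposed in the article.

        from concurrent.futures import ThreadPoolExecutor
        import numpy as np

        # Partition a 1-D mesh into subdomains, evaluate each subdomain's
        # contribution in a thread (all threads share the array u), reduce.
        n, n_parts = 1_000_000, 4
        u = np.random.default_rng(3).random(n)
        chunks = np.array_split(np.arange(n), n_parts)

        def local_energy(idx):
            v = u[idx]
            return float((v * v).sum())   # subdomain partial sum

        with ThreadPoolExecutor(max_workers=n_parts) as pool:
            total = sum(pool.map(local_energy, chunks))
        print("assembled global quantity:", total)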