Sample records for performance computing centers

  1. High-Performance Computing Data Center | Energy Systems Integration

    Science.gov Websites

    The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing ...

  2. High-Performance Computing Data Center Warm-Water Liquid Cooling |

    Science.gov Websites

    NREL's High-Performance Computing Data Center (HPC Data Center) uses warm-water liquid cooling. Liquid cooling technologies offer a more energy-efficient solution that also allows for effective ...

  3. Computational Science News | Computational Science | NREL

    Science.gov Websites

    ...-Cooled High-Performance Computing Technology at the ESIF (February 28, 2018). NREL Launches New Website for High-Performance Computing System Users: The National Renewable Energy Laboratory (NREL) Computational Science Center has launched a revamped website for users of the lab's high-performance computing (HPC) ...

  4. An investigation of the effects of touchpad location within a notebook computer.

    PubMed

    Kelaher, D; Nay, T; Lawrence, B; Lamar, S; Sommerich, C M

    2001-02-01

    This study evaluated effects of the location of a notebook computer's integrated touchpad, complementing previous work in the area of desktop mouse location effects. Most often, integrated touchpads are located in the computer's wrist rest, centered on the keyboard. This study characterized effects of this bottom center location and four alternatives (top center, top right, right side, and bottom right) upon upper extremity posture, discomfort, preference, and performance. Touchpad location was found to significantly impact each of those measures. The top center location was particularly poor, in that it elicited more ulnar deviation, more shoulder flexion, more discomfort, and perceptions of performance impedance. In general, the bottom center, bottom right, and right side locations fared better, though subjects' wrists were more extended in the bottom locations. Suggestions for notebook computer design are provided.

  5. High performance computing for advanced modeling and simulation of materials

    NASA Astrophysics Data System (ADS)

    Wang, Jue; Gao, Fei; Vazquez-Poletti, Jose Luis; Li, Jianjiang

    2017-02-01

    The First International Workshop on High Performance Computing for Advanced Modeling and Simulation of Materials (HPCMS2015) was held in Austin, Texas, USA, Nov. 18, 2015. HPCMS 2015 was organized by Computer Network Information Center (Chinese Academy of Sciences), University of Michigan, Universidad Complutense de Madrid, University of Science and Technology Beijing, Pittsburgh Supercomputing Center, China Institute of Atomic Energy, and Ames Laboratory.

  6. A Queue Simulation Tool for a High Performance Scientific Computing Center

    NASA Technical Reports Server (NTRS)

    Spear, Carrie; McGalliard, James

    2007-01-01

    The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high-performance, highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long-running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures, and resource allocation policies.
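    To make the discrete event simulation idea concrete, here is a minimal, illustrative batch-queue sketch in Python. It is not the NCCS tool; the CPU count, arrival rate, job widths, and runtimes are invented parameters, and the scheduler is plain FIFO.

    ```python
    # Minimal discrete-event sketch of a batch queue feeding a fixed CPU pool.
    # Hypothetical parameters, not NCCS data; reports mean wait and utilization.
    import heapq, random

    def simulate(total_cpus=1024, n_jobs=2000, mean_interarrival=300.0,
                 mean_runtime=6 * 3600.0, max_width=256, seed=1):
        random.seed(seed)
        free = total_cpus
        completions = []        # min-heap of (finish_time, cpus_released)
        queue = []              # FIFO list of (arrival_time, cpus, runtime)
        waits, busy_cpu_time = [], 0.0
        next_arrival, jobs_left, t, last_t = 0.0, n_jobs, 0.0, 0.0
        while jobs_left or queue or completions:
            t_end = completions[0][0] if completions else float("inf")
            t_arr = next_arrival if jobs_left else float("inf")
            t = min(t_arr, t_end)
            busy_cpu_time += (total_cpus - free) * (t - last_t)
            last_t = t
            if t == t_arr:                        # a new job arrives
                width = random.randint(1, max_width)
                runtime = random.expovariate(1.0 / mean_runtime)
                queue.append((t, width, runtime))
                next_arrival = t + random.expovariate(1.0 / mean_interarrival)
                jobs_left -= 1
            else:                                 # a running job finishes
                _, cpus = heapq.heappop(completions)
                free += cpus
            while queue and queue[0][1] <= free:  # start queued jobs FIFO
                arr, width, runtime = queue.pop(0)
                waits.append(t - arr)
                free -= width
                heapq.heappush(completions, (t + runtime, width))
        return sum(waits) / len(waits), busy_cpu_time / (t * total_cpus)

    mean_wait, utilization = simulate()
    print(f"mean queue wait {mean_wait / 3600:.1f} h, CPU utilization {utilization:.0%}")
    ```

    Changing the queue structure or CPU pool in such a model is how "what-if" questions about upgrades and allocation policies can be explored before touching the production system.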

  7. Facilities | Integrated Energy Solutions | NREL

    Science.gov Websites

    ... strategies needed to optimize our entire energy system. High-Performance Computing Data Center: High-performance computing facilities at NREL provide high-speed ...

  8. High Performance Computing Meets Energy Efficiency - Continuum Magazine |

    Science.gov Websites

    The new High Performance Computing Data Center at the National Renewable Energy Laboratory (NREL) hosts high-speed, high-volume data ... (Turbine simulation by Patrick J. Moriarty and Matthew J. Churchfield, NREL.)

  9. Kevin Regimbal | NREL

    Science.gov Websites

    Kevin Regimbal oversees NREL's High Performance Computing (HPC) Systems & Operations, engineering, and operations. Kevin is interested in data center design and computing as well as data center integration and optimization. Professional experience includes HPC oversight as program manager, project manager, and center ...

  10. Roy Fraley | NREL

    Science.gov Websites

    Roy Fraley, Professional II-Engineer, Roy.Fraley@nrel.gov | 303-384-6468. Roy Fraley is the high-performance computing (HPC) data center engineer with the Computational Science Center's HPC ...

  11. HPCCP/CAS Workshop Proceedings 1998

    NASA Technical Reports Server (NTRS)

    Schulbach, Catherine; Mata, Ellen (Editor); Schulbach, Catherine (Editor)

    1999-01-01

    This publication is a collection of extended abstracts of presentations given at the HPCCP/CAS (High Performance Computing and Communications Program/Computational Aerosciences Project) Workshop held on August 24-26, 1998, at NASA Ames Research Center, Moffett Field, California. The objective of the Workshop was to bring together the aerospace high performance computing community, consisting of airframe and propulsion companies, independent software vendors, university researchers, and government scientists and engineers. The Workshop was sponsored by the HPCCP Office at NASA Ames Research Center. The Workshop consisted of over 40 presentations, including an overview of NASA's High Performance Computing and Communications Program and the Computational Aerosciences Project; ten sessions of papers representative of the high performance computing research conducted within the Program by the aerospace industry, academia, NASA, and other government laboratories; two panel sessions; and a special presentation by Mr. James Bailey.

  12. Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damevski, Kostadin

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  13. About High-Performance Computing at NREL | High-Performance Computing |

    Science.gov Websites

    Day(s): First Thursday of every month; Hours: 11 a.m.-12 p.m.; Location: ESIF B211-Edison Conference Room; Contact: Jennifer Southerland. Insight Center - Visualization Tools: Day(s): Every Monday; Hours: 10 ... Data System: Day(s): Every Monday; Hours: 10 a.m.-11 a.m.; Location: ESIF B308-Insight Center.

  14. Current state and future direction of computer systems at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased, there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  15. Cloudbursting - Solving the 3-body problem

    NASA Astrophysics Data System (ADS)

    Chang, G.; Heistand, S.; Vakhnin, A.; Huang, T.; Zimdars, P.; Hua, H.; Hood, R.; Koenig, J.; Mehrotra, P.; Little, M. M.; Law, E.

    2014-12-01

    Many science projects in the future will be accomplished through collaboration among two or more NASA centers along with, potentially, external scientists. Science teams will be composed of more geographically dispersed individuals and groups. However, the current computing environment does not make this easy and seamless. By being able to share computing resources among members of a multi-center team working on a science/engineering project, limited pre-competition funds could be more efficiently applied and technical work could be conducted more effectively with less time spent moving data or waiting for computing resources to free up. Based on the work from a NASA CIO IT Labs task, this presentation will highlight our prototype work in identifying the feasibility and the obstacles, both technical and management, of performing "Cloudbursting" among private clouds located at three different centers. We will demonstrate the use of private cloud computing infrastructure at the Jet Propulsion Laboratory, Langley Research Center, and Ames Research Center to provide elastic computation to each other to perform parallel Earth Science data imaging. We leverage elastic load balancing and auto-scaling features at each data center so that each location can independently define how many resources to allocate to a particular job that was "bursted" from another data center and demonstrate that compute capacity scales up and down with the job. We will also discuss future work in the area, which could include the use of cloud infrastructure from different cloud framework providers as well as other cloud service providers.

  16. Join the Center for Applied Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamblin, Todd; Bremer, Timo; Van Essen, Brian

    The Center for Applied Scientific Computing serves as Livermore Lab’s window to the broader computer science, computational physics, applied mathematics, and data science research communities. In collaboration with academic, industrial, and other government laboratory partners, we conduct world-class scientific research and development on problems critical to national security. CASC applies the power of high-performance computing and the efficiency of modern computational methods to the realms of stockpile stewardship, cyber and energy security, and knowledge discovery for intelligence applications.

  17. NASA Center for Climate Simulation (NCCS) Advanced Technology AT5 Virtualized Infiniband Report

    NASA Technical Reports Server (NTRS)

    Thompson, John H.; Bledsoe, Benjamin C.; Wagner, Mark; Shakshober, John; Fromkin, Russ

    2013-01-01

    The NCCS is part of the Computational and Information Sciences and Technology Office (CISTO) of Goddard Space Flight Center's (GSFC) Sciences and Exploration Directorate. The NCCS's mission is to enable scientists to increase their understanding of the Earth, the solar system, and the universe by supplying state-of-the-art high performance computing (HPC) solutions. To accomplish this mission, the NCCS (https://www.nccs.nasa.gov) provides high performance compute engines, mass storage, and network solutions to meet the specialized needs of the Earth and space science user communities.

  18. Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew

    2013-06-30

    As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
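    As a worked illustration of the DCeP ratio described above (useful work produced divided by the energy consumed to produce it), the short sketch below computes the metric from a hypothetical task mix; the task names, usefulness weights, and energy figure are invented, not values from this study.

    ```python
    # Hedged illustration of the DCeP ratio: useful work / energy consumed.
    # Task counts, weights, and the kWh figure are invented for illustration.
    completed_tasks = {"climate_run": 12, "genome_batch": 40, "viz_render": 200}
    task_value = {"climate_run": 50.0, "genome_batch": 5.0, "viz_render": 0.5}  # relative "usefulness"
    energy_kwh = 3200.0  # total facility energy over the assessment window

    useful_work = sum(n * task_value[t] for t, n in completed_tasks.items())
    dcep = useful_work / energy_kwh
    print(f"DCeP = {dcep:.3f} units of useful work per kWh")
    ```

    The hard part in practice is agreeing on the "usefulness" weights, which is exactly why the paper treats DCeP as an experimental outcome rather than a universal constant.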

  19. Optical interconnection networks for high-performance computing systems

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  20. Computer programs: Operational and mathematical, a compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.

  1. Vanderbilt University Institute of Imaging Science Center for Computational Imaging XNAT: A multimodal data archive and processing environment.

    PubMed

    Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A

    2016-01-01

    The Vanderbilt University Institute of Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database built on XNAT housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large-scale batch processing of images, and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A Python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high-performance computing center. All software is made available as open source for use in combining portable batch scripting (PBS) grids and XNAT servers. Copyright © 2015 Elsevier Inc. All rights reserved.
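    The abstract describes a Python middleware layer that farms image processing out to a PBS grid. The sketch below shows the general pattern of generating and submitting a PBS job script from Python; it is not the DAX API, and the queue name, walltime, and `process_scan.sh` command are placeholders.

    ```python
    # Generic sketch of submitting per-scan jobs to a PBS/Torque grid from Python.
    # NOT the DAX API; script contents and queue settings are assumptions.
    import subprocess, textwrap

    def submit_pbs_job(scan_id, queue="batch", walltime="04:00:00"):
        script = textwrap.dedent(f"""\
            #!/bin/bash
            #PBS -N proc_{scan_id}
            #PBS -q {queue}
            #PBS -l walltime={walltime},nodes=1:ppn=1
            process_scan.sh {scan_id}   # placeholder processing command
            """)
        # qsub reads the job script from stdin and prints the assigned job id
        result = subprocess.run(["qsub"], input=script, text=True,
                                capture_output=True, check=True)
        return result.stdout.strip()

    for scan in ["scan_0001", "scan_0002"]:
        print(scan, "->", submit_pbs_job(scan))
    ```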

  2. Energy 101: Energy Efficient Data Centers

    ScienceCinema

    None

    2018-04-16

    Data centers provide mission-critical computing functions vital to the daily operation of top U.S. economic, scientific, and technological organizations. These data centers consume large amounts of energy to run and maintain their computer systems, servers, and associated high-performance components—up to 3% of all U.S. electricity powers data centers. And as more information comes online, data centers will consume even more energy. Data centers can become more energy efficient by incorporating features like power-saving "stand-by" modes, energy monitoring software, and efficient cooling systems instead of energy-intensive air conditioners. These and other efficiency improvements to data centers can produce significant energy savings, reduce the load on the electric grid, and help protect the nation by increasing the reliability of critical computer operations.

  3. Staff | Computational Science | NREL

    Science.gov Websites

    ... develops and leads laboratory-wide efforts in high-performance computing and energy-efficient data centers. Professional IV-High Perf Computing, Jim.Albin@nrel.gov, 303-275-4069. Ananthan, Shreyas, Senior Scientist - High-Performance Algorithms and Modeling, Shreyas.Ananthan@nrel.gov, 303-275-4807. Bendl, Kurt, IT Professional IV-High ...

  4. Expanding the Scope of High-Performance Computing Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uram, Thomas D.; Papka, Michael E.

    The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.

  5. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    NASA Technical Reports Server (NTRS)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of the DSN and monitoring all multi-mission spacecraft tracking activities in real time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia, and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements for the computer-human interfaces became the dominant theme for the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.

  6. Expanding HPC and Research Computing--The Sustainable Way

    ERIC Educational Resources Information Center

    Grush, Mary

    2009-01-01

    Increased demands for research and high-performance computing (HPC)--along with growing expectations for cost and environmental savings--are putting new strains on the campus data center. More and more, CIOs like the University of Notre Dame's (Indiana) Gordon Wishon are seeking creative ways to build more sustainable models for data center and…

  7. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    DTIC Science & Technology

    2011-01-01

    ... area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ... Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at ... atomic and molecular level, he said. He noted that "every general would like to have" a Star Trek-like holodeck, where holographic avatars could ...

  8. The role of dedicated data computing centers in the age of cloud computing

    NASA Astrophysics Data System (ADS)

    Caramarcu, Costin; Hollowell, Christopher; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2017-10-01

    Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently reorganized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC) at BNL. This presentation discusses the evolving role of the RACF at BNL, in light of its growing portfolio of responsibilities and its increasing integration with cloud (academic and for-profit) computing activities. We also discuss BNL’s plan to build a new computing center to support the new responsibilities of the RACF and present a summary of the cost benefit analysis done, including the types of computing activities that benefit most from a local data center vs. cloud computing. This analysis is partly based on an updated cost comparison of Amazon EC2 computing services and the RACF, which was originally conducted in 2012.

  9. Hypothesis generation using network structures on community health center cancer-screening performance.

    PubMed

    Carney, Timothy Jay; Morgan, Geoffrey P; Jones, Josette; McDaniel, Anna M; Weaver, Michael T; Weiner, Bryan; Haggstrom, David A

    2015-10-01

    Nationally sponsored cancer-care quality-improvement efforts have been deployed in community health centers to increase breast, cervical, and colorectal cancer-screening rates among vulnerable populations. Despite several immediate and short-term gains, screening rates remain below national benchmark objectives. Overall improvement has been both difficult to sustain over time in some organizational settings and/or challenging to diffuse to other settings as repeatable best practices. Reasons for this include facility-level changes, which typically occur in dynamic organizational environments that are complex, adaptive, and unpredictable. This study seeks to understand the factors that shape community health center facility-level cancer-screening performance over time. This study applies a computational-modeling approach, combining principles of health-services research, health informatics, network theory, and systems science. To investigate the roles of knowledge acquisition, retention, and sharing within the setting of the community health center, and to examine their effects on the relationship between clinical decision support capabilities and cancer-screening rate improvement, we employed Construct-TM to create simulated community health centers using previously collected point-in-time survey data. Construct-TM is a multi-agent model of network evolution. Because social, knowledge, and belief networks co-evolve, groups and organizations are treated as complex systems to capture the variability of human and organizational factors. In Construct-TM, individuals and groups interact by communicating, learning, and making decisions in a continuous cycle. Data from the survey were used to differentiate high-performing simulated community health centers from low-performing ones based on computer-based decision support usage and self-reported cancer-screening improvement. This virtual experiment revealed that patterns of overall network symmetry, agent cohesion, and connectedness varied by community health center performance level. Visual assessment of both the agent-to-agent knowledge sharing network and agent-to-resource knowledge use network diagrams demonstrated that community health centers labeled as high performers typically showed higher levels of collaboration and cohesiveness among agent classes, faster knowledge-absorption rates, and fewer agents that were unconnected to key knowledge resources. Conclusions and research implications: Using the point-in-time survey data outlining community health center cancer-screening practices, our computational model successfully distinguished between high and low performers. Results indicated that high-performance environments displayed distinctive network characteristics in patterns of interaction among agents, as well as in the access and utilization of key knowledge resources. Our study demonstrated how non-network-specific data obtained from a point-in-time survey can be employed to forecast community health center performance over time, thereby enhancing the sustainability of long-term strategic-improvement efforts. Our results revealed a strategic profile for community health center cancer-screening improvement via simulation over a projected 10-year period. The use of computational modeling allows additional inferential knowledge to be drawn from existing data when examining organizational performance in increasingly complex environments. Copyright © 2015 Elsevier Inc. All rights reserved.
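    To give a flavor of the agent-based approach described above, here is a toy knowledge-diffusion sketch in Python. It is not Construct-TM: the network sizes, interaction probability, and the contrast between a sparse and a denser network are invented purely to illustrate how connectedness can drive knowledge-absorption rates.

    ```python
    # Toy agent-based sketch (not Construct-TM): agents on a communication network
    # pass knowledge "facts" each cycle; denser networks absorb knowledge faster.
    import random

    def knowledge_absorption(edges, n_agents=20, n_facts=30, cycles=40, seed=0):
        """Fraction of all (agent, fact) pairs known after `cycles` exchange rounds."""
        rng = random.Random(seed)
        knows = [set() for _ in range(n_agents)]
        knows[0] = set(range(n_facts))            # agent 0 holds the key knowledge resource
        for _ in range(cycles):
            for a, b in edges:                    # each tie gives a chance to pass one fact
                if knows[a] and rng.random() < 0.3:
                    knows[b].add(rng.choice(sorted(knows[a])))
                if knows[b] and rng.random() < 0.3:
                    knows[a].add(rng.choice(sorted(knows[b])))
        return sum(len(k) for k in knows) / (n_agents * n_facts)

    n = 20
    ring = [(i, (i + 1) % n) for i in range(n)]    # sparse, low-cohesion network
    extra = [(i, (i + 7) % n) for i in range(n)]   # added cross-ties -> denser network
    print("sparse network absorption:", round(knowledge_absorption(ring), 2))
    print("dense network absorption: ", round(knowledge_absorption(ring + extra), 2))
    ```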

  10. Computer-Assisted Performance Evaluation for Navy Anti-Air Warfare Training: Concepts, Methods, and Constraints.

    ERIC Educational Resources Information Center

    Chesler, David J.

    An improved general methodological approach for the development of computer-assisted evaluation of trainee performance in the computer-based simulation environment is formulated in this report. The report focuses on the Tactical Advanced Combat Direction and Electronic Warfare system (TACDEW) at the Fleet Anti-Air Warfare Training Center at San…

  11. Evaluation of Rankine cycle air conditioning system hardware by computer simulation

    NASA Technical Reports Server (NTRS)

    Healey, H. M.; Clark, D.

    1978-01-01

    A computer program for simulating the performance of a variety of solar-powered Rankine cycle air conditioning system (RCACS) components has been developed. The computer program models actual equipment by developing performance maps from manufacturers' data and is capable of simulating off-design operation of the RCACS components. The program, designed to be a subroutine of the Marshall Space Flight Center (MSFC) Solar Energy System Analysis Computer Program 'SOLRAD', is a complete package suitable for use by an occasional computer user in developing performance maps of heating, ventilation, and air conditioning components.
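    The "performance map" idea above can be illustrated with a small interpolation sketch: tabulated manufacturer-style data points are wrapped in an interpolator so off-design operating points can be evaluated inside a larger system simulation. The temperatures and capacities below are invented, and SciPy stands in for whatever fitting the original program used.

    ```python
    # Hedged sketch of a component performance map built from tabulated data.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Invented manufacturer-style table: capacity vs. evaporator/condenser temperature.
    evap_temp_F = np.array([40.0, 45.0, 50.0])
    cond_temp_F = np.array([95.0, 105.0, 115.0])
    capacity_tons = np.array([[3.1, 2.8, 2.4],   # rows follow evap_temp_F, cols cond_temp_F
                              [3.4, 3.0, 2.6],
                              [3.7, 3.3, 2.9]])

    capacity_map = RegularGridInterpolator((evap_temp_F, cond_temp_F), capacity_tons)
    off_design = capacity_map([[47.0, 100.0]]).item()   # query an off-design operating point
    print(f"estimated capacity at 47 F evap / 100 F cond: {off_design:.2f} tons")
    ```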

  12. The Kepler Science Data Processing Pipeline Source Code Road Map

    NASA Technical Reports Server (NTRS)

    Wohler, Bill; Jenkins, Jon M.; Twicken, Joseph D.; Bryson, Stephen T.; Clarke, Bruce Donald; Middour, Christopher K.; Quintana, Elisa Victoria; Sanderfer, Jesse Thomas; Uddin, Akm Kamal; Sabale, Anima

    2016-01-01

    We give an overview of the operational concepts and architecture of the Kepler Science Processing Pipeline. Designed, developed, operated, and maintained by the Kepler Science Operations Center (SOC) at NASA Ames Research Center, the Science Processing Pipeline is a central element of the Kepler Ground Data System. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center which hosts the computers required to perform data analysis. The SOC's charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Processing Pipeline, including the software algorithms. We present the high-performance, parallel computing software modules of the pipeline that perform transit photometry, pixel-level calibration, systematic error correction, attitude determination, stellar target management, and instrument characterization.

  13. A resource-sharing model based on a repeated game in fog computing.

    PubMed

    Sun, Yan; Zhang, Nan

    2017-03-01

    With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.
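    To illustrate the kind of incentive mechanism the abstract describes, here is a toy repeated-game sketch: resource owners repeatedly choose whether to share spare capacity, and credits that scale with reputation make long-run cooperation more attractive than short-term hoarding. The payoff values and update rules are invented and are not the paper's algorithm.

    ```python
    # Toy repeated-game incentive sketch (assumptions only, not the paper's scheme).
    import random

    def play(rounds=50, n_owners=10, reward=1.0, defect_gain=1.5, seed=0):
        rng = random.Random(seed)
        reputation = [1.0] * n_owners
        payoff = [0.0] * n_owners
        for _ in range(rounds):
            for i in range(n_owners):
                share = rng.random() < reputation[i]        # cooperative tendency
                if share:
                    payoff[i] += reward * reputation[i]      # credits scale with reputation
                    reputation[i] = min(1.0, reputation[i] + 0.02)
                else:
                    payoff[i] += defect_gain                 # short-term gain from hoarding
                    reputation[i] = max(0.0, reputation[i] - 0.2)
        return payoff, reputation

    payoff, reputation = play()
    print("mean payoff:", round(sum(payoff) / len(payoff), 2),
          "mean final reputation:", round(sum(reputation) / len(reputation), 2))
    ```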

  14. Webinar: Delivering Transformational HPC Solutions to Industry

    ScienceCinema

    Streitz, Frederick

    2018-01-16

    Dr. Frederick Streitz, director of the High Performance Computing Innovation Center, discusses Lawrence Livermore National Laboratory computational capabilities and expertise available to industry in this webinar.

  15. Initial Comparison of Single Cylinder Stirling Engine Computer Model Predictions with Test Results

    NASA Technical Reports Server (NTRS)

    Tew, R. C., Jr.; Thieme, L. G.; Miao, D.

    1979-01-01

    A Stirling engine digital computer model developed at NASA Lewis Research Center was configured to predict the performance of the GPU-3 single-cylinder rhombic drive engine. Revisions to the basic equations and assumptions are discussed. Model predictions are compared with the early results of the Lewis Research Center GPU-3 tests.

  16. Investigating Impact Metrics for Performance for the US EPA National Center for Computational Toxicology (ACS Fall meeting)

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data drive...

  17. Tools for 3D scientific visualization in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example, visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high-speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY2, and the high-speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively during the simulation. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results of the simulation are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed, as well as descriptions of other hardware for digital video and film recording.

  18. Argonne Out Loud: Computation, Big Data, and the Future of Cities

    ScienceCinema

    Catlett, Charlie

    2018-01-16

    Charlie Catlett, a Senior Computer Scientist at Argonne and Director of the Urban Center for Computation and Data at the Computation Institute of the University of Chicago and Argonne, talks about how he and his colleagues are using high-performance computing, data analytics, and embedded systems to better understand and design cities.

  19. NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)

    ScienceCinema

    None

    2018-02-07

    NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.

  20. NETL - Supercomputing: NETL Simulation Based Engineering User Center (SBEUC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2013-09-30

    NETL's Simulation-Based Engineering User Center, or SBEUC, integrates one of the world's largest high-performance computers with an advanced visualization center. The SBEUC offers a collaborative environment among researchers at NETL sites and those working through the NETL-Regional University Alliance.

  1. Evolving Storage and Cyber Infrastructure at the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen; Duffy, Daniel; Spear, Carrie; Sinno, Scott; Vaughan, Garrison; Bowen, Michael

    2018-01-01

    This talk will describe recent developments at the NASA Center for Climate Simulation, which is funded by NASA's Science Mission Directorate, and supports the specialized data storage and computational needs of weather, ocean, and climate researchers, as well as astrophysicists, heliophysicists, and planetary scientists. To meet requirements for higher-resolution, higher-fidelity simulations, the NCCS augments its High Performance Computing (HPC) and storage/retrieval environment. As the petabytes of model and observational data grow, the NCCS is broadening data services offerings and deploying and expanding virtualization resources for high performance analytics.

  2. High-Performance Java Codes for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale, computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  3. InfoMall: An Innovative Strategy for High-Performance Computing and Communications Applications Development.

    ERIC Educational Resources Information Center

    Mills, Kim; Fox, Geoffrey

    1994-01-01

    Describes the InfoMall, a program led by the Northeast Parallel Architectures Center (NPAC) at Syracuse University (New York). The InfoMall features a partnership of approximately 24 organizations offering linked programs in High Performance Computing and Communications (HPCC) technology integration, software development, marketing, education and…

  4. A cross-sectional study of the effects of load carriage on running characteristics and tibial mechanical stress: implications for stress fracture injuries in women

    DTIC Science & Technology

    2017-03-23

    ... performance computing resources made available by the US Department of Defense High Performance Computing Modernization Program at the Air Force ... Department of Defense Biotechnology High Performance Computing Software Applications Institute, Telemedicine and Advanced Technology Research Center, United States Army Medical Research and Materiel Command, Fort Detrick, Maryland, USA.

  5. SANs and Large Scale Data Migration at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen M.

    2004-01-01

    Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.

  6. Developing computer training programs for blood bankers.

    PubMed

    Eisenbrey, L

    1992-01-01

    Two surveys were conducted in July 1991 to gather information about computer training currently performed within American Red Cross Blood Services Regions. One survey was completed by computer trainers from software developer-vendors and regional centers. The second survey was directed to the trainees, to determine their perception of the computer training. The surveys identified the major concepts, length of training, evaluations, and methods of instruction used. Strengths and weaknesses of training programs were highlighted by trainee respondents. Using the survey information and other sources, recommendations (including those concerning which computer skills and tasks should be covered) are made that can be used as guidelines for developing comprehensive computer training programs at any blood bank or blood center.

  7. Kepler Science Operations Center Architecture

    NASA Technical Reports Server (NTRS)

    Middour, Christopher; Klaus, Todd; Jenkins, Jon; Pletcher, David; Cote, Miles; Chandrasekaran, Hema; Wohler, Bill; Girouard, Forrest; Gunter, Jay P.; Uddin, Kamal

    2010-01-01

    We give an overview of the operational concepts and architecture of the Kepler Science Data Pipeline. Designed, developed, operated, and maintained by the Science Operations Center (SOC) at NASA Ames Research Center, the Kepler Science Data Pipeline is a central element of the Kepler Ground Data System. The SOC charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Data Pipeline, including the hardware infrastructure, scientific algorithms, and operational procedures. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center that hosts the computers required to perform data analysis. We discuss the high-performance, parallel computing software modules of the Kepler Science Data Pipeline that perform transit photometry, pixel-level calibration, systematic error-correction, attitude determination, stellar target management, and instrument characterization. We explain how data processing environments are divided to support operational processing and test needs. We explain the operational timelines for data processing and the data constructs that flow into the Kepler Science Data Pipeline.

  8. Computational Nanotechnology Molecular Electronics, Materials and Machines

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    This presentation covers research being performed on computational nanotechnology, carbon nanotubes and fullerenes at the NASA Ames Research Center. Topics cover include: nanomechanics of nanomaterials, nanotubes and composite materials, molecular electronics with nanotube junctions, kinky chemistry, and nanotechnology for solid-state quantum computers using fullerenes.

  9. Performance assessment of KORAT-3D on the ANL IBM-SP computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.

    1999-09-01

    The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER and RBMK systems and other nuclear systems. The neutronic module in this code system is KORAT-3D. This module is also one of the most computationally intensive components of the code system. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve the application of >100,000 nodes. An evaluation of the KORAT-3D code performance was recently undertaken on the Argonne National Laboratory (ANL) IBM ScalablePower (SP) parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM-SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).

  10. High-Performance Computing Data Center Power Usage Effectiveness |

    Science.gov Websites

    When the Energy Systems Integration Facility (ESIF) was conceived, NREL set an ... heating, ventilation, and air conditioning (HVAC), which captures fan walls and fan coils that support the data center ...

  11. GSDC: A Unique Data Center in Korea for HEP research

    NASA Astrophysics Data System (ADS)

    Ahn, Sang-Un

    2017-04-01

    Global Science experimental Data hub Center (GSDC) at Korea Institute of Science and Technology Information (KISTI) is a unique data center in South Korea established to promote fundamental research fields by supporting them with expertise in Information and Communication Technology (ICT) and infrastructure for High Performance Computing (HPC), High Throughput Computing (HTC), and networking. GSDC has supported various research fields in South Korea dealing with large-scale data, e.g., the RENO experiment for neutrino research, the LIGO experiment for gravitational wave detection, genome sequencing projects for bio-medical research, and HEP experiments such as CDF at FNAL, Belle at KEK, and STAR at BNL. In particular, GSDC has run a Tier-1 center for the ALICE experiment at the LHC at CERN since 2013. In this talk, we present an overview of the computing infrastructure that GSDC runs for these research fields and discuss the data center infrastructure management system deployed at GSDC.

  12. Computer Technology and Academic Skill Training for Improving Disabled Students' Academic Performance: Applications and Limitations.

    ERIC Educational Resources Information Center

    Severs, Mary K.

    The Educational Center for Disabled Students at the University of Nebraska-Lincoln is designed to improve the academic performance and attitudes toward success of disabled students through computer technology and academic skills training. Adaptive equipment interventions take into account keyboard access and screen and voice output. Non-adaptive…

  13. System Analysis for the Huntsville Operation Support Center, Distributed Computer System

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Massey, D.

    1985-01-01

    HOSC, as a distributed computing system, is responsible for data acquisition and analysis during Space Shuttle operations. HOSC also provides computing services for Marshall Space Flight Center's nonmission activities. As mission and nonmission activities change, so do the support functions of HOSC, demonstrating the need for some method of simulating activity at HOSC in various configurations. The simulation developed in this work primarily models the HYPERchannel network. The model simulates the activity of a steady-state network, reporting statistics such as transmitted bits, collision statistics, frame sequences transmitted, and average message delay. These statistics are used to evaluate such performance indicators as throughput, utilization, and delay. Thus the overall performance of the network is evaluated, as well as predicting possible overload conditions.
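    As a small worked example of how the performance indicators mentioned above are derived from steady-state simulation counters, the sketch below turns raw counts into throughput, utilization, and mean message delay; all numbers are placeholders rather than HOSC measurements.

    ```python
    # Placeholder counters from a hypothetical steady-state network simulation.
    sim_seconds = 600.0
    bits_transmitted = 1.8e9          # total bits carried during the run
    link_capacity_bps = 50e6          # assumed trunk capacity
    total_message_delay_s = 42.0      # sum of per-message delays
    messages_delivered = 12_000

    throughput_bps = bits_transmitted / sim_seconds
    utilization = throughput_bps / link_capacity_bps
    avg_delay_ms = 1000.0 * total_message_delay_s / messages_delivered
    print(f"throughput {throughput_bps / 1e6:.1f} Mb/s, "
          f"utilization {utilization:.0%}, mean delay {avg_delay_ms:.1f} ms")
    ```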

  14. Proceedings: Computer Science and Data Systems Technical Symposium, volume 1

    NASA Technical Reports Server (NTRS)

    Larsen, Ronald L.; Wallgren, Kenneth

    1985-01-01

    Progress reports and technical updates of programs being performed by NASA centers are covered. Presentations in viewgraph form are included for topics in three categories: computer science, data systems and space station applications.

  15. High-Performance Computing Data Center Efficiency Dashboard | Computational

    Science.gov Websites

    Energy recovery water (ERW) loop; heat exchanger for energy recovery; thermosyphon; heat exchanger between ERW loop and cooling tower loop; evaporative cooling towers.

  16. 3D Object Recognition: Symmetry and Virtual Views

    DTIC Science & Technology

    1992-12-01

    Artificial Intelligence Laboratory and Center for Biological and Computational Learning, A.I. Memo No. 1409 / C.B.C.L. Paper No. 76, December 1992, 545 Technology Square, Cambridge ... research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences, and at the Artificial ...

  17. Exploring the role of pendant amines in transition metal complexes for the reduction of N2 to hydrazine and ammonia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Papri; Prokopchuk, Demyan E.; Mock, Michael T.

    2017-03-01

    This review examines the synthesis and acid reactivity of transition metal dinitrogen complexes bearing diphosphine ligands containing pendant amine groups in the second coordination sphere. This manuscript is a review of the work performed in the Center for Molecular Electrocatalysis. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences. EPR studies on Fe were performed using EMSL, a national scientific user facility sponsored by the DOE's Office of Biological and Environmental Research and located at PNNL. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. DOE.

  18. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    PubMed

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
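    The client-driven, worker-pull pattern described above can be sketched in a few lines. The toy below uses Python threads and an in-process queue purely as an analogy (JobCenter itself is a Java client-server system with networked workers); the sleep stands in for a real multistep job.

    ```python
    # Toy worker-pull job queue: workers ask the queue for work until told to stop.
    import queue, threading, time

    jobs = queue.Queue()
    results = []

    def worker(name):
        while True:
            job = jobs.get()
            if job is None:                 # poison pill: no more work for this worker
                jobs.task_done()
                return
            time.sleep(0.01)                # stand-in for a multistep computation
            results.append((name, job, f"done:{job}"))
            jobs.task_done()

    threads = [threading.Thread(target=worker, args=(f"worker-{i}",)) for i in range(4)]
    for t in threads:
        t.start()
    for j in range(20):
        jobs.put(f"job-{j:02d}")
    for _ in threads:
        jobs.put(None)                      # one pill per worker
    jobs.join()
    for t in threads:
        t.join()
    print(len(results), "jobs completed by", len({r[0] for r in results}), "workers")
    ```

    Because workers pull rather than being pushed to, adding capacity is just a matter of starting more workers, which mirrors the "drop the client files onto any computer" scaling model described in the abstract.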

  19. Administration of Computer Resources.

    ERIC Educational Resources Information Center

    Franklin, Gene F.

    Computing at Stanford University has, until recently, been performed at one of five facilities. The Stanford hospital operates an IBM 370/135 mainly for administrative use. The university business office has an IBM 370/145 for its administrative needs and support of the medical clinic. Under the supervision of the Stanford Computation Center are…

  20. Cybersecurity: Utilizing Fusion Centers to Protect State, Local, Tribal, and Territorial Entities Against Cyber Threats

    DTIC Science & Technology

    2016-09-01

    Naval Postgraduate School, Monterey, CA 93943-5000 ... state- and local-level computer networks fertile ground for the cyber adversary. This research focuses on the threat to SLTT computer networks and how ... institutions, and banking systems. The array of responsibilities and the cybersecurity threat landscape make state- and local-level computer networks fertile ...

  1. A 20-year period of orthotopic liver transplantation activity in a single center: a time series analysis performed using the R Statistical Software.

    PubMed

    Santori, G; Andorno, E; Morelli, N; Casaccia, M; Bottino, G; Di Domenico, S; Valente, U

    2009-05-01

    In many Western countries, a "minimum volume rule" policy has been adopted as a quality measure for complex surgical procedures. In Italy, the National Transplant Centre set the minimum number of orthotopic liver transplantation (OLT) procedures/y at 25/center. OLT procedures performed in a single center for a reasonably large period may be treated as a time series to evaluate trend, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1987 and December 31, 2006, we performed 563 cadaveric donor OLTs to adult recipients. During 2007, there were another 28 procedures. The greatest numbers of OLTs/y were performed in 2001 (n = 51), 2005 (n = 50), and 2004 (n = 49). A time series analysis performed using R Statistical Software (R Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an incremental trend after exponential smoothing as well as after seasonal decomposition. The predicted OLT/mo for 2007, calculated with Holt-Winters exponential smoothing applied to the previous period 1987-2006, helped to identify the months where there was a major difference between predicted and performed procedures. The time series approach may be helpful to establish a minimum volume/y at a single-center level.
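    For readers who want to reproduce the forecasting step, the sketch below applies Holt-Winters exponential smoothing to a synthetic monthly series. The original analysis was done in R; this version uses Python's statsmodels, and the simulated counts are invented rather than the center's actual OLT data.

    ```python
    # Holt-Winters forecast on synthetic monthly counts (illustration only).
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(0)
    months = 240                                   # 20 years of monthly counts
    trend = np.linspace(1.0, 4.0, months)
    season = 0.8 * np.sin(2 * np.pi * np.arange(months) / 12)
    counts = np.clip(np.round(trend + season + rng.normal(0, 0.6, months)), 0, None)

    model = ExponentialSmoothing(counts, trend="add", seasonal="add",
                                 seasonal_periods=12).fit()
    forecast_next_year = model.forecast(12)        # predicted procedures per month
    print(np.round(forecast_next_year, 1))
    ```

    Comparing such a forecast against the months actually observed is what lets the authors flag periods with a large gap between predicted and performed procedures.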

  2. NREL Evaluates Aquarius Liquid-Cooled High-Performance Computing Technology

    Science.gov Websites

    ... HPC and influence the modern data center designer toward adoption of liquid cooling ... our shared technology ... Aquila and Sandia chose NREL's HPC Data Center for the initial installation and evaluation because the data center is configured for liquid cooling, along with the required instrumentation to ...

  3. Aerodynamic Characterization of a Modern Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Holland, Scott D.; Blevins, John A.

    2011-01-01

    A modern launch vehicle is by necessity an extremely integrated design. The accurate characterization of its aerodynamic characteristics is essential to determine design loads, to design flight control laws, and to establish performance. The NASA Ares Aerodynamics Panel has been responsible for technical planning, execution, and vetting of the aerodynamic characterization of the Ares I vehicle. An aerodynamics team supporting the Panel consists of wind tunnel engineers, computational engineers, database engineers, and other analysts that address topics such as uncertainty quantification. The team resides at three NASA centers: Langley Research Center, Marshall Space Flight Center, and Ames Research Center. The Panel has developed strategies to synergistically combine both the wind tunnel efforts and the computational efforts with the goal of validating the computations. Selected examples highlight key flow physics and, where possible, the fidelity of the comparisons between wind tunnel results and the computations. Lessons learned summarize what has been gleaned during the project and can be useful for other vehicle development projects.

  4. Polymer waveguides for electro-optical integration in data centers and high-performance computers.

    PubMed

    Dangel, Roger; Hofrichter, Jens; Horst, Folkert; Jubin, Daniel; La Porta, Antonio; Meier, Norbert; Soganci, Ibrahim Murat; Weiss, Jonas; Offrein, Bert Jan

    2015-02-23

    To satisfy the intra- and inter-system bandwidth requirements of future data centers and high-performance computers, low-cost low-power high-throughput optical interconnects will become a key enabling technology. To tightly integrate optics with the computing hardware, particularly in the context of CMOS-compatible silicon photonics, optical printed circuit boards using polymer waveguides are considered as a formidable platform. IBM Research has already demonstrated the essential silicon photonics and interconnection building blocks. A remaining challenge is electro-optical packaging, i.e., the connection of the silicon photonics chips with the system. In this paper, we present a new single-mode polymer waveguide technology and a scalable method for building the optical interface between silicon photonics chips and single-mode polymer waveguides.

  5. Proceedings: Computer Science and Data Systems Technical Symposium, volume 2

    NASA Technical Reports Server (NTRS)

    Larsen, Ronald L.; Wallgren, Kenneth

    1985-01-01

    Progress reports and technical updates of programs being performed by NASA centers are covered. Presentations in viewgraph form, along with abstracts, are included for topics in three categories: computer science, data systems, and space station applications.

  6. NASA's Participation in the National Computational Grid

    NASA Technical Reports Server (NTRS)

    Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)

    1998-01-01

    Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a Grid will allow engineers and scientists to use the tools of supercomputers, databases and on-line experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several events in recent times are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI), which is built around the idea of distributed high performance computing. The National Computational Science Alliance (the Alliance), led by the National Center for Supercomputing Applications (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputer Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high performance computing research programs to concentrate on distributed high performance computing and has banded together with the PACI centers to address the research agenda in common.

  7. A parallel-processing approach to computing for the geographic sciences; applications and systems enhancements

    USGS Publications Warehouse

    Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Liu, Shu-Guang; Nichols, Erin; Haga, Jim; Maddox, Brian; Bilderback, Chris; Feller, Mark; Homer, George

    2001-01-01

    The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost, personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting information science research into parallel computing systems and applications.

  8. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  9. High-Performance Computing Data Center | Computational Science | NREL

    Science.gov Websites

    The data center uses warm-water liquid cooling to achieve its very low PUE, then captures and reuses waste heat from computing components as the primary heating source; a dry cooler that uses refrigerant in a passive cycle to dissipate heat is reducing onsite water use. Topics covered: measuring efficiency through PUE, warm-water liquid cooling, and re-using waste heat from computing components.

  10. For operation of the Computer Software Management and Information Center (COSMIC)

    NASA Technical Reports Server (NTRS)

    Carmon, J. L.

    1983-01-01

    Computer programs for large systems of normal equations, an interactive digital signal process, structural analysis of cylindrical thrust chambers, swirling turbulent axisymmetric recirculating flows in practical isothermal combustor geometries, computation of three-dimensional combustor performance, a thermal radiation analysis system, transient response analysis, and a software design analysis are summarized.

  11. Center for Advanced Computational Technology

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    2000-01-01

    The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design prototyping and operations of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.

  12. Storage and network bandwidth requirements through the year 2000 for the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen

    1996-01-01

    The data storage and retrieval demands of space and Earth sciences researchers have made the NASA Center for Computational Sciences (NCCS) Mass Data Storage and Delivery System (MDSDS) one of the world's most active Convex UniTree systems. Science researchers formed the NCCS's Computer Environments and Research Requirements Committee (CERRC) to relate their projected supercomputing and mass storage requirements through the year 2000. Using the CERRC guidelines and observations of current usage, some detailed projections of requirements for MDSDS network bandwidth and mass storage capacity and performance are presented.

  13. A Fluid Structure Algorithm with Lagrange Multipliers to Model Free Swimming

    NASA Astrophysics Data System (ADS)

    Sahin, Mehmet; Dilek, Ezgi

    2017-11-01

    A new monolithic approach is proposed to solve the fluid-structure interaction (FSI) problem with Lagrange multipliers in order to model free swimming/flying. In the present approach, the fluid domain is modeled by the incompressible Navier-Stokes equations and discretized using an Arbitrary Lagrangian-Eulerian (ALE) formulation based on the stable side-centered unstructured finite volume method. The solid domain is modeled by the constitutive laws for the nonlinear Saint Venant-Kirchhoff material, and the classical Galerkin finite element method is used to discretize the governing equations in a Lagrangian frame. In order to impose the body motion/deformation, the distance between the constraint pair nodes is imposed using the Lagrange multipliers, which is independent of the frame of reference. The resulting algebraic linear equations are solved in a fully coupled manner using a dual approach (null space method). The present numerical algorithm is initially validated for the classical FSI benchmark problems and then applied to the free swimming of three linked ellipses. The authors are grateful for the use of the computing resources provided by the National Center for High Performance Computing (UYBHM) under Grant Number 10752009 and the computing facilities at TUBITAK-ULAKBIM, High Performance and Grid Computing Center.
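
    As a generic illustration of the constrained monolithic formulation (a sketch based on standard saddle-point structure, not equations taken from this abstract), enforcing the node-pair constraints with Lagrange multipliers leads to a coupled system of the form

        \begin{bmatrix} K & B^{\mathsf{T}} \\ B & 0 \end{bmatrix}
        \begin{bmatrix} u \\ \lambda \end{bmatrix}
        =
        \begin{bmatrix} f \\ g \end{bmatrix},

    where K collects the discretized ALE fluid and Saint Venant-Kirchhoff structural blocks, B encodes the distance constraints between the constraint pair nodes, u gathers the unknowns, and \lambda the multipliers; a dual (null-space) method solves for u in the subspace satisfying Bu = g before recovering \lambda.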

  14. The Impact of Wireless Technology on Order Selection Audits at an Auto Parts Distribution Center

    ERIC Educational Resources Information Center

    Goomas, David T.

    2012-01-01

    Audits of store order pallets or totes performed by auditors at five distribution centers (two experimental and three comparison distribution centers) were used to check for picking accuracy prior to being loaded onto a truck for store delivery. Replacing the paper audits with wireless handheld computers that included immediate auditory and visual…

  15. A Survey of Management Tasks Performed by Day Care Center Directors.

    ERIC Educational Resources Information Center

    Dent, Barbara

    The general problem addressed in this survey is the identification of the management training needs of day care center directors. A questionnaire was developed and mailed to 102 directors of full time, pre-school day care centers in Baltimore City. The directors' answers were tabulated and simple percentages were computed. Directors were asked to…

  16. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    PubMed Central

    2012-01-01

    Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423
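
    The client-driven, pull-based design described above can be illustrated with a generic polling worker; the endpoint URL and JSON fields below are hypothetical placeholders invented for illustration and are not JobCenter's actual protocol:

        import subprocess
        import time

        import requests

        SERVER = "https://example.org/jobcenter"  # hypothetical server endpoint

        def poll_and_run():
            """Worker initiates every request, so it can run behind a firewall or
            'in the cloud'; the server never needs to reach the worker directly."""
            while True:
                resp = requests.get(f"{SERVER}/next-job", timeout=30)
                if resp.status_code == 204:        # no work available right now
                    time.sleep(10)
                    continue
                job = resp.json()                  # e.g. {"id": 7, "cmd": ["blastp", "..."]}
                result = subprocess.run(job["cmd"], capture_output=True, text=True)
                requests.post(
                    f"{SERVER}/jobs/{job['id']}/result",
                    json={"returncode": result.returncode, "stdout": result.stdout},
                )

        # poll_and_run()  # would start the worker loop against a real server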

  17. Dynamism in Electronic Performance Support Systems.

    ERIC Educational Resources Information Center

    Laffey, James

    1995-01-01

    Describes a model for dynamic electronic performance support systems based on NNAble, a system developed by the training group at Apple Computer. Principles for designing dynamic performance support are discussed, including a systems approach, performer-centered design, awareness of situated cognition, organizational memory, and technology use.…

  18. Job Superscheduler Architecture and Performance in Computational Grid Environments

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak

    2003-01-01

    Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.

  19. A time series analysis performed on a 25-year period of kidney transplantation activity in a single center.

    PubMed

    Santori, G; Fontana, I; Bertocchi, M; Gasloli, G; Valente, U

    2010-05-01

    Following the example of many Western countries, where a "minimum volume rule" policy has been adopted as a quality parameter for complex surgical procedures, the Italian National Transplant Centre set the minimum number of kidney transplantation procedures/y at 30/center. The number of procedures performed in a single center over a large period may be treated as a time series to evaluate trends, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1983, and December 31, 2007, we performed 1376 procedures in adult or pediatric recipients from living or cadaveric donors. The greatest numbers of cases/y were performed in 1998 (n = 86) followed by 2004 (n = 82), 1996 (n = 75), and 2003 (n = 73). A time series analysis performed using R Statistical Software (Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an overall incremental trend after exponential smoothing as well as after seasonal decomposition. However, starting from 2005, we observed a decreasing trend in the series. Holt-Winters exponential smoothing applied to the period 1983 to 2007 predicted 58 kidney transplants for 2008, while 52 were actually performed that year. The time series approach may be helpful to establish a minimum volume/y at a single-center level. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  20. High Performance Computing (HPC) Innovation Service Portal Pilots Cloud Computing (HPC-ISP Pilot Cloud Computing)

    DTIC Science & Technology

    2011-08-01

    Figure captions from the report include: Figure 4, "Architectural diagram of running Blender on Amazon EC2 through Nimbis," and a classification-of-streaming-data example showing input images and all digit prototypes (cluster centers) found, with size proportional to frequency.

  1. User Centered System Design: Papers for the CHI '83 Conference on Human Factors in Computer Systems.

    ERIC Educational Resources Information Center

    California Univ., San Diego. Center for Human Information Processing.

    Four papers from the University of California at San Diego (UCSD) Project on Human-Computer Interfaces are presented in this report. "Evaluation and Analysis of User's Activity Organization," by Liam Bannon, Allen Cypher, Steven Greenspan, and Melissa Monty, analyzes the activities performed by users of computer systems, develops a…

  2. The Center for Computational Biology: resources, achievements, and challenges

    PubMed Central

    Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott

    2011-01-01

    The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators includes the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains. PMID:22081221

  3. The Center for Computational Biology: resources, achievements, and challenges.

    PubMed

    Toga, Arthur W; Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott

    2012-01-01

    The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators includes the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains.

  4. Use of a Food and Drug Administration-Approved Type 1 Diabetes Mellitus Simulator to Evaluate and Optimize a Proportional-Integral-Derivative Controller

    DTIC Science & Technology

    2012-11-01

    The simulations confirm that the PID algorithm can be applied to this cohort without the risk of hypoglycemia. Funding: the study was... Performance Computing Software Applications Institute, Telemedicine and Advanced Technology Research Center, U.S. Army Medical Research and Materiel Command. Keywords: safe operating region, type 1 diabetes mellitus simulator. Corresponding author: Jaques Reifman, Ph.D., DoD Biotechnology High-Performance Computing...

  5. A Computer Program Functional Design of the Simulation Subsystem of an Automated Central Flow Control System

    DOT National Transportation Integrated Search

    1976-08-01

    This report contains a functional design for the simulation of a future automation concept in support of the ATC Systems Command Center. The simulation subsystem performs airport airborne arrival delay predictions and computes flow control tables for...

  6. A Multi-center Milestone Study of Clinical Vertebral CT Segmentation

    PubMed Central

    Yao, Jianhua; Burns, Joseph E.; Forsberg, Daniel; Seitel, Alexander; Rasoulian, Abtin; Abolmaesumi, Purang; Hammernik, Kerstin; Urschler, Martin; Ibragimov, Bulat; Korez, Robert; Vrtovec, Tomaž; Castro-Mateos, Isaac; Pozo, Jose M.; Frangi, Alejandro F.; Summers, Ronald M.; Li, Shuo

    2017-01-01

    A multiple center milestone study of clinical vertebra segmentation is presented in this paper. Vertebra segmentation is a fundamental step for spinal image analysis and intervention. The first half of the study was conducted in the spine segmentation challenge at the 2014 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) Workshop on Computational Spine Imaging (CSI 2014). The objective was to evaluate the performance of several state-of-the-art vertebra segmentation algorithms on computed tomography (CT) scans using ten training and five testing datasets, all healthy cases; the second half of the study was conducted after the challenge, where an additional five abnormal cases were used for testing to evaluate performance on abnormal cases. Dice coefficients and absolute surface distances were used as evaluation metrics. Segmentation of each vertebra as a single geometric unit, as well as separate segmentation of vertebra substructures, was evaluated. Five teams participated in the comparative study. The top performers in the study achieved Dice coefficients of 0.93 in the upper thoracic, 0.95 in the lower thoracic and 0.96 in the lumbar spine for healthy cases, and 0.88 in the upper thoracic, 0.89 in the lower thoracic and 0.92 in the lumbar spine for osteoporotic and fractured cases. The strengths and weaknesses of each method as well as future suggestions for improvement are discussed. This is the first multi-center comparative study of vertebra segmentation methods, which will provide an up-to-date performance milestone for the fast-growing field of spinal image analysis and intervention. PMID:26878138
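
    For reference, the Dice coefficient used as one of the evaluation metrics compares two binary segmentations; a minimal sketch (not the challenge's own evaluation code):

        import numpy as np

        def dice_coefficient(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
            """Dice = 2|A intersect B| / (|A| + |B|) for two boolean vertebra masks."""
            a = seg_a.astype(bool)
            b = seg_b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        # Toy 3-D masks standing in for a reference and an automatic segmentation.
        reference = np.zeros((10, 10, 10), dtype=bool)
        reference[2:8, 2:8, 2:8] = True
        automatic = np.zeros_like(reference)
        automatic[3:8, 2:8, 2:8] = True
        print(round(dice_coefficient(reference, automatic), 3))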

  7. Computational structural mechanics methods research using an evolving framework

    NASA Technical Reports Server (NTRS)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.

  8. Performance Assessment Institute-NV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lombardo, Joesph

    2012-12-31

    The National Supercomputing Center for Energy and the Environment’s intention is to purchase a multi-purpose computer cluster in support of the Performance Assessment Institute (PA Institute). The PA Institute will serve as a research consortium located in Las Vegas, Nevada, with membership that includes: national laboratories, universities, industry partners, and domestic and international governments. This center will provide a one-of-a-kind centralized facility for the accumulation of information for use by Institutions of Higher Learning, the U.S. Government, and Regulatory Agencies and approved users. This initiative will enhance and extend High Performance Computing (HPC) resources in Nevada to support critical national and international needs in "scientific confirmation". The PA Institute will be promoted as the leading Modeling, Learning and Research Center worldwide. The program proposes to utilize the existing supercomputing capabilities and alliances of the University of Nevada Las Vegas as a base, and to extend these resources and capabilities through a collaborative relationship with its membership. The PA Institute will provide an academic setting for interactive sharing, learning, mentoring and monitoring of multi-disciplinary performance assessment and performance confirmation information. The role of the PA Institute is to facilitate research, knowledge-increase, and knowledge-sharing among users.

  9. Performance optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.

    1991-01-01

    As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist that minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general purpose optimization program CONMIN and approximate analyses. Sensitivity analyses consisting of derivatives of the objective function and constraints are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.
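
    The sensitivity step described above (forward finite differences of the objective and constraints with respect to the design variables) can be sketched generically; the placeholder function below merely stands in for the HOVT/CAMRAD analyses and is not the actual rotor model:

        import numpy as np

        def forward_fd_gradient(func, x, h=1e-4):
            """Forward finite-difference derivatives of a scalar design function:
            df/dx_i is approximated by (f(x + h*e_i) - f(x)) / h."""
            x = np.asarray(x, dtype=float)
            f0 = func(x)
            grad = np.empty_like(x)
            for i in range(x.size):
                xp = x.copy()
                xp[i] += h
                grad[i] = (func(xp) - f0) / h
            return grad

        # Placeholder objective standing in for hover horsepower as a function of
        # (taper initiation point, root chord, taper ratio, maximum twist).
        def hover_power(x):
            return 100.0 + 5.0 * x[0] ** 2 + 2.0 * (x[1] - 1.0) ** 2 + x[2] * x[3]

        x0 = np.array([0.5, 1.2, 0.6, -8.0])
        print(forward_fd_gradient(hover_power, x0))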

  10. Desktop Computing Integration Project

    NASA Technical Reports Server (NTRS)

    Tureman, Robert L., Jr.

    1992-01-01

    The Desktop Computing Integration Project for the Human Resources Management Division (HRMD) of LaRC was designed to help division personnel use personal computing resources to perform job tasks. The three goals of the project were to involve HRMD personnel in desktop computing, link mainframe data to desktop capabilities, and to estimate training needs for the division. The project resulted in increased usage of personal computers by Awards specialists, an increased awareness of LaRC resources to help perform tasks, and personal computer output that was used in presentation of information to center personnel. In addition, the necessary skills for HRMD personal computer users were identified. The Awards Office was chosen for the project because of the consistency of their data requests and the desire of employees in that area to use the personal computer.

  11. High Performance Computing Software Applications for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated an order-of-magnitude speed-up in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  12. ERDC MSRC (Major Shared Resource Center) Resource. High Performance Computing for the Warfighter. Fall 2008

    DTIC Science & Technology

    2008-01-01

    Contents include “Solving the Hard Problems” at UGC 2008 in Seattle, by Rose J. Dykes, ERDC MSRC... The ERDC MSRC attempts to take every available opportunity to encourage students to enter these two fields to remain competitive in the global market... Coverage of attendees of the 18th annual DoD High Performance Computing Modernization Program (HPCMP) Users Group Conference (UGC)...

  13. CMSA: a heterogeneous CPU/GPU computing system for multiple similar RNA/DNA sequence alignment.

    PubMed

    Chen, Xi; Wang, Chen; Tang, Shanjiang; Yu, Ce; Zou, Quan

    2017-06-24

    Multiple sequence alignment (MSA) is a classic and powerful technique for sequence analysis in bioinformatics. With the rapid growth of biological datasets, MSA parallelization becomes necessary to keep its running time at an acceptable level. Although there is a lot of work on MSA problems, the existing approaches are either insufficient or contain implicit assumptions that limit the generality of usage. First, the characteristics of users' sequences, including the sizes of datasets and the lengths of sequences, can take arbitrary values and are generally unknown before submission, which previous work unfortunately ignores. Second, the center star strategy is suited for aligning similar sequences, but its first stage, center sequence selection, is highly time-consuming and requires further optimization. Moreover, given the heterogeneous CPU/GPU platform, prior studies consider MSA parallelization on GPU devices only, leaving the CPUs idle during the computation. Co-run computation, however, can maximize the utilization of the computing resources by enabling workload computation on both CPU and GPU simultaneously. This paper presents CMSA, a robust and efficient MSA system for large-scale datasets on the heterogeneous CPU/GPU platform. It performs and optimizes multiple sequence alignment automatically for users' submitted sequences without any assumptions. CMSA adopts the co-run computation model so that both CPU and GPU devices are fully utilized. Moreover, CMSA proposes an improved center star strategy that reduces the time complexity of its center sequence selection process from O(mn²) to O(mn). The experimental results show that CMSA achieves an up to 11× speedup and outperforms the state-of-the-art software. CMSA focuses on multiple similar RNA/DNA sequence alignment and proposes a novel bitmap-based algorithm to improve the center star strategy. We can conclude that harvesting the high performance of modern GPUs is a promising approach to accelerate multiple sequence alignment, and that adopting the co-run computation model can significantly increase overall system utilization. The source code is available at https://github.com/wangvsa/CMSA.
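
    For context, the idea behind center-star selection is to pick the sequence whose summed distance to all others is smallest; below is a plain, unoptimized sketch of that selection step using edit distance as the pairwise score (an illustration of the concept only, not CMSA's bitmap-based algorithm or its O(mn) refinement):

        from itertools import combinations

        def edit_distance(a: str, b: str) -> int:
            """Classic dynamic-programming Levenshtein distance."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                 # deletion
                                   cur[j - 1] + 1,              # insertion
                                   prev[j - 1] + (ca != cb)))   # substitution
                prev = cur
            return prev[-1]

        def pick_center(seqs):
            """Return the index of the sequence minimizing total pairwise distance."""
            totals = [0] * len(seqs)
            for (i, a), (j, b) in combinations(enumerate(seqs), 2):
                d = edit_distance(a, b)
                totals[i] += d
                totals[j] += d
            return min(range(len(seqs)), key=totals.__getitem__)

        seqs = ["ACGTACGT", "ACGTTCGT", "ACGAACGA", "TCGTACGT"]
        print(pick_center(seqs))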

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    East, D. R.; Sexton, J.

    This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and IBM TJ Watson Research Center to research, assess feasibility and develop an implementation plan for a High Performance Computing Innovation Center (HPCIC) in the Livermore Valley Open Campus (LVOC). The ultimate goal of this work was to help advance the State of California and U.S. commercial competitiveness in the arena of High Performance Computing (HPC) by accelerating the adoption of computational science solutions, consistent with recent DOE strategy directives. The desired result of this CRADA was a well-researched, carefully analyzed market evaluation that would identify those firms in core sectors of the US economy seeking to adopt or expand their use of HPC to become more competitive globally, and to define how those firms could be helped by the HPCIC with IBM as an integral partner.

  15. Applied Computational Fluid Dynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1994-01-01

    The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.

  16. CSM Testbed Development and Large-Scale Structural Applications

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Gillian, R. E.; Mccleary, Susan L.; Lotts, C. G.; Poole, E. L.; Overman, A. L.; Macy, S. C.

    1989-01-01

    A research activity called Computational Structural Mechanics (CSM) conducted at the NASA Langley Research Center is described. This activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM Testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM Testbed methods development environment is presented and some new numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.

  17. CESDIS

    NASA Technical Reports Server (NTRS)

    1994-01-01

    CESDIS, the Center of Excellence in Space Data and Information Sciences was developed jointly by NASA, Universities Space Research Association (USRA), and the University of Maryland in 1988 to focus on the design of advanced computing techniques and data systems to support NASA Earth and space science research programs. CESDIS is operated by USRA under contract to NASA. The Director, Associate Director, Staff Scientists, and administrative staff are located on-site at NASA's Goddard Space Flight Center in Greenbelt, Maryland. The primary CESDIS mission is to increase the connection between computer science and engineering research programs at colleges and universities and NASA groups working with computer applications in Earth and space science. Research areas of primary interest at CESDIS include: 1) High performance computing, especially software design and performance evaluation for massively parallel machines; 2) Parallel input/output and data storage systems for high performance parallel computers; 3) Data base and intelligent data management systems for parallel computers; 4) Image processing; 5) Digital libraries; and 6) Data compression. CESDIS funds multiyear projects at U. S. universities and colleges. Proposals are accepted in response to calls for proposals and are selected on the basis of peer reviews. Funds are provided to support faculty and graduate students working at their home institutions. Project personnel visit Goddard during academic recess periods to attend workshops, present seminars, and collaborate with NASA scientists on research projects. Additionally, CESDIS takes on specific research tasks of shorter duration for computer science research requested by NASA Goddard scientists.

  18. Potential of Spark Ignition Engine, Effect of Vehicle Design Variables on Top Speed, Performance, and Fuel Economy

    DOT National Transportation Integrated Search

    1980-03-01

    The purpose of this report is to evaluate the effect of vehicle characteristics on vehicle performance and fuel economy. The studies were performed using the VEHSIM (vehicle simulation) program at the Transportation Systems Center. The computer simul...

  19. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will utilize the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon Machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  20. A complex valued radial basis function network for equalization of fast time varying channels.

    PubMed

    Gan, Q; Saratchandran, P; Sundararajan, N; Subramanian, K R

    1999-01-01

    This paper presents a complex-valued radial basis function (RBF) network for equalization of fast time-varying channels. A new method for calculating the centers of the RBF network is given. The method allows fixing the number of RBF centers even as the equalizer order is increased, so that good performance is obtained by a high-order RBF equalizer with a small number of centers. Simulations are performed on time-varying channels using a Rayleigh fading channel model to compare the performance of our RBF with an adaptive maximum-likelihood sequence estimator (MLSE) consisting of a channel estimator and an MLSE implemented by the Viterbi algorithm. The results show that the RBF equalizer produces superior performance with lower computational complexity.
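
    A bare-bones illustration of the forward pass of a complex-valued RBF equalizer with fixed Gaussian centers and complex output weights (a generic sketch, not the authors' center-calculation method):

        import numpy as np

        def rbf_equalizer_output(x, centers, weights, sigma=1.0):
            """y(x) = sum_k w_k * exp(-||x - c_k||^2 / (2 sigma^2)), where x is the
            complex received-signal vector, c_k the centers, w_k the complex weights."""
            d2 = np.sum(np.abs(x[None, :] - centers) ** 2, axis=1)  # squared distances
            phi = np.exp(-d2 / (2.0 * sigma ** 2))                  # real-valued basis
            return np.dot(weights, phi)                             # complex output

        # Toy example: a 2-tap received vector, 4 fixed centers, complex weights.
        x = np.array([0.9 + 0.1j, -1.1 + 0.2j])
        centers = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=complex)
        weights = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
        print(rbf_equalizer_output(x, centers, weights))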

  1. CSI computer system/remote interface unit acceptance test results

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.

    1992-01-01

    The validation tests conducted on the Control/Structures Interaction (CSI) Computer System (CCS)/Remote Interface Unit (RIU) are discussed. The CCS/RIU consists of a commercially available, Langley Research Center (LaRC) programmed, space flight qualified computer and a flight data acquisition and filtering computer, developed at LaRC. The tests were performed in the Space Structures Research Laboratory (SSRL) and included open-loop excitation, closed-loop control, safing, RIU digital filtering, and RIU stand-alone testing with the CSI Evolutionary Model (CEM) Phase-0 testbed. The test results indicated that the CCS/RIU system is comparable to ground-based systems in performing real-time control-structure experiments.

  2. RIACS

    NASA Technical Reports Server (NTRS)

    Moore, Robert C.

    1998-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities that serves as a bridge between NASA and the academic community. Under a five-year co-operative agreement with NASA, research at RIACS is focused on areas that are strategically enabling to the Ames Research Center's role as NASA's Center of Excellence for Information Technology. The primary mission of RIACS, as chartered, is to carry out research and development in computer science. This work is devoted in the main to tasks that are strategically enabling with respect to NASA's bold mission in space exploration and aeronautics. There are three foci for this work: (1) automated reasoning, (2) human-centered computing, and (3) high-performance computing and networking. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. Through its visiting scientist program, RIACS facilitates the participation of university-based researchers, including both faculty and students, in the research activities of NASA and RIACS. RIACS researchers work in close collaboration with NASA computer scientists on projects such as the Remote Agent Experiment on the Deep Space One mission and Super-Resolution Surface Modeling.

  3. Air Defense: A Computer Game for Research in Human Performance.

    DTIC Science & Technology

    1981-07-01

    ...warfare (ANW) threat analysis. Major elements of the threat analysis problem were embedded in an interactive air defense game controlled by a... The game requires sustained attention to a complex and interactive "hostile" environment and provides proper experimental control of relevant variables... (Navy Personnel Research and Development Center, San Diego; report AD-A102 725.)

  4. Fluid-Structure Interaction Using Retarded Potential and ABAQUS

    DTIC Science & Technology

    1992-08-19

    A retarded potential (RP) capability has been coupled to the ABAQUS program, through the DLOAD user-written subroutine, to form ABAQUS-RP... (C. T. Dyka, Geo-Centers, Inc., Fort Washington, MD 20744, and M. A. Tamm, Computer Operations and Communications Branch, Research Computation...)

  5. System and method for transferring telemetry data between a ground station and a control center

    NASA Technical Reports Server (NTRS)

    Ray, Timothy J. (Inventor); Ly, Vuong T. (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for coordinating communications between a ground station, a control center, and a spacecraft. The method receives a call to a simple, unified application programmer interface implementing communications protocols related to outer space. When the instruction relates to receiving a command at the control center for the ground station, the method generates an abstract message by agreeing with the ground station upon a format for each type of abstract message and using a set of message definitions to configure the command in the agreed-upon format, encodes the abstract message to generate an encoded message, and transfers the encoded message to the ground station; it performs similar actions when the instruction relates to receiving a second command as a second encoded message at the ground station from the control center and when the determined instruction type relates to transmitting information to the control center.
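
    A purely hypothetical sketch of the build-encode-transfer flow the abstract describes; every name here (message definitions, functions, host) is invented for illustration and is not the patented interface:

        import json
        import socket

        # Agreed-upon message definitions shared with the ground station (placeholder).
        MESSAGE_DEFINITIONS = {"command": ["name", "arguments"]}

        def make_abstract_message(msg_type, **fields):
            """Configure a command in the agreed-upon abstract-message format."""
            assert set(fields) == set(MESSAGE_DEFINITIONS[msg_type])
            return {"type": msg_type, **fields}

        def encode(message):
            """Encode the abstract message (JSON stands in for the real encoding)."""
            return json.dumps(message).encode("utf-8")

        def transfer(encoded, host="groundstation.example", port=9000):
            """Transfer the encoded message to the ground station endpoint."""
            with socket.create_connection((host, port), timeout=5) as sock:
                sock.sendall(encoded)

        payload = encode(make_abstract_message("command", name="SAFE_MODE", arguments=[]))
        # transfer(payload)  # would send to a real ground-station endpoint
        print(payload)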

  6. The CP-PACS parallel computer

    NASA Astrophysics Data System (ADS)

    Ukawa, Akira

    1998-05-01

    The CP-PACS computer is a massively parallel computer consisting of 2048 processing units and having a peak speed of 614 GFLOPS and 128 GByte of main memory. It was developed over the four years from 1992 to 1996 at the Center for Computational Physics, University of Tsukuba, for large-scale numerical simulations in computational physics, especially those of lattice QCD. The CP-PACS computer has been in full operation for physics computations since October 1996. In this article we describe the chronology of the development, the hardware and software characteristics of the computer, and its performance for lattice QCD simulations.
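
    For orientation, simple arithmetic on the figures quoted above gives the per-node peak:

        \frac{614\ \text{GFLOPS}}{2048\ \text{processing units}} \approx 0.3\ \text{GFLOPS (about 300 MFLOPS) per processing unit.}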

  7. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, workstations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, workstations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  8. Computational Analysis of a Prototype Martian Rotorcraft Experiment

    NASA Technical Reports Server (NTRS)

    Corfeld, Kelly J.; Strawn, Roger C.; Long, Lyle N.

    2002-01-01

    This paper presents Reynolds-averaged Navier-Stokes calculations for a prototype Martian rotorcraft. The computations are intended for comparison with an ongoing Mars rotor hover test at NASA Ames Research Center. These computational simulations present a new and challenging problem, since rotors that operate on Mars will experience a unique low Reynolds number and high Mach number environment. Computed results for the 3-D rotor differ substantially from 2-D sectional computations in that the 3-D results exhibit a stall delay phenomenon caused by rotational forces along the blade span. Computational results have yet to be compared to experimental data, but computed performance predictions match the experimental design goals fairly well. In addition, the computed results provide a high level of detail in the rotor wake and blade surface aerodynamics. These details provide an important supplement to the expected experimental performance data.

  9. EOS MLS Science Data Processing System: A Description of Architecture and Capabilities

    NASA Technical Reports Server (NTRS)

    Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.

    2006-01-01

    This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.

  10. Silicon photonics for high-performance interconnection networks

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr

    2011-12-01

    We assert in the course of this work that silicon photonics has the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems, and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. This work showcases that chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, enable unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of this work, we demonstrate such feasibility of waveguides, modulators, switches, and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. Furthermore, we leverage the unique properties of available silicon photonic materials to create novel silicon photonic devices, subsystems, network topologies, and architectures to enable unprecedented performance of these photonic interconnection networks and computing systems. We show that the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. Furthermore, we explore the immense potential of all-optical functionalities implemented using parametric processing in the silicon platform, demonstrating unique methods that have the ability to revolutionize computation and communication. Silicon photonics enables new sets of opportunities that we can leverage for performance gains, as well as new sets of challenges that we must solve. Leveraging its inherent compatibility with standard fabrication techniques of the semiconductor industry, combined with its capability of dense integration with advanced microelectronics, silicon photonics also offers a clear path toward commercialization through low-cost mass-volume production. Combining empirical validations of feasibility, demonstrations of massive performance gains in large-scale systems, and the potential for commercial penetration of silicon photonics, the impact of this work will become evident in the many decades that follow.

  11. Real-Time Data Warehousing and On-Line Analytical Processing at Aberdeen Test Center’s Distributed Center

    DTIC Science & Technology

    2005-12-01

    Data are collected via on-board instrumentation on a VxWorks-based computer; each instrument produces a continuous time-history record of up to 250... Data are organized in multidimensional hierarchies and views, and a high-performance data warehouse was instituted (PostgreSQL 7.4 installed on a dedicated filesystem). (Presented at UGC 2005.)

  12. [Activities of Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems: Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing: Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking: Advances in the performance of computing and networking continue to have a major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  13. Accelerating MP2C dispersion corrections for dimers and molecular crystals

    NASA Astrophysics Data System (ADS)

    Huang, Yuanhang; Shao, Yihan; Beran, Gregory J. O.

    2013-06-01

    The MP2C dispersion correction of Pitonak and Hesselmann [J. Chem. Theory Comput. 6, 168 (2010); doi:10.1021/ct9005882] substantially improves the performance of second-order Møller-Plesset perturbation theory for non-covalent interactions, albeit with non-trivial computational cost. Here, the MP2C correction is computed in a monomer-centered basis instead of a dimer-centered one. When applied to a single dimer MP2 calculation, this change accelerates the MP2C dispersion correction several-fold while introducing only trivial new errors. More significantly, in the context of fragment-based molecular crystal studies, combination of the new monomer basis algorithm and the periodic symmetry of the crystal reduces the cost of computing the dispersion correction by two orders of magnitude. This speed-up reduces the MP2C dispersion correction calculation from a significant computational expense to a negligible one in crystals like aspirin or oxalyl dihydrazide, without compromising accuracy.
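
    For context, the MP2C scheme as generally described in the literature (a hedged summary, not re-derived from this abstract) corrects the MP2 interaction energy by swapping its uncoupled Hartree-Fock dispersion for a TDDFT one:

        E_{\text{int}}^{\text{MP2C}} = E_{\text{int}}^{\text{MP2}} - E_{\text{disp}}^{\text{UCHF}} + E_{\text{disp}}^{\text{TDDFT}}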

  14. Avoiding Defect Nucleation during Equilibration in Molecular Dynamics Simulations with ReaxFF

    DTIC Science & Technology

    2015-04-01

    All simulations are performed using the LAMMPS computer code (Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. J. Comput. Phys. 1995;117:1–19; software available at http://lammps.sandia.gov). Fig. 1 shows (a) initial and (b) final configurations of the molecular centers...

  15. Comparing levels of school performance to science teachers' reports on knowledge/skills, instructional use and student use of computers

    NASA Astrophysics Data System (ADS)

    Kerr, Rebecca

    The purpose of this descriptive quantitative and basic qualitative study was to examine fifth and eighth grade science teachers' responses, perceptions of the role of technology in the classroom, and how they felt that computer applications, tools, and the Internet influence student understanding. The purposeful sample included survey and interview responses from fifth grade and eighth grade general and physical science teachers. Even though they may not be generalizable to other teachers or classrooms due to a low response rate, findings from this study indicated teachers with fewer years of teaching science had a higher level of computer use but less computer access, especially for students, in the classroom. Furthermore, teachers' choice of professional development moderated the relationship between the level of school performance and teachers' knowledge/skills, with the most positive relationship being with workshops that occurred outside of the school. Eighteen interviews revealed that teachers perceived the role of technology in classroom instruction mainly as teacher-centered and supplemental, rather than student-centered activities.

  16. Navier-Stokes and Comprehensive Analysis Performance Predictions of the NREL Phase VI Experiment

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Burklund, Michael D.; Johnson, Wayne

    2003-01-01

    A vortex lattice code, CAMRAD II, and a Reynolds-Averaged Navier-Stokes code, OVERFLOW-D2, were used to predict the aerodynamic performance of a two-bladed horizontal axis wind turbine. All computations were compared with experimental data collected at the NASA Ames Research Center 80- by 120-Foot Wind Tunnel. Computations were performed for both axial and yawed operating conditions. Various stall delay models and dynamic stall models were used by the CAMRAD II code. Comparisons between the experimental data and computed aerodynamic loads show that the OVERFLOW-D2 code can accurately predict the power and spanwise loading of a wind turbine rotor.

  17. Thermohydrodynamic Analysis of Cryogenic Liquid Turbulent Flow Fluid Film Bearings

    NASA Technical Reports Server (NTRS)

    San Andres, Luis

    1996-01-01

    This report describes a thermohydrodynamic analysis and computer programs for the prediction of the static and dynamic force response of fluid film bearings for cryogenic applications. The research performed addressed effectively the most important theoretical and practical issues related to the operation and performance of cryogenic fluid film bearings. Five computer codes have been licensed by the Texas A&M University to NASA centers and contractors and a total of 14 technical papers have been published.

  18. iDASH: integrating data for analysis, anonymization, and sharing

    PubMed Central

    Bafna, Vineet; Boxwala, Aziz A; Chapman, Brian E; Chapman, Wendy W; Chaudhuri, Kamalika; Day, Michele E; Farcas, Claudiu; Heintzman, Nathaniel D; Jiang, Xiaoqian; Kim, Hyeoneui; Kim, Jihoon; Matheny, Michael E; Resnic, Frederic S; Vinterbo, Staal A

    2011-01-01

    iDASH (integrating data for analysis, anonymization, and sharing) is the newest National Center for Biomedical Computing funded by the NIH. It focuses on algorithms and tools for sharing data in a privacy-preserving manner. Foundational privacy technology research performed within iDASH is coupled with innovative engineering for collaborative tool development and data-sharing capabilities in a private Health Insurance Portability and Accountability Act (HIPAA)-certified cloud. Driving Biological Projects, which span different biological levels (from molecules to individuals to populations) and focus on various health conditions, help guide research and development within this Center. Furthermore, training and dissemination efforts connect the Center with its stakeholders and educate data owners and data consumers on how to share and use clinical and biological data. Through these various mechanisms, iDASH implements its goal of providing biomedical and behavioral researchers with access to data, software, and a high-performance computing environment, thus enabling them to generate and test new hypotheses. PMID:22081224

  19. iDASH: integrating data for analysis, anonymization, and sharing.

    PubMed

    Ohno-Machado, Lucila; Bafna, Vineet; Boxwala, Aziz A; Chapman, Brian E; Chapman, Wendy W; Chaudhuri, Kamalika; Day, Michele E; Farcas, Claudiu; Heintzman, Nathaniel D; Jiang, Xiaoqian; Kim, Hyeoneui; Kim, Jihoon; Matheny, Michael E; Resnic, Frederic S; Vinterbo, Staal A

    2012-01-01

    iDASH (integrating data for analysis, anonymization, and sharing) is the newest National Center for Biomedical Computing funded by the NIH. It focuses on algorithms and tools for sharing data in a privacy-preserving manner. Foundational privacy technology research performed within iDASH is coupled with innovative engineering for collaborative tool development and data-sharing capabilities in a private Health Insurance Portability and Accountability Act (HIPAA)-certified cloud. Driving Biological Projects, which span different biological levels (from molecules to individuals to populations) and focus on various health conditions, help guide research and development within this Center. Furthermore, training and dissemination efforts connect the Center with its stakeholders and educate data owners and data consumers on how to share and use clinical and biological data. Through these various mechanisms, iDASH implements its goal of providing biomedical and behavioral researchers with access to data, software, and a high-performance computing environment, thus enabling them to generate and test new hypotheses.

  20. Prelaunch testing of the GEOS-3 laser reflector array

    NASA Technical Reports Server (NTRS)

    Minott, P. O.; Fitzmaurice, M. W.; Abshire, J. B.; Rowe, H. E.

    1978-01-01

    Prelaunch testing of the GEOS-3 laser reflector array was performed to determine the lidar cross section of the array and the distance between the satellite's center of gravity and the center of gravity of reflected laser pulses as a function of incidence angle. Experimental data are compared to computed results.

  1. KSC-99pp1190

    NASA Image and Video Library

    1999-10-07

    After the ribbon-cutting opening the Consolidated Support Operations Center at ROCC, Cape Canaveral Air Station, guests look at information on the computer screen during a demonstration. Among those standing are (left to right) Barbara White, supervisor, Mission Support; Ed Gormel, executive director, Joint Performance Management Office; KSC Center Director Roy Bridges; and Sam Gutierrez (white shirt), Human Resources, Space Gateway Support

  2. KENNEDY SPACE CENTER, FLA. - Boeing workers perform a 3D digital scan of the actuator on the table. At left is Dan Clark. At right are Alden Pitard (seated at computer) and John Macke, from Boeing, St. Louis. There are two actuators per engine on the Shuttle, one for pitch motion and one for yaw motion. The Space Shuttle Main Engine hydraulic servoactuators are used to gimbal the main engine.

    NASA Image and Video Library

    2003-09-03

    KENNEDY SPACE CENTER, FLA. - Boeing workers perform a 3D digital scan of the actuator on the table. At left is Dan Clark. At right are Alden Pitard (seated at computer) and John Macke, from Boeing, St. Louis. There are two actuators per engine on the Shuttle, one for pitch motion and one for yaw motion. The Space Shuttle Main Engine hydraulic servoactuators are used to gimbal the main engine.

  3. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithm and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
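
    To make the memory contrast above concrete, a minimal sketch (not the authors' code) is shown below: an explicit solve that forms a dense system matrix needs O(n^2) storage, while a matrix-free iterative solve needs only the matrix-vector product. The model matrix and the choice of SciPy's conjugate gradient solver are illustrative assumptions.

    ```python
    # Illustrative sketch (not the paper's algorithm): contrast a dense solve,
    # whose system matrix needs O(n^2) memory, with a matrix-free iterative
    # solve that only needs the action of the matrix on a vector.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    n = 500  # hypothetical number of degrees of freedom

    # Explicit approach: form the dense (tridiagonal, symmetric positive definite) matrix.
    A_dense = (np.diag(np.full(n, 2.0))
               + np.diag(np.full(n - 1, -1.0), 1)
               + np.diag(np.full(n - 1, -1.0), -1))
    b = np.ones(n)
    x_direct = np.linalg.solve(A_dense, b)
    print("dense matrix storage (MB):", A_dense.nbytes / 1e6)

    # Matrix-free approach: storage stays O(n) because A is never formed.
    def matvec(v):
        out = 2.0 * v
        out[:-1] -= v[1:]
        out[1:] -= v[:-1]
        return out

    x_iter, info = cg(LinearOperator((n, n), matvec=matvec), b)
    print("iterative solve converged:", info == 0,
          "max difference vs direct:", float(np.abs(x_iter - x_direct).max()))
    ```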

  4. Arctic Boreal Vulnerability Experiment (ABoVE) Science Cloud

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Schnase, J. L.; McInerney, M.; Webster, W. P.; Sinno, S.; Thompson, J. H.; Griffith, P. C.; Hoy, E.; Carroll, M.

    2014-12-01

    The effects of climate change are being revealed at alarming rates in the Arctic and Boreal regions of the planet. NASA's Terrestrial Ecology Program has launched a major field campaign to study these effects over the next 5 to 8 years. The Arctic Boreal Vulnerability Experiment (ABoVE) will challenge scientists to take measurements in the field, study remote observations, and even run models to better understand the impacts of a rapidly changing climate for areas of Alaska and western Canada. The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center (GSFC) has partnered with the Terrestrial Ecology Program to create a science cloud designed for this field campaign - the ABoVE Science Cloud. The cloud combines traditional high performance computing with emerging technologies to create an environment specifically designed for large-scale climate analytics. The ABoVE Science Cloud utilizes (1) virtualized high-speed InfiniBand networks, (2) a combination of high-performance file systems and object storage, and (3) virtual system environments tailored for data intensive, science applications. At the center of the architecture is a large object storage environment, much like a traditional high-performance file system, that supports data proximal processing using technologies like MapReduce on a Hadoop Distributed File System (HDFS). Surrounding the storage is a cloud of high performance compute resources with many processing cores and large memory coupled to the storage through an InfiniBand network. Virtual systems can be tailored to a specific scientist and provisioned on the compute resources with extremely high-speed network connectivity to the storage and to other virtual systems. In this talk, we will present the architectural components of the science cloud and examples of how it is being used to meet the needs of the ABoVE campaign. In our experience, the science cloud approach significantly lowers the barriers and risks to organizations that require high performance computing solutions and provides the NCCS with the agility required to meet our customers' rapidly increasing and evolving requirements.
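
    As a rough sketch of the data-proximal MapReduce pattern mentioned above, the mapper and reducer below compute a per-site mean from simple CSV records; the record layout and field names are hypothetical, and a production run would execute these functions next to the data on HDFS (for example via Hadoop streaming) rather than on a workstation.

    ```python
    # Minimal MapReduce-style sketch of data-proximal analysis (illustrative only;
    # the record layout and field names are hypothetical, not the ABoVE formats).
    from collections import defaultdict

    def mapper(line):
        """Emit (site, temperature) pairs from a CSV record: site,date,temp_C."""
        site, _date, temp = line.strip().split(",")
        yield site, float(temp)

    def reducer(site, temps):
        """Reduce all temperatures observed at one site to a mean."""
        return site, sum(temps) / len(temps)

    records = [
        "toolik,2014-07-01,11.2",
        "toolik,2014-07-02,9.8",
        "bonanza_creek,2014-07-01,15.4",
    ]

    # Shuffle phase: group mapper output by key, as the framework would on HDFS.
    groups = defaultdict(list)
    for line in records:
        for key, value in mapper(line):
            groups[key].append(value)

    for site, temps in groups.items():
        print(reducer(site, temps))
    ```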

  5. Early experiences in developing and managing the neuroscience gateway.

    PubMed

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T

    2015-02-01

    The last few decades have seen the emergence of computational neuroscience as a mature field where researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and associated cyber infrastructure to manage computational workflow and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, dealing with the complex user interfaces of these machines, and handling data management and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use this for computational neuroscience research using high performance computing at the back end. We also look at parallel scaling of some publicly available neuronal models and analyze the recent usage data of the neuroscience gateway.

  6. Early experiences in developing and managing the neuroscience gateway

    PubMed Central

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T.

    2015-01-01

    SUMMARY The last few decades have seen the emergence of computational neuroscience as a mature field where researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and associated cyber infrastructure to manage computational workflow and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, dealing with the complex user interfaces of these machines, and handling data management and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use this for computational neuroscience research using high performance computing at the back end. We also look at parallel scaling of some publicly available neuronal models and analyze the recent usage data of the neuroscience gateway. PMID:26523124

  7. An adaptive angle-doppler compensation method for airborne bistatic radar based on PAST

    NASA Astrophysics Data System (ADS)

    Hang, Xu; Jun, Zhao

    2018-05-01

    Adaptive angle-Doppler compensation methods extract the requisite information adaptively from the data itself, thus avoiding the performance degradation caused by inertial system errors. However, such methods require estimation and eigendecomposition of the sample covariance matrix, which has a high computational complexity and limits real-time application. In this paper, an adaptive angle-Doppler compensation method based on projection approximation subspace tracking (PAST) is studied. The method uses cyclic iterative processing to quickly estimate the spectral center of the maximum eigenvector of each range cell, avoiding the computational burden of covariance matrix estimation and eigendecomposition; the spectral centers of all range cells are then aligned by two-dimensional compensation. Simulation results show that the proposed method can effectively reduce the non-homogeneity of airborne bistatic radar clutter, with performance similar to that of eigendecomposition-based algorithms, while the computational load is markedly reduced and the method is easy to implement.
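
    For reference, a minimal sketch of the PAST recursion (projection approximation subspace tracking) is given below; the snapshot dimensions, forgetting factor, and synthetic test data are assumptions, and the angle-Doppler compensation built on top of the tracked subspace in the paper is not reproduced here.

    ```python
    # Minimal PAST (projection approximation subspace tracking) sketch: track an
    # r-dimensional signal subspace from streaming snapshots x(t). Illustrative only.
    import numpy as np

    def past_update(W, P, x, beta=0.97):
        """One PAST iteration (Yang, 1995). W: n x r subspace estimate, P: r x r inverse
        correlation of the compressed data, x: length-n snapshot, beta: forgetting factor."""
        y = W.conj().T @ x              # project snapshot onto current subspace
        h = P @ y
        g = h / (beta + np.vdot(y, h))  # gain vector
        P = (P - np.outer(g, h.conj())) / beta
        e = x - W @ y                   # residual outside the tracked subspace
        W = W + np.outer(e, g.conj())
        return W, P

    rng = np.random.default_rng(0)
    n, r, T = 16, 2, 400
    # Hypothetical test data: two fixed steering vectors plus noise.
    A = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
    W = np.linalg.qr(rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r)))[0]
    P = np.eye(r, dtype=complex)
    for _ in range(T):
        s = rng.standard_normal(r) + 1j * rng.standard_normal(r)
        x = A @ s + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        W, P = past_update(W, P, x)

    # The tracked subspace should span nearly the same space as the true one.
    proj = np.linalg.qr(W)[0]
    true = np.linalg.qr(A)[0]
    print(np.linalg.norm(true.conj().T @ proj))  # approaches sqrt(r) ~ 1.41 when aligned
    ```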

  8. The growth of the UniTree mass storage system at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Tarshish, Adina; Salmon, Ellen

    1993-01-01

    In October 1992, the NASA Center for Computational Sciences made its Convex-based UniTree system generally available to users. The ensuing months saw the growth of near-online data from nil to nearly three terabytes, a doubling of the number of CPU's on the facility's Cray YMP (the primary data source for UniTree), and the necessity for an aggressive regimen for repacking sparse tapes and hierarchical 'vaulting' of old files to freestanding tape. Connectivity was enhanced as well with the addition of UltraNet HiPPI. This paper describes the increasing demands placed on the storage system's performance and throughput that resulted from the significant augmentation of compute-server processor power and network speed.

  9. Laser Spot Detection Based on Reaction Diffusion.

    PubMed

    Vázquez-Otero, Alejandro; Khikhlukha, Danila; Solano-Altamirano, J M; Dormido, Raquel; Duro, Natividad

    2016-03-01

    Center-location of a laser spot is a problem of interest when the laser is used for processing and performing measurements. Measurement quality depends on correctly determining the location of the laser spot. Hence, improving and proposing algorithms for the correct location of the spots are fundamental issues in laser-based measurements. In this paper we introduce a Reaction Diffusion (RD) system as the main computational framework for robustly finding laser spot centers. The method presented is compared with a conventional approach for locating laser spots, and the experimental results indicate that RD-based computation generates reliable and precise solutions. These results confirm the flexibility of the new computational paradigm based on RD systems for addressing problems that can be reduced to a set of geometric operations.
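
    The flavor of the approach can be illustrated with a toy reaction-diffusion computation; the bistable (Allen-Cahn-type) dynamics, parameter values, and synthetic image below are assumptions for illustration and are not the specific RD formulation used in the paper.

    ```python
    # Illustrative reaction-diffusion approach to locating a bright-spot center
    # (a bistable Allen-Cahn-type system, not the paper's specific RD formulation).
    import numpy as np

    def laplacian(u):
        """5-point Laplacian with periodic wrap, via array rolls."""
        return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

    # Hypothetical noisy image with a Gaussian "laser spot" centered at (42.0, 27.0).
    rng = np.random.default_rng(1)
    ny, nx = 96, 96
    yy, xx = np.mgrid[0:ny, 0:nx]
    image = np.exp(-((yy - 42.0) ** 2 + (xx - 27.0) ** 2) / 30.0) + 0.1 * rng.random((ny, nx))

    # Evolve u_t = D*lap(u) + u(1-u)(u-theta): regions above the threshold grow toward 1,
    # the rest decays to 0, leaving a clean blob whose centroid marks the spot.
    u = np.clip(image, 0.0, 1.0)
    D, theta, dt = 0.2, 0.3, 0.2
    for _ in range(200):
        u += dt * (D * laplacian(u) + u * (1.0 - u) * (u - theta))

    cy = (u * yy).sum() / u.sum()
    cx = (u * xx).sum() / u.sum()
    print(f"estimated spot center: ({cy:.1f}, {cx:.1f})")  # close to (42.0, 27.0)
    ```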

  10. 1999 NCCS Highlights

    NASA Technical Reports Server (NTRS)

    Bennett, Jerome (Technical Monitor)

    2002-01-01

    The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.

  11. The Effort to Reduce a Muscle Fatigue Through Gymnastics Relaxation and Ergonomic Approach for Computer Users in Central Building State University of Medan

    NASA Astrophysics Data System (ADS)

    Gultom, Syamsul; Darma Sitepu, Indra; Hasibuan, Nurman

    2018-03-01

    Fatigue due to long and continuous computer use can lead to decreased performance and work motivation. Specific targets in the first phase of this research have been achieved: (1) complaints of workers who use computers were identified using the Bourdon Wiersma test kit, and (2) an appropriate relaxation and work posture design was found as a solution to reduce muscle fatigue in computer-based workers. The study uses a research and development method, which aims to produce new products or refine existing ones. The final product is a prototype back-holder, a monitor filter, and a relaxation exercise routine, together with a manual book explaining how to perform the exercise while in front of the computer, to lower the fatigue level of computer users in Unimed's Administration Center. In the first phase, observations and interviews were conducted and the fatigue level of employees who use computers at Unimed's Administration Center was measured with the Bourdon Wiersma test, with the following results: (1) the average speed of respondents in BAUK, BAAK, and BAPSI after working gave an interpretation value of 8.4 (WS 13), in the good-enough category; (2) the average accuracy of respondents in BAUK, BAAK, and BAPSI after working gave an interpretation value of 5.5 (WS 8), in the doubtful category, showing that computer users at the Unimed Administration Center experienced significant tiredness; and (3) the average consistency of the fatigue measurements of computer users at Unimed's Administration Center after working gave an interpretation value of 5.5 (WS 8), also in the doubtful category, which means these computer users suffered extreme fatigue. In phase II, based on the results of the first phase, the researchers offer solutions in the form of the back-holder prototype, the monitor filter, and a properly designed relaxation exercise to reduce the fatigue level. Furthermore, to maximize the benefit of the exercise, a manual book will be given to employees who regularly work in front of computers at Unimed's Administration Center.

  12. RIACS

    NASA Technical Reports Server (NTRS)

    Moore, Robert C.

    1998-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities that serves as a bridge between NASA and the academic community. Under a five-year cooperative agreement with NASA, research at RIACS is focused on areas that are strategically enabling to the Ames Research Center's role as NASA's Center of Excellence for Information Technology. Research is carried out by a staff of full-time scientists, augmented by visitors, students, post-doctoral candidates and visiting university faculty. The primary mission of RIACS is to carry out research and development in computer science. This work is devoted in the main to tasks that are strategically enabling with respect to NASA's bold mission in space exploration and aeronautics. There are three foci for this work: Automated Reasoning, Human-Centered Computing, and High Performance Computing and Networking. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. Through its visiting scientist program, RIACS facilitates the participation of university-based researchers, including both faculty and students, in the research activities of NASA and RIACS. RIACS researchers work in close collaboration with NASA computer scientists on projects such as the Remote Agent Experiment on the Deep Space One mission and Super-Resolution Surface Modeling.

  13. Computer support for cooperative tasks in Mission Operations Centers

    NASA Technical Reports Server (NTRS)

    Fox, Jeffrey; Moore, Mike

    1994-01-01

    Traditionally, spacecraft management has been performed by fixed teams of operators in Mission Operations Centers. The team cooperatively: (1) ensures that payload(s) on spacecraft perform their work; and (2) maintains the health and safety of the spacecraft through commanding and monitoring the spacecraft's subsystems. In the future, the task demands will increase and overload the operators. This paper describes the traditional spacecraft management environment and describes a new concept in which groupware will be used to create a Virtual Mission Operations Center. Groupware tools will be used to better utilize available resources through increased automation and dynamic sharing of personnel among missions.

  14. The Effect of Color Choice on Learner Interpretation of a Cosmology Visualization

    ERIC Educational Resources Information Center

    Buck, Zoe

    2013-01-01

    As we turn more and more to high-end computing to understand the Universe at cosmological scales, dynamic visualizations of simulations will take on a vital role as perceptual and cognitive tools. In collaboration with the Adler Planetarium and University of California High-Performance AstroComputing Center (UC-HiPACC), I am interested in better…

  15. Using SPEEDES to simulate the blue gene interconnect network

    NASA Technical Reports Server (NTRS)

    Springer, P.; Upchurch, E.

    2003-01-01

    JPL and the Center for Advanced Computer Architecture (CACR) are conducting application and simulation analyses of BG/L in order to establish a range of effectiveness for the Blue Gene/L MPP architecture in performing important classes of computations and to determine the design sensitivity of the global interconnect network in support of real-world ASCI application execution.

  16. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing users coherent access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.
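
    As a rough illustration of what such a bridge does, the sketch below submits a job description to a REST front end and polls its status; the base URL, endpoint paths, payload fields, and token handling are purely hypothetical placeholders and are not the actual SCEAPI or ARC-CE interfaces.

    ```python
    # Hypothetical sketch of a REST job-submission bridge (the URL, payload fields,
    # and token scheme are placeholders; they are NOT the real SCEAPI or ARC-CE API).
    import time
    import requests

    BASE_URL = "https://sceapi.example.cn/api/v1"   # placeholder endpoint
    TOKEN = "..."                                    # placeholder credential

    def submit_job(executable, args, cores):
        """Translate a simple job description into a POST against the REST front end."""
        payload = {"executable": executable, "arguments": args, "cores": cores}
        r = requests.post(f"{BASE_URL}/jobs", json=payload,
                          headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
        r.raise_for_status()
        return r.json()["job_id"]

    def wait_for_job(job_id, poll_seconds=60):
        """Poll job status until a terminal state is reached."""
        while True:
            r = requests.get(f"{BASE_URL}/jobs/{job_id}",
                             headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
            r.raise_for_status()
            state = r.json()["state"]
            if state in ("FINISHED", "FAILED"):
                return state
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        job = submit_job("sim_reco.sh", ["--events", "1000"], cores=24)
        print("final state:", wait_for_job(job))
    ```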

  17. New frontiers in design synthesis

    NASA Technical Reports Server (NTRS)

    Goldin, D. S.; Venneri, S. L.; Noor, A. K.

    1999-01-01

    The Intelligent Synthesis Environment (ISE), which is one of the major strategic technologies under development at NASA centers and the University of Virginia, is described. One of the major objectives of ISE is to significantly enhance the rapid creation of innovative affordable products and missions. ISE uses a synergistic combination of leading-edge technologies, including high performance computing, high capacity communications and networking, human-centered computing, knowledge-based engineering, computational intelligence, virtual product development, and product information management. The environment will link scientists, design teams, manufacturers, suppliers, and consultants who participate in the mission synthesis as well as in the creation and operation of the aerospace system. It will radically advance the process by which complex science missions are synthesized, and high-tech engineering systems are designed, manufactured and operated. The five major components critical to ISE are human-centered computing, infrastructure for distributed collaboration, rapid synthesis and simulation tools, life cycle integration and validation, and cultural change in both the engineering and science creative process. The five components and their subelements are described. Related U.S. government programs are outlined and the future impact of ISE on engineering research and education is discussed.

  18. High Efficiency Photonic Switch for Data Centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaComb, Lloyd J.; Bablumyan, Arkady; Ordyan, Armen

    2016-12-06

    The worldwide demand for instant access to information is driving internet growth rates above 50% annually. This rapid growth is straining the resources and architectures of existing data centers, metro networks and high performance computer centers. If the current business-as-usual model continues, data centers alone will require 400 TWh of electricity by 2020. In order to meet the challenge of faster and more cost-effective data centers, metro networks and supercomputing facilities, we have demonstrated a new type of optical switch that will support transmission speeds up to 1 Tb/s, and requires significantly less energy per bit than

  19. Computer program for flat sector thrust bearing performance

    NASA Technical Reports Server (NTRS)

    Presler, A. F.; Etsion, I.

    1977-01-01

    A versatile computer program is presented which achieves a rapid, numerical solution of the Reynolds equation for a flat sector thrust pad bearing with either compressible or liquid lubricants. Program input includes a range in values of the geometric and operating parameters of the sector bearing. Performance characteristics are obtained from the calculated bearing pressure distribution. These are the load capacity, center of pressure coordinates, frictional energy dissipation, and flow rates of liquid lubricant across the bearing edges. Two sample problems are described.
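
    For reference, the steady, incompressible form of the Reynolds equation that a sector thrust pad solver of this kind integrates can be written in polar coordinates as below; this is the standard textbook form of the lubrication equation, quoted here as background rather than taken from the report (which also treats compressible lubricants).

    ```latex
    % Steady incompressible Reynolds equation for a sector thrust pad in polar
    % coordinates (r, theta): p is film pressure, h film thickness, mu viscosity,
    % omega the runner angular speed.
    \frac{\partial}{\partial r}\!\left(r\,h^{3}\,\frac{\partial p}{\partial r}\right)
    + \frac{1}{r}\,\frac{\partial}{\partial \theta}\!\left(h^{3}\,\frac{\partial p}{\partial \theta}\right)
    = 6\,\mu\,\omega\,r\,\frac{\partial h}{\partial \theta}
    ```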

  20. Canal transportation and centering ability of protaper and self-adjusting file system in long oval canals: An ex-vivo cone-beam computed tomography analysis.

    PubMed

    Shah, Dipali Yogesh; Wadekar, Swati Ishwara; Dadpe, Ashwini Manish; Jadhav, Ganesh Ranganath; Choudhary, Lalit Jayant; Kalra, Dheeraj Deepak

    2017-01-01

    The purpose of this study was to compare and evaluate the shaping ability of the ProTaper (PT) and Self-Adjusting File (SAF) systems using cone-beam computed tomography (CBCT) to assess their performance in oval-shaped root canals. Sixty-two mandibular premolars with single oval canals were divided into two experimental groups (n = 31) according to the systems used: Group I - PT and Group II - SAF. Canals were evaluated before and after instrumentation using CBCT to assess centering ratio and canal transportation at three levels. Data were statistically analyzed using one-way analysis of variance, post hoc Tukey's test, and t-test. The SAF showed better centering ability and less canal transportation than the PT only in the buccolingual plane at the 6 and 9 mm levels. The shaping ability of the PT was best in the apical third in both planes. The SAF had statistically significantly better centering and less canal transportation in the buccolingual plane as compared to the mesiodistal plane at the middle and coronal levels. The SAF produced significantly less transportation and remained more centered than the PT at the middle and coronal levels in the buccolingual plane of oval canals. In the mesiodistal plane, the performance of the two systems was comparable.
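
    For readers unfamiliar with these CBCT outcome measures, a commonly used definition (following Gambill et al.) is sketched below; whether this exact formulation was applied in the study above is an assumption.

    ```latex
    % x1, x2: distances from the two opposite canal walls to the corresponding outer
    % root surfaces, measured on matched CBCT slices before (pre) and after (post)
    % instrumentation.
    \text{transportation} = (x1_{\text{pre}} - x1_{\text{post}}) - (x2_{\text{pre}} - x2_{\text{post}}),
    \qquad
    \text{centering ratio} =
    \frac{\min\!\left(x1_{\text{pre}} - x1_{\text{post}},\; x2_{\text{pre}} - x2_{\text{post}}\right)}
         {\max\!\left(x1_{\text{pre}} - x1_{\text{post}},\; x2_{\text{pre}} - x2_{\text{post}}\right)}
    ```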

  1. A parallel-processing approach to computing for the geographic sciences

    USGS Publications Warehouse

    Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Haga, Jim; Maddox, Brian; Feller, Mark

    2001-01-01

    The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting research into various areas, such as advanced computer architecture, algorithms to meet the processing needs for real-time image and data processing, the creation of custom datasets from seamless source data, rapid turn-around of products for emergency response, and support for computationally intense spatial and temporal modeling.

  2. Research Institute for Advanced Computer Science: Annual Report October 1998 through September 1999

    NASA Technical Reports Server (NTRS)

    Leiner, Barry M.; Gross, Anthony R. (Technical Monitor)

    1999-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center (ARC). It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. ARC has been designated NASA's Center of Excellence in Information Technology. In this capacity, ARC is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA ARC and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, and visiting scientist programs, designed to encourage and facilitate collaboration between the university and NASA information technology research communities.

  3. Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, and visiting scientist programs, designed to encourage and facilitate collaboration between the university and NASA information technology research communities.

  4. The Internet and Computer User Profile: a questionnaire for determining intervention targets in occupational therapy at mental health vocational centers.

    PubMed

    Regev, Sivan; Hadas-Lidor, Noami; Rosenberg, Limor

    2016-08-01

    In this study, the assessment tool "Internet and Computer User Profile" questionnaire (ICUP) is presented and validated. It was developed in order to gather information for setting intervention goals to meet current demands. Sixty-eight subjects aged 23-68 participated in the study. The study group (n = 28) was sampled from two vocational centers. The control group consisted of 40 participants from the general population that were sampled by convenience sampling based on the demographics of the study group. Subjects from both groups answered the ICUP questionnaire. Subjects of the study group answered the General Self-Efficacy (GSE) questionnaire and performed the Assessment of Computer Task Performance (ACTP) test in order to examine the convergent validity of the ICUP. Twenty subjects from both groups retook the ICUP questionnaire in order to obtain test-retest results. Differences between groups were tested using multiple analysis of variance (MANOVA) tests. Pearson and Spearman's tests were used for calculating correlations. Cronbach's alpha coefficient and k equivalent were used to assess internal consistency. The results indicate that the questionnaire is valid and reliable. They emphasize that the layout of the ICUP items facilitates making a comprehensive examination of the client's perception regarding his participation in computer and internet activities. Implications for Rehabilitation: The assessment tool "Internet and Computer User Profile" (ICUP) questionnaire is a novel assessment tool that evaluates operative use and individual perception of computer activities. The questionnaire is valid and reliable for use with participants of vocational centers dealing with mental illness. It is essential to facilitate access to computers for people with mental illnesses, seeing that they express similar interest in computers and internet as people from the general population of the same age. Early intervention will be particularly effective for young adults dealing with mental illness, since the digital gap between them and young people in general is relatively small.

  5. Current use of imaging and electromagnetic source localization procedures in epilepsy surgery centers across Europe.

    PubMed

    Mouthaan, Brian E; Rados, Matea; Barsi, Péter; Boon, Paul; Carmichael, David W; Carrette, Evelien; Craiu, Dana; Cross, J Helen; Diehl, Beate; Dimova, Petia; Fabo, Daniel; Francione, Stefano; Gaskin, Vladislav; Gil-Nagel, Antonio; Grigoreva, Elena; Guekht, Alla; Hirsch, Edouard; Hecimovic, Hrvoje; Helmstaedter, Christoph; Jung, Julien; Kalviainen, Reetta; Kelemen, Anna; Kimiskidis, Vasilios; Kobulashvili, Teia; Krsek, Pavel; Kuchukhidze, Giorgi; Larsson, Pål G; Leitinger, Markus; Lossius, Morten I; Luzin, Roman; Malmgren, Kristina; Mameniskiene, Ruta; Marusic, Petr; Metin, Baris; Özkara, Cigdem; Pecina, Hrvoje; Quesada, Carlos M; Rugg-Gunn, Fergus; Rydenhag, Bertil; Ryvlin, Philippe; Scholly, Julia; Seeck, Margitta; Staack, Anke M; Steinhoff, Bernhard J; Stepanov, Valentin; Tarta-Arsene, Oana; Trinka, Eugen; Uzan, Mustafa; Vogt, Viola L; Vos, Sjoerd B; Vulliémoz, Serge; Huiskamp, Geertjan; Leijten, Frans S S; Van Eijsden, Pieter; Braun, Kees P J

    2016-05-01

    In 2014 the European Union-funded E-PILEPSY project was launched to improve awareness of, and accessibility to, epilepsy surgery across Europe. We aimed to investigate the current use of neuroimaging, electromagnetic source localization, and imaging postprocessing procedures in participating centers. A survey on the clinical use of imaging, electromagnetic source localization, and postprocessing methods in epilepsy surgery candidates was distributed among the 25 centers of the consortium. A descriptive analysis was performed, and results were compared to existing guidelines and recommendations. Response rate was 96%. Standard epilepsy magnetic resonance imaging (MRI) protocols are acquired at 3 Tesla by 15 centers and at 1.5 Tesla by 9 centers. Three centers perform 3T MRI only if indicated. Twenty-six different MRI sequences were reported. Six centers follow all guideline-recommended MRI sequences with the proposed slice orientation and slice thickness or voxel size. Additional sequences are used by 22 centers. MRI postprocessing methods are used in 16 centers. Interictal positron emission tomography (PET) is available in 22 centers; all using 18F-fluorodeoxyglucose (FDG). Seventeen centers perform PET postprocessing. Single-photon emission computed tomography (SPECT) is used by 19 centers, of which 15 perform postprocessing. Four centers perform neither PET nor SPECT in children. Seven centers apply magnetoencephalography (MEG) source localization, and nine apply electroencephalography (EEG) source localization. Fourteen combinations of inverse methods and volume conduction models are used. We report a large variation in the presurgical diagnostic workup among epilepsy surgery centers across Europe. This diversity underscores the need for high-quality systematic reviews, evidence-based recommendations, and harmonization of available diagnostic presurgical methods. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.

  6. Florida Center for Reading Research (FCRR) Reading Assessment (FRA): Kindergarten to Grade 2. Technical Manual

    ERIC Educational Resources Information Center

    Foorman, Barbara R.; Petscher, Yaacov; Schatschneider, Chris

    2015-01-01

    The grades K-2 Florida Center for Reading Research (FCRR) Reading Assessment (FRA) consists of computer-adaptive alphabetic and oral language screening tasks that provide a Probability of Literacy Success (PLS) linked to grade-level performance (i.e., the 40th percentile) on the word reading (in kindergarten) or reading comprehension (in grades…

  7. Florida Center for Reading Research (FCRR) Reading Assessment (FRA): Grades 3 through 12. Technical Manual

    ERIC Educational Resources Information Center

    Foorman, Barbara R.; Petscher, Yaacov; Schatschneider, Chris

    2015-01-01

    The Florida Center for Reading Research (FCRR) Reading Assessment (FRA) consists of computer-adaptive reading comprehension and oral language screening tasks that provide measures to track growth over time, as well as a Probability of Literacy Success (PLS) linked to grade-level performance (i.e., the 50th percentile) on the reading comprehension…

  8. Optic Glomeruli: Biological Circuits that Compute Target Identity

    DTIC Science & Technology

    2013-11-01

    Excerpted references and report information: ...vitripennis. Insect Mol. Biol. Suppl. 1:121-36. Strausfeld NJ. 2012. Arthropod Brains: Evolution, Functional Elegance and Historical Significance. Harvard... Performing organization: Department of Neuroscience and Center for Insect Science, University of Arizona, Tucson, AZ 85721. Contract No. FA8651-10-1-0001. Final Report, November 2013.

  9. Data centers as dispatchable loads to harness stranded power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kibaek; Yang, Fan; Zavala, Victor M.

    Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impacts on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.

  10. Data centers as dispatchable loads to harness stranded power

    DOE PAGES

    Kim, Kibaek; Yang, Fan; Zavala, Victor M.; ...

    2016-07-20

    Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impacts on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.

  11. Computerized systems analysis and optimization of aircraft engine performance, weight, and life cycle costs

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.

    1980-01-01

    The computational techniques utilized at the Lewis Research Center to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. Cycle performance and engine weight can be calculated, along with costs and installation effects, as opposed to fuel consumption alone. Almost any conceivable turbine engine cycle can be studied. The computer codes are NNEP, WATE, LIFCYC, INSTAL, and POD DRG. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
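
    As a toy illustration of the kind of parametric trade study such codes automate, the sketch below wraps an invented surrogate model in a numerical optimizer; the objective, surrogate expressions, and parameter bounds are placeholders and bear no relation to the actual NNEP/WATE/LIFCYC formulations.

    ```python
    # Toy cycle trade-study sketch (placeholder surrogate model, not NNEP/WATE/LIFCYC):
    # pick overall pressure ratio (OPR) and bypass ratio (BPR) to trade fuel burn
    # against engine weight with a simple penalty objective.
    import numpy as np
    from scipy.optimize import minimize

    def fuel_burn(opr, bpr):
        """Invented surrogate: fuel burn falls with OPR and BPR, with diminishing returns."""
        return 100.0 / np.log(opr) + 50.0 / (1.0 + bpr)

    def engine_weight(opr, bpr):
        """Invented surrogate: weight grows with both OPR and BPR."""
        return 2.0 * opr + 8.0 * bpr

    def objective(x):
        opr, bpr = x
        return fuel_burn(opr, bpr) + 0.5 * engine_weight(opr, bpr)

    result = minimize(objective, x0=[30.0, 6.0],
                      bounds=[(10.0, 60.0), (2.0, 12.0)])
    print("optimum OPR, BPR:", result.x, "objective:", result.fun)
    ```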

  12. Experimental Investigation of Project Orion Crew Exploration Vehicle Aeroheating in AEDC Tunnel 9

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Horvath, Thomas J.; Berger, Karen T.; Lillard, Randolph P.; Kirk, Benjamin S.; Coblish, Joseph J.; Norris, Joseph D.

    2008-01-01

    An investigation of the aeroheating environment of the Project Orion Crew Entry Vehicle has been performed in the Arnold Engineering Development Center Tunnel 9. The goals of this test were to measure turbulent heating augmentation levels on the heat shield and to obtain high-fidelity heating data for assessment of computational fluid dynamics methods. Laminar and turbulent predictions were generated for all wind tunnel test conditions and comparisons were performed with the data for the purpose of helping to define uncertainty margins for the computational method. Data from both the wind tunnel test and the computational study are presented herein.

  13. Template Interfaces for Agile Parallel Data-Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilerto Z.

    Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating large enough data sets that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.

  14. Design and deployment of an elastic network test-bed in IHEP data center based on SDN

    NASA Astrophysics Data System (ADS)

    Zeng, Shan; Qi, Fazhi; Chen, Gang

    2017-10-01

    High energy physics experiments produce huge amounts of raw data, and because network resources are shared, the available bandwidth for each experiment is not guaranteed, which may cause link congestion. On the other side, with the development of cloud computing technologies, IHEP has established a cloud platform based on OpenStack which ensures the flexibility of computing and storage resources, and more and more computing applications have been deployed on virtual machines created by OpenStack. However, under the traditional network architecture, network capacity cannot be provisioned elastically, which becomes a bottleneck restricting the flexible application of cloud computing. In order to solve these problems, we propose an elastic cloud data center network architecture based on SDN, and we design a high-performance controller cluster based on OpenDaylight. Finally, we present our current test results.

  15. 14 CFR 1260.35 - Investigative Requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... (a) NASA reserves the right to perform security checks and to deny or restrict access to a NASA Center, facility, or computer system, or to NASA technical information, as NASA deems appropriate. To the...

  16. 14 CFR 1260.35 - Investigative Requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... (a) NASA reserves the right to perform security checks and to deny or restrict access to a NASA Center, facility, or computer system, or to NASA technical information, as NASA deems appropriate. To the...

  17. 14 CFR § 1260.35 - Investigative Requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... January 2004 (a) NASA reserves the right to perform security checks and to deny or restrict access to a NASA Center, facility, or computer system, or to NASA technical information, as NASA deems appropriate...

  18. 14 CFR 1260.35 - Investigative Requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (a) NASA reserves the right to perform security checks and to deny or restrict access to a NASA Center, facility, or computer system, or to NASA technical information, as NASA deems appropriate. To the...

  19. 14 CFR 1260.35 - Investigative Requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... (a) NASA reserves the right to perform security checks and to deny or restrict access to a NASA Center, facility, or computer system, or to NASA technical information, as NASA deems appropriate. To the...

  20. Parallel computing of a climate model on the dawn 1000 by domain decomposition method

    NASA Astrophysics Data System (ADS)

    Bi, Xunqiang

    1997-12-01

    In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. Potential ways to increase the speed-up ratio and to exploit the resources of future massively parallel supercomputers are also discussed.
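
    A minimal sketch of the two-dimensional domain decomposition idea, using MPI's Cartesian topology support (via mpi4py) to split a latitude-longitude grid among processes, is shown below; the grid size, process layout, and periodicity choices are arbitrary examples rather than the IAP model's actual configuration.

    ```python
    # Minimal 2-D domain decomposition sketch with MPI (illustrative; not the IAP
    # model's actual decomposition). Run with, e.g.: mpiexec -n 4 python decomp.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    # Let MPI pick a balanced 2-D process grid, then build a Cartesian communicator.
    dims = MPI.Compute_dims(comm.Get_size(), 2)
    cart = comm.Create_cart(dims, periods=[False, True], reorder=True)  # periodic in longitude
    py, px = cart.Get_coords(cart.Get_rank())

    nlat, nlon = 180, 360  # hypothetical global grid

    def local_range(n, parts, index):
        """Split n points into `parts` nearly equal contiguous blocks; return block `index`."""
        base, extra = divmod(n, parts)
        start = index * base + min(index, extra)
        return start, start + base + (1 if index < extra else 0)

    lat0, lat1 = local_range(nlat, dims[0], py)
    lon0, lon1 = local_range(nlon, dims[1], px)
    north, south = cart.Shift(0, 1)  # neighbor ranks for halo exchange in latitude
    west, east = cart.Shift(1, 1)    # and in longitude

    print(f"rank {cart.Get_rank()}: lats [{lat0},{lat1}), lons [{lon0},{lon1}), "
          f"neighbors N/S={north}/{south} W/E={west}/{east}")
    ```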

  1. High-Performance Computing Data Center Waste Heat Reuse | Computational

    Science.gov Websites

    With heat exchangers, heat energy in the energy recovery water (ERW) loop becomes available to heat the facility's process hot water (PHW) loop. Once heated, the PHW loop supplies an active loop in the courtyard of the ESIF's main entrance and a district heating loop if additional heat is needed.

  2. A Day at the Shopping Mall: Educational Computing in Action. Techniques.

    ERIC Educational Resources Information Center

    Gray, Robert A.; Gray, Dianne E.

    1988-01-01

    This report relates the experiences of a teacher who conducted a graduate computer education class at a shopping center in Dallas, Texas. The activity asked each member of the class to spend up to 5,000 mythical dollars on a hardware system and appropriate software to facilitate his/her current job performance. Students could "shop" in three…

  3. The NAS Parallel Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H.

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.

  4. Unsteady Full Annulus Simulations of a Transonic Axial Compressor Stage

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Hathaway, Michael D.; Chen, Jen-Ping

    2009-01-01

    Two recent research endeavors in turbomachinery at NASA Glenn Research Center have focused on compression system stall inception and compression system aerothermodynamic performance. Physical experiment and computational research are ongoing in support of these research objectives. TURBO, an unsteady, three-dimensional, Navier-Stokes computational fluid dynamics code commissioned and developed by NASA, has been utilized, enhanced, and validated in support of these endeavors. In the research which follows, TURBO is shown to accurately capture compression system flow range, from choke to stall inception, and also to accurately calculate fundamental aerothermodynamic performance parameters. Rigorous full-annulus calculations are performed to validate TURBO's ability to simulate the unstable, unsteady, chaotic stall inception process; as part of these efforts, full-annulus calculations are also performed at a condition approaching choke to further document TURBO's capabilities to compute aerothermodynamic performance data and support a NASA code assessment effort.

  5. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.

  6. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.

  7. Reconfigurable Computing As an Enabling Technology for Single-Photon-Counting Laser Altimetry

    NASA Technical Reports Server (NTRS)

    Powell, Wesley; Hicks, Edward; Pinchinat, Maxime; Dabney, Philip; McGarry, Jan; Murray, Paul

    2003-01-01

    Single-photon-counting laser altimetry is a new measurement technique offering significant advantages in vertical resolution, reducing instrument size, mass, and power, and reducing laser complexity as compared to analog or threshold detection laser altimetry techniques. However, these improvements come at the cost of a dramatically increased requirement for onboard real-time data processing. Reconfigurable computing has been shown to offer considerable performance advantages in performing this processing. These advantages have been demonstrated on the Multi-KiloHertz Micro-Laser Altimeter (MMLA), an aircraft based single-photon-counting laser altimeter developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This paper describes how reconfigurable computing technology was employed to perform MMLA data processing in real-time under realistic operating constraints, along with the results observed. This paper also expands on these prior results to identify concepts for using reconfigurable computing to enable spaceflight single-photon-counting laser altimeter instruments.

  8. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
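
    As a rough illustration of how availability, relative system performance, network bandwidth, and per-job data volume might be folded into a single migration decision, the hedged sketch below ranks candidate sites by estimated completion time. The site names, fields, and weighting are hypothetical and are not the specific migration algorithms evaluated in the paper.

```python
# Illustrative only: a toy "estimated completion time" score for migrating a job
# to a remote site. The real policies in the paper differ in detail.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    relative_speed: float      # performance of the site vs. a reference system
    queue_wait_s: float        # current estimated wait in the site's queue
    bandwidth_mbps: float      # achievable bandwidth from the submitting site

def estimated_completion_s(site: Site, cpu_hours_ref: float, data_mb: float) -> float:
    """Queue wait + scaled compute time + time to move input/output data."""
    compute_s = cpu_hours_ref * 3600.0 / site.relative_speed
    transfer_s = data_mb * 8.0 / site.bandwidth_mbps   # megabits over megabits/s
    return site.queue_wait_s + compute_s + transfer_s

sites = [Site("local", 1.0, 7200, 10_000), Site("remote_a", 2.5, 600, 200)]
job = {"cpu_hours_ref": 4.0, "data_mb": 5000.0}
best = min(sites, key=lambda s: estimated_completion_s(s, **job))
print(best.name)
```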

  9. Functional and performance requirements of the next NOAA-Kansas City computer system

    NASA Technical Reports Server (NTRS)

    Mosher, F. R.

    1985-01-01

    The development of the Advanced Weather Interactive Processing System for the 1990's (AWIPS-90) will result in more timely and accurate forecasts with improved cost effectiveness. As part of the AWIPS-90 initiative, the National Meteorological Center (NMC), the National Severe Storms Forecast Center (NSSFC), and the National Hurricane Center (NHC) are to receive upgrades of interactive processing systems. This National Center Upgrade program will support the specialized inter-center communications, data acquisition, and processing needs of these centers. The missions, current capabilities and general functional requirements for the upgrade to the NSSFC are addressed. System capabilities are discussed along with the requirements for the upgraded system.

  10. Converged photonic data storage and switch platform for exascale disaggregated data centers

    NASA Astrophysics Data System (ADS)

    Pitwon, R.; Wang, K.; Worrall, A.

    2017-02-01

    We report on a converged optically enabled Ethernet storage, switch and compute platform, which could support future disaggregated data center architectures. The platform includes optically enabled Ethernet switch controllers, an advanced electro-optical midplane and optically interchangeable generic end node devices. We demonstrate system level performance using optically enabled Ethernet disk drives and micro-servers across optical links of varied lengths.

  11. Energy Systems Integration Partnerships: NREL + Sandia + Johnson Controls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NREL and Sandia National Laboratories partnered with Johnson Controls to deploy the company's BlueStream Hybrid Cooling System at ESIF's high-performance computing data center to reduce water consumption seen in evaporative cooling towers.

  12. Data Network Weather Service Reporting - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael Frey

    2012-08-30

    A final report is made of a three-year effort to develop a new forecasting paradigm for computer network performance. This effort was made in co-ordination with Fermi Lab's construction of e-Weather Center.

  13. The Effect of Formative Assessments on Language Performance

    ERIC Educational Resources Information Center

    Radford, Brian W.

    2014-01-01

    This study sought to improve the language learning outcomes at the Missionary Training Center in Provo, Utah. Young men and women between the ages of 19-24 are taught a foreign language in an accelerated environment. In an effort to improve learning outcomes, computer-based practice and teaching of language performance criteria were provided to…

  14. Berkeley Lab - Materials Sciences Division

    Science.gov Websites

    MSD facilities and centers include the Center for Computational Study of Excited-State Phenomena in Energy Materials, the Center for X-ray Optics, and the Ion and Materials Physics and Scattering and Instrumentation Science centers.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chevallier, J.J.; Quetier, F.P.; Marshall, D.W.

    Sedco Forex has developed an integrated computer system to enhance the technical performance of the company at various operational levels and to increase the understanding and knowledge of the drill crews. This paper describes the system and how it is used for recording and processing drilling data at the rig site, for associated technical analyses, and for well design, planning, and drilling performance studies at the operational centers. Some capabilities related to the statistical analysis of the company's operational records are also described, and future development of rig computing systems for drilling applications and management tasks is discussed.

  16. The Roots of Beowulf

    NASA Technical Reports Server (NTRS)

    Fischer, James R.

    2014-01-01

    The first Beowulf Linux commodity cluster was constructed at NASA's Goddard Space Flight Center in 1994 and its origins are a part of the folklore of high-end computing. In fact, the conditions within Goddard that brought the idea into being were shaped by rich historical roots, strategic pressures brought on by the ramp up of the Federal High-Performance Computing and Communications Program, growth of the open software movement, microprocessor performance trends, and the vision of key technologists. This multifaceted story is told here for the first time from the point of view of NASA project management.

  17. Gyrokinetic micro-turbulence simulations on the NERSC 16-way SMP IBM SP computer: experiences and performance results

    NASA Astrophysics Data System (ADS)

    Ethier, Stephane; Lin, Zhihong

    2001-10-01

    Earlier this year, the National Energy Research Scientific Computing Center (NERSC) took delivery of the second most powerful computer in the world. With its 2,528 processors running at a peak performance of 1.5 GFlops, this IBM SP machine has a theoretical performance of almost 3.8 TFlops. To efficiently harness such computing power in one single code is not an easy task and requires a good knowledge of the computer's architecture. Here we present the steps that we followed to improve our gyrokinetic micro-turbulence code GTC in order to take advantage of the new 16-way shared memory nodes of the NERSC IBM SP. Performance results are shown as well as details about the improved mixed-mode MPI-OpenMP model that we use. The enhancements to the code allowed us to tackle much bigger problem sizes, getting closer to our goal of simulating an ITER-size tokamak with both kinetic ions and electrons. (This work is supported by DOE Contract No. DE-AC02-76CH03073 (PPPL), and in part by the DOE Fusion SciDAC Project.)
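
    GTC itself is a Fortran code that uses MPI across nodes and OpenMP within each 16-way node. As a loose, language-shifted analogue of that mixed-mode pattern (assuming mpi4py and NumPy are available), the sketch below pairs an MPI reduction across ranks with a thread pool inside each rank; it is illustrative only and is not GTC's actual implementation.

```python
# Conceptual sketch of mixed-mode parallelism (not GTC's actual Fortran/OpenMP code):
# MPI ranks own disjoint slabs of particles; within a rank, a thread pool
# processes chunks of the slab, mimicking OpenMP worksharing.
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nranks = comm.Get_rank(), comm.Get_size()

particles = np.random.default_rng(rank).random((100_000, 2))  # toy (position, weight)

def local_moment(chunk: np.ndarray) -> float:
    return float((chunk[:, 0] * chunk[:, 1]).sum())  # toy charge-deposition-like sum

chunks = np.array_split(particles, 8)                 # "8 threads per node"
with ThreadPoolExecutor(max_workers=8) as pool:
    node_sum = sum(pool.map(local_moment, chunks))

total = comm.allreduce(node_sum, op=MPI.SUM)           # MPI reduction across nodes
if rank == 0:
    print(f"global moment from {nranks} ranks: {total:.3f}")
```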

  18. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly a half-million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), which is the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
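
    As a generic illustration of the containerization pattern described above, a single processing "spider" can be launched with only its input and output directories bind-mounted, so it carries its own software environment. The image name and paths below are hypothetical, and the real DAX container integration is more involved.

```python
# Generic illustration of running one containerized pipeline step (hypothetical
# image and paths; DAX's real container integration is more involved).
import subprocess

def run_spider(image: str, in_dir: str, out_dir: str) -> None:
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{in_dir}:/input:ro",   # bind-mount inputs read-only
        "-v", f"{out_dir}:/output",    # bind-mount a writable output directory
        image,                          # self-contained processing environment
    ]
    subprocess.run(cmd, check=True)

run_spider("example/segmentation-spider:1.0", "/data/scan_0001", "/data/out_0001")
```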

  19. Canal transportation and centering ability of protaper and self-adjusting file system in long oval canals: An ex-vivo cone-beam computed tomography analysis

    PubMed Central

    Shah, Dipali Yogesh; Wadekar, Swati Ishwara; Dadpe, Ashwini Manish; Jadhav, Ganesh Ranganath; Choudhary, Lalit Jayant; Kalra, Dheeraj Deepak

    2017-01-01

    Context and Aims: The purpose of this study was to compare and evaluate the shaping ability of ProTaper (PT) and Self-Adjusting File (SAF) systems using cone-beam computed tomography (CBCT) to assess their performance in oval-shaped root canals. Materials and Methods: Sixty-two mandibular premolars with single oval canals were divided into two experimental groups (n = 31) according to the systems used: Group I – PT and Group II – SAF. Canals were evaluated before and after instrumentation using CBCT to assess centering ratio and canal transportation at three levels. Data were statistically analyzed using one-way analysis of variance, post hoc Tukey's test, and t-test. Results: The SAF showed better centering ability and less canal transportation than the PT only in the buccolingual plane at the 6 and 9 mm levels. The shaping ability of the PT was best in the apical third in both planes. The SAF had statistically significantly better centering and less canal transportation in the buccolingual as compared to the mesiodistal plane at the middle and coronal levels. Conclusions: The SAF produced significantly less transportation and remained more centered than the PT at the middle and coronal levels in the buccolingual plane of oval canals. In the mesiodistal plane, the performance of the two systems was comparable. PMID:28855757

  20. CPMIP: measurements of real computational performance of Earth system models in CMIP6

    NASA Astrophysics Data System (ADS)

    Balaji, Venkatramani; Maisonnave, Eric; Zadeh, Niki; Lawrence, Bryan N.; Biercamp, Joachim; Fladrich, Uwe; Aloisio, Giovanni; Benson, Rusty; Caubel, Arnaud; Durachta, Jeffrey; Foujols, Marie-Alice; Lister, Grenville; Mocavero, Silvia; Underwood, Seth; Wright, Garrett

    2017-01-01

    A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions, and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O and/or memory-bound. Such weak-scaling, I/O, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines. They also do not provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure actually attained performance of Earth system models on different machines, and identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as a basis for a CPMIP, a computational performance model intercomparison project (MIP).
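
    Two widely quoted measures of the kind the paper advocates are simulated years per day (SYPD) and core-hours per simulated year (CHSY); the sketch below computes them from basic run bookkeeping. The exact CPMIP definitions, and the rest of the metric suite, should be taken from the paper itself.

```python
# Hedged illustration: two simple throughput/cost measures of the kind CPMIP
# advocates, computed from run bookkeeping (exact definitions per the paper).
def sypd(simulated_years: float, wall_hours: float) -> float:
    """Simulated years per wall-clock day."""
    return simulated_years / (wall_hours / 24.0)

def chsy(cores: int, wall_hours: float, simulated_years: float) -> float:
    """Core-hours charged per simulated year."""
    return cores * wall_hours / simulated_years

# e.g., a 10-year segment on 4,608 cores taking 36 wall-clock hours
print(f"SYPD = {sypd(10, 36):.2f}, CHSY = {chsy(4608, 36, 10):.0f}")
```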

  1. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 4

    DTIC Science & Technology

    2011-01-01

    Computational and Mathematical Engineering, Stanford University esgs@stanford.edu (650) 723-3764 Molecular Dynamics Models of Antimicrobial ...simulations using low-fidelity Reynolds-averaged models illustrate the limited predictive capabilities of these schemes. The predictions for scalar and...driving force. The AHPCRC group has used their models to predict nonuniform concentration profiles across small channels as a result of variations

  2. PNNL streamlines energy-guzzling computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, Mary T.; Marquez, Andres

    In a room the size of a garage, two rows of six-foot-tall racks holding supercomputer hard drives sit back-to-back. Thin tubes and wires snake off the hard drives, slithering into the corners. Stepping between the rows, a rush of heat whips around you -- the air from fans blowing off processing heat. But walk farther in, between the next racks of hard drives, and the temperature drops noticeably. These drives are being cooled by a non-conducting liquid that runs right over the hardworking processors. The liquid carries the heat away in tubes, saving the air a few degrees. This is the Energy Smart Data Center at Pacific Northwest National Laboratory. The bigger, faster, and meatier supercomputers get, the more energy they consume. PNNL's Andres Marquez has developed this test bed to learn how to train the behemoths in energy efficiency. The work will help supercomputers perform better as well. Processors have to keep cool or suffer from "thermal throttling," says Marquez. "That's the performance threshold where the computer is too hot to run well. That threshold is an industry secret." The center at EMSL, DOE's national scientific user facility at PNNL, harbors several ways of experimenting with energy usage. For example, the room's air conditioning is isolated from the rest of EMSL -- pipes running beneath the floor carry temperature-controlled water through heat exchangers to cooling towers outside. "We can test whether it's more energy efficient to cool directly on the processing chips or out in the water tower," says Marquez. The hard drives feed energy and temperature data to a network server running specially designed software that controls and monitors the data center. To test the center's limits, the team runs the processors flat out – not only on carefully controlled test programs in the Energy Smart computers, but also on real world software from other EMSL research, such as regional weather forecasting models. Marquez's group is also developing "power aware computing", where the computer programs themselves perform calculations more energy efficiently. Maybe once computers get smart about energy, they'll have tips for their users.

  3. Human performance cognitive-behavioral modeling: a benefit for occupational safety.

    PubMed

    Gore, Brian F

    2002-01-01

    Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.

  4. JPL IGS Analysis Center Report, 2001-2003

    NASA Technical Reports Server (NTRS)

    Heflin, M. B.; Bar-Sever, Y. E.; Jefferson, D. C.; Meyer, R. F.; Newport, B. J.; Vigue-Rodi, Y.; Webb, F. H.; Zumberge, J. F.

    2004-01-01

    Three GPS orbit and clock products are currently provided by JPL for consideration by the IGS. Each differs in its latency and quality, with later results being more accurate. Results are typically available in both IGS and GIPSY formats via anonymous ftp. Current performance based on comparisons with the IGS final products is summarized. Orbit performance was determined by computing the 3D RMS difference between each JPL product and the IGS final orbits based on 15 minute estimates from the sp3 files. Clock performance was computed as the RMS difference after subtracting a linear trend based on 15 minute estimates from the sp3 files.
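
    The two comparison statistics described above reduce to small computations: a 3D RMS of position differences over the 15-minute sp3 epochs, and an RMS of clock differences after a best-fit linear trend is removed. A minimal NumPy sketch with synthetic inputs follows.

```python
# Minimal sketch of the two comparison statistics described above, on synthetic data.
import numpy as np

def orbit_rms_3d(xyz_a: np.ndarray, xyz_b: np.ndarray) -> float:
    """3D RMS of position differences (same units as the inputs), epochs x 3 arrays."""
    d = xyz_a - xyz_b
    return float(np.sqrt(np.mean(np.sum(d**2, axis=1))))

def clock_rms_detrended(t: np.ndarray, clk_a: np.ndarray, clk_b: np.ndarray) -> float:
    """RMS of clock differences after removing a best-fit linear trend."""
    d = clk_a - clk_b
    slope, offset = np.polyfit(t, d, 1)
    return float(np.sqrt(np.mean((d - (slope * t + offset)) ** 2)))

t = np.arange(96) * 900.0                        # one day of 15-minute epochs [s]
a, b = np.random.rand(96, 3), np.random.rand(96, 3)
print(orbit_rms_3d(a, b), clock_rms_detrended(t, np.random.rand(96), np.random.rand(96)))
```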

  5. Human performance cognitive-behavioral modeling: a benefit for occupational safety

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2002-01-01

    Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.

  6. Parallel computing in enterprise modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  7. Experiences with Transitioning Science Data Production from a Symmetric Multiprocessor Platform to a Linux Cluster Environment

    NASA Astrophysics Data System (ADS)

    Walter, R. J.; Protack, S. P.; Harris, C. J.; Caruthers, C.; Kusterer, J. M.

    2008-12-01

    NASA's Atmospheric Science Data Center at the NASA Langley Research Center performs all of the science data processing for the Multi-angle Imaging SpectroRadiometer (MISR) instrument. MISR is one of the five remote sensing instruments flying aboard NASA's Terra spacecraft. From the time of Terra launch in December 1999 until February 2008, all MISR science data processing was performed on a Silicon Graphics, Inc. (SGI) platform. However, dramatic improvements in commodity computing technology coupled with steadily declining project budgets during that period eventually made transitioning MISR processing to a commodity computing environment both feasible and necessary. The Atmospheric Science Data Center has successfully ported the MISR science data processing environment from the SGI platform to a Linux cluster environment. There were a multitude of technical challenges associated with this transition. Even though the core architecture of the production system did not change, the manner in which it interacted with underlying hardware was fundamentally different. In addition, there are more potential throughput bottlenecks in a cluster environment than there are in a symmetric multiprocessor environment like the SGI platform and each of these had to be addressed. Once all the technical issues associated with the transition were resolved, the Atmospheric Science Data Center had a MISR science data processing system with significantly higher throughput than the SGI platform at a fraction of the cost. In addition to the commodity hardware, free and open source software such as S4PM, Sun Grid Engine, PostgreSQL and Ganglia play a significant role in the new system. Details of the technical challenges and resolutions, software systems, performance improvements, and cost savings associated with the transition will be discussed. The Atmospheric Science Data Center in Langley's Science Directorate leads NASA's program for the processing, archival and distribution of Earth science data in the areas of radiation budget, clouds, aerosols, and tropospheric chemistry. The Data Center was established in 1991 to support NASA's Earth Observing System and the U.S. Global Change Research Program. It is unique among NASA data centers in the size of its archive, cutting edge computing technology, and full range of data services. For more information regarding ASDC data holdings, documentation, tools and services, visit http://eosweb.larc.nasa.gov

  8. Reduced-Order Modeling: Cooperative Research and Development at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Beran, Philip S.; Cesnik, Carlos E. S.; Guendel, Randal E.; Kurdila, Andrew; Prazenica, Richard J.; Librescu, Liviu; Marzocca, Piergiovanni; Raveh, Daniella E.

    2001-01-01

    Cooperative research and development activities at the NASA Langley Research Center (LaRC) involving reduced-order modeling (ROM) techniques are presented. Emphasis is given to reduced-order methods and analyses based on Volterra series representations, although some recent results using Proper Orthogonal Decomposition (POD) are discussed as well. Results are reported for a variety of computational and experimental nonlinear systems to provide clear examples of the use of reduced-order models, particularly within the field of computational aeroelasticity. The need for and the relative performance (speed, accuracy, and robustness) of reduced-order modeling strategies is documented. The development of unsteady aerodynamic state-space models directly from computational fluid dynamics analyses is presented in addition to analytical and experimental identifications of Volterra kernels. Finally, future directions for this research activity are summarized.

  9. Data Serving Climate Simulation Science at the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen M.

    2011-01-01

    The NASA Center for Climate Simulation (NCCS) provides high performance computational resources, a multi-petabyte archive, and data services in support of climate simulation research and other NASA-sponsored science. This talk describes the NCCS's data-centric architecture and processing, which are evolving in anticipation of researchers' growing requirements for higher resolution simulations and increased data sharing among NCCS users and the external science community.

  10. Root Canal Transportation and Centering Ability of Nickel-Titanium Rotary Instruments in Mandibular Premolars Assessed Using Cone-Beam Computed Tomography.

    PubMed

    Mamede-Neto, Iussif; Borges, Alvaro Henrique; Guedes, Orlando Aguirre; de Oliveira, Durvalino; Pedro, Fábio Luis Miranda; Estrela, Carlos

    2017-01-01

    The aim of this study was to evaluate, using cone-beam computed tomography (CBCT), transportation and centralization of different nickel-titanium (NiTi) rotary instruments. One hundred and twenty eight mandibular premolars were selected and instrumented using the following brands of NiTi files: WaveOne, WaveOne Gold, Reciproc, ProTaper Next, ProTaper Gold, Mtwo, BioRaCe and RaCe. CBCT imaging was performed before and after root canal preparation to obtain measurements of mesial and distal dentin walls and calculations of root canal transportation and centralization. A normal distribution of data was confirmed by the Kolmogorov-Smirnov and Levene tests, and results were assessed using the Kruskal-Wallis test. Statistical significance was set at 5%. ProTaper Gold produced the lowest canal transportation values, and RaCe, the highest. ProTaper Gold files also showed the highest values for centering ability, whereas BioRaCe showed the lowest. No significant differences were found across the different instruments in terms of canal transportation and centering ability (P > 0.05). Based on the methodology employed, all instruments used for root canal preparation of mandibular premolars performed similarly with regard to canal transportation and centering ability.
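
    The transportation and centering calculations referred to in both CBCT studies above are usually derived from pre- and post-instrumentation wall thicknesses. A common formulation (often attributed to Gambill et al.; the present study may use a variant) is:

```latex
% Commonly used definitions from pre-/post-instrumentation CBCT wall thicknesses
% (m = mesial, d = distal; 1 = before, 2 = after preparation):
\[
  \text{transportation} \;=\; \bigl| (m_1 - m_2) \;-\; (d_1 - d_2) \bigr|
\]
\[
  \text{centering ratio} \;=\;
  \frac{\min\!\bigl(m_1 - m_2,\; d_1 - d_2\bigr)}{\max\!\bigl(m_1 - m_2,\; d_1 - d_2\bigr)},
  \qquad 1 = \text{perfectly centered.}
\]
```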

  11. Validation of the Kp Geomagnetic Index Forecast at CCMC

    NASA Astrophysics Data System (ADS)

    Frechette, B. P.; Mays, M. L.

    2017-12-01

    The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances for space weather in the magnetosphere such as geomagnetic storms and substorms. In this study, we performed validation on the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand the Kp forecast performance because it's critical for NASA missions to have confidence in the space weather forecast. This research was done by computing the Kp error for each forecast (average, minimum, maximum) and each synoptic period. Then to quantify forecast performance we computed the mean error, mean absolute error, root mean square error, multiplicative bias and correlation coefficient. A contingency table was made for each forecast and skill scores were computed. The results are compared to the perfect score and reference forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) Kp prediction equation performed better than the maximum or average of the prediction. However, persistence (reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts within a range of 1, even though persistence beats it.
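
    The verification statistics listed above are standard and easy to reproduce; as an illustration, the sketch below computes the continuous error metrics and one possible contingency-table skill score from paired forecast and observation arrays. The event threshold and the choice of the Heidke score are assumptions, not the study's exact settings.

```python
# Illustrative computation of the verification statistics named above
# (threshold and skill-score choice are assumptions, not the study's settings).
import numpy as np

def continuous_metrics(fcst: np.ndarray, obs: np.ndarray) -> dict:
    err = fcst - obs
    return {
        "mean_error": float(err.mean()),
        "mean_abs_error": float(np.abs(err).mean()),
        "rmse": float(np.sqrt((err**2).mean())),
        "multiplicative_bias": float(fcst.mean() / obs.mean()),
        "correlation": float(np.corrcoef(fcst, obs)[0, 1]),
    }

def heidke_skill_score(fcst: np.ndarray, obs: np.ndarray, threshold: float = 5.0) -> float:
    """2x2 contingency table for 'event' = Kp >= threshold, then Heidke skill score."""
    f, o = fcst >= threshold, obs >= threshold
    a = np.sum(f & o); b = np.sum(f & ~o); c = np.sum(~f & o); d = np.sum(~f & ~o)
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / (a + b + c + d)
    return float((a + d - expected) / (a + b + c + d - expected))

rng = np.random.default_rng(0)
obs = rng.integers(0, 9, 500).astype(float)
fcst = np.clip(obs + rng.normal(0, 1, 500), 0, 9)
print(continuous_metrics(fcst, obs), heidke_skill_score(fcst, obs))
```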

  12. Stanford/NASA-Ames Center of Excellence in model-based human performance

    NASA Technical Reports Server (NTRS)

    Wandell, Brian A.

    1990-01-01

    The human operator plays a critical role in many aeronautic and astronautic missions. The Stanford/NASA-Ames Center of Excellence in Model-Based Human Performance (COE) was initiated in 1985 to further our understanding of the performance capabilities and performance limits of the human component of aeronautic and astronautic projects. Support from the COE is devoted to those areas of experimental and theoretical work designed to summarize and explain human performance by developing computable performance models. The ultimate goal is to make these computable models available to other scientists for use in design and evaluation of aeronautic and astronautic instrumentation. Within vision science, two topics have received particular attention. First, researchers did extensive work analyzing the human ability to recognize object color relatively independent of the spectral power distribution of the ambient lighting (color constancy). The COE has supported a number of research papers in this area, as well as the development of a substantial data base of surface reflectance functions, ambient illumination functions, and an associated software package for rendering and analyzing image data with respect to these spectral functions. Second, the COE supported new empirical studies on the problem of selecting colors for visual display equipment to enhance human performance in discrimination and recognition tasks.

  13. Elastic Cloud Computing Infrastructures in the Open Cirrus Testbed Implemented via Eucalyptus

    NASA Astrophysics Data System (ADS)

    Baun, Christian; Kunze, Marcel

    Cloud computing realizes the advantages and overcomes some restrictions of the grid computing paradigm. Elastic infrastructures can easily be created and managed by cloud users. In order to accelerate the research on data center management and cloud services, the OpenCirrus™ research testbed has been started by HP, Intel and Yahoo!. Although commercial cloud offerings are proprietary, Open Source solutions exist in the field of IaaS with Eucalyptus, PaaS with AppScale and at the applications layer with Hadoop MapReduce. This paper examines the I/O performance of cloud computing infrastructures implemented with Eucalyptus in contrast to Amazon S3.
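
    The paper's benchmark setup is not reproduced here, but a streaming-write micro-benchmark of the following kind is a common starting point for such I/O comparisons; it measures throughput to whatever path is supplied (for example, an attached cloud volume) and is purely illustrative.

```python
# Generic write-throughput micro-benchmark (not the paper's methodology):
# stream a fixed volume of data to a path and report MB/s.
import os, time

def write_throughput_mb_s(path: str, total_mb: int = 256, block_kb: int = 1024) -> float:
    block = os.urandom(block_kb * 1024)
    n_blocks = total_mb * 1024 // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())               # include time to reach the device
    return total_mb / (time.perf_counter() - start)

print(f"{write_throughput_mb_s('/tmp/io_test.bin'):.1f} MB/s")
```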

  14. Modeling Subsurface Reactive Flows Using Leadership-Class Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Richard T; Hammond, Glenn; Lichtner, Peter

    2009-01-01

    We describe our experiences running PFLOTRAN - a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media - on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.

  15. Implementation of radiation shielding calculation methods. Volume 1: Synopsis of methods and summary of results

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    The work performed in the following areas is summarized: (1) A realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center computer code package. This code package includes one and two dimensional discrete ordinate transport, point kernel, and single scatter techniques, as well as cross section preparation and data processing codes. (2) Techniques were developed to improve the automated data transfer in the coupled computation method of the computer code package and improve the utilization of this code package on the Univac-1108 computer system. (3) The MSFC master data libraries were updated.

  16. Ammonia Oxidation by Abstraction of Three Hydrogen Atoms from a Mo–NH 3 Complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Papri; Heiden, Zachariah M.; Wiedner, Eric S.

    We report ammonia oxidation by homolytic cleavage of all three H atoms from a Mo-15NH3 complex using the 2,4,6-tri-tert-butylphenoxyl radical to afford a Mo-alkylimido (Mo=15NR) complex (R = 2,4,6-tri-t-butylcyclohexa-2,5-dien-1-one). Reductive cleavage of Mo=15NR generates a terminal Mo≡N nitride, and a [Mo-15NH]+ complex is formed by protonation. Computational analysis describes the energetic profile for the stepwise removal of three H atoms from the Mo-15NH3 complex and the formation of Mo=15NR. Acknowledgment. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences. EPR and mass spectrometry experiments were performed using EMSL, a national scientific user facility sponsored by the DOE's Office of Biological and Environmental Research and located at PNNL. The authors thank Dr. Eric D. Walter and Dr. Rosalie Chu for assistance in performing EPR and mass spectrometry analysis, respectively. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. DOE.

  17. An Application-Based Performance Evaluation of NASAs Nebula Cloud Computing Platform

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing as it promises high potential. In this paper, we examine the feasibility, performance, and scalability of production quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents the comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  18. The National Center for Biomedical Ontology

    PubMed Central

    Noy, Natalya F; Shah, Nigam H; Whetzel, Patricia L; Chute, Christopher G; Story, Margaret-Anne; Smith, Barry

    2011-01-01

    The National Center for Biomedical Ontology is now in its seventh year. The goals of this National Center for Biomedical Computing are to: create and maintain a repository of biomedical ontologies and terminologies; build tools and web services to enable the use of ontologies and terminologies in clinical and translational research; educate their trainees and the scientific community broadly about biomedical ontology and ontology-based technology and best practices; and collaborate with a variety of groups who develop and use ontologies and terminologies in biomedicine. The centerpiece of the National Center for Biomedical Ontology is a web-based resource known as BioPortal. BioPortal makes available for research in computationally useful forms more than 270 of the world's biomedical ontologies and terminologies, and supports a wide range of web services that enable investigators to use the ontologies to annotate and retrieve data, to generate value sets and special-purpose lexicons, and to perform advanced analytics on a wide range of biomedical data. PMID:22081220

  19. Polyphony: A Workflow Orchestration Framework for Cloud Computing

    NASA Technical Reports Server (NTRS)

    Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom

    2010-01-01

    Cloud Computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Lab (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, as well as spare resources at the supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We will conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations of images from around the solar system, including Mars, Saturn, and Titan.

  20. A Text Analysis of the Marine Corps Fitness Report

    DTIC Science & Technology

    2017-06-01

    difficulty in quantitatively analyzing textual data. The study pulls 835 anonymous and non-attributable surveys between 2005 and 2009 from the Center for... quantitative assessments of performance. Subject terms: natural language processing, fitness reports, computational linguistics, manpower. ...Corps provide word-picture guidance to distinguish talented Marines and promote conformity in issuing quantitative assessments of performance.

  1. 77 FR 44313 - 2011 Career Reserved Senior Executive Positions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-27

    ... High Performance Computing and Communications. Chief Financial Officer. Deputy Director, Acquisition... AGRICULTURE... Office of Deputy Director, Communications. Creative Development. Office of the Chief Associate... Officer. Chief Information Officer for NESDIS. Director, Space Environment Center. National Oceanic and...

  2. Mild Cognitive Impairment: What Do We Do Now?

    MedlinePlus

    ... in studies that focus on individual health, computer use and technology, family relationships and caregiving, community services, housing, and ... Reserve Officer Training Corps Navy Research Centers Science, Technology, and ... of Education School of Performing Arts College Office of the ...

  3. Internal computational fluid mechanics on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Andersen, Bernhard H.; Benson, Thomas J.

    1987-01-01

    The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.

  4. Distributing Data to Hand-Held Devices in a Wireless Network

    NASA Technical Reports Server (NTRS)

    Hodges, Mark; Simmons, Layne

    2008-01-01

    ADROIT is a developmental computer program for real-time distribution of complex data streams for display on Web-enabled, portable terminals held by members of an operational team of a spacecraft-command-and-control center who may be located away from the center. Examples of such terminals include personal data assistants, laptop computers, and cellular telephones. ADROIT would make it unnecessary to equip each terminal with platform- specific software for access to the data streams or with software that implements the information-sharing protocol used to deliver telemetry data to clients in the center. ADROIT is a combination of middleware plus software specific to the center. (Middleware enables one application program to communicate with another by performing such functions as conversion, translation, consolidation, and/or integration.) ADROIT translates a data stream (voice, video, or alphanumerical data) from the center into Extensible Markup Language, effectuates a subscription process to determine who gets what data when, and presents the data to each user in real time. Thus, ADROIT is expected to enable distribution of operations and to reduce the cost of operations by reducing the number of persons required to be in the center.
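
    As a generic illustration of the translation step described above (not ADROIT's actual code), a telemetry sample can be serialized to XML and handed to whichever devices have subscribed to that channel; the element and field names are hypothetical.

```python
# Generic illustration of serializing one telemetry sample to XML for delivery
# to subscribed clients (element and field names are hypothetical).
import xml.etree.ElementTree as ET

def sample_to_xml(channel: str, value: float, timestamp: str) -> bytes:
    root = ET.Element("telemetry")
    ET.SubElement(root, "channel").text = channel
    ET.SubElement(root, "value").text = f"{value:.3f}"
    ET.SubElement(root, "time").text = timestamp
    return ET.tostring(root, encoding="utf-8")

subscriptions = {"BATT_V": ["pda-07", "laptop-12"]}   # channel -> subscribed devices
payload = sample_to_xml("BATT_V", 28.415, "2008-06-01T12:00:00Z")
for device in subscriptions.get("BATT_V", []):
    print(device, payload.decode())                    # stand-in for pushing to the device
```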

  5. SCinet Architecture: Featured at the International Conference for High Performance Computing, Networking, Storage and Analysis 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyonnais, Marc; Smith, Matt; Mace, Kate P.

    SCinet is the purpose-built network that operates during the International Conference for High Performance Computing, Networking, Storage and Analysis (Super Computing or SC). Created each year for the conference, SCinet brings to life a high-capacity network that supports applications and experiments that are a hallmark of the SC conference. The network links the convention center to research and commercial networks around the world. This resource serves as a platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of applications. Volunteers from academia, government and industry work together to design and deliver the SCinet infrastructure. Industry vendors and carriers donate millions of dollars in equipment and services needed to build and support the local and wide area networks. Planning begins more than a year in advance of each SC conference and culminates in a high intensity installation in the days leading up to the conference. The SCinet architecture for SC16 illustrates a dramatic increase in participation from the vendor community, particularly those that focus on network equipment. Software-Defined Networking (SDN) and Data Center Networking (DCN) are present in nearly all aspects of the design.

  6. Intricacies of modern supercomputing illustrated with recent advances in simulations of strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Schulthess, Thomas C.

    2013-03-01

    The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.

  7. Reliability issues in active control of large flexible space structures

    NASA Technical Reports Server (NTRS)

    Vandervelde, W. E.

    1986-01-01

    Efforts in this reporting period were centered on four research tasks: design of failure detection filters for robust performance in the presence of modeling errors, design of generalized parity relations for robust performance in the presence of modeling errors, design of failure sensitive observers using the geometric system theory of Wonham, and computational techniques for evaluation of the performance of control systems with fault tolerance and redundancy management.

  8. Space and Earth Sciences, Computer Systems, and Scientific Data Analysis Support, Volume 1

    NASA Technical Reports Server (NTRS)

    Estes, Ronald H. (Editor)

    1993-01-01

    This Final Progress Report covers the specific technical activities of Hughes STX Corporation for the last contract triannual period of 1 June through 30 Sep. 1993, in support of assigned task activities at Goddard Space Flight Center (GSFC). It also provides a brief summary of work throughout the contract period of performance on each active task. Technical activity is presented in Volume 1, while financial and level-of-effort data is presented in Volume 2. Technical support was provided to all Division and Laboratories of Goddard's Space Sciences and Earth Sciences Directorates. Types of support include: scientific programming, systems programming, computer management, mission planning, scientific investigation, data analysis, data processing, data base creation and maintenance, instrumentation development, and management services. Mission and instruments supported include: ROSAT, Astro-D, BBXRT, XTE, AXAF, GRO, COBE, WIND, UIT, SMM, STIS, HEIDI, DE, URAP, CRRES, Voyagers, ISEE, San Marco, LAGEOS, TOPEX/Poseidon, Pioneer-Venus, Galileo, Cassini, Nimbus-7/TOMS, Meteor-3/TOMS, FIFE, BOREAS, TRMM, AVHRR, and Landsat. Accomplishments include: development of computing programs for mission science and data analysis, supercomputer applications support, computer network support, computational upgrades for data archival and analysis centers, end-to-end management for mission data flow, scientific modeling and results in the fields of space and Earth physics, planning and design of GSFC VO DAAC and VO IMS, fabrication, assembly, and testing of mission instrumentation, and design of mission operations center.

  9. Computational Analyses of Offset Stream Nozzles for Noise Reduction

    NASA Technical Reports Server (NTRS)

    Dippold, Vance, III; Foster, Lancert; Wiese, Michael

    2007-01-01

    The Wind computational fluid dynamics code was used to perform a series of simulations on two offset stream nozzle concepts for jet noise reduction. The first concept used an S-duct to direct the secondary stream to the lower side of the nozzle. The second concept used vanes to turn the secondary flow downward. The analyses were completed in preparation of tests conducted in the NASA Glenn Research Center Aeroacoustic Propulsion Laboratory. The offset stream nozzles demonstrated good performance and reduced the amount of turbulence on the lower side of the jet plume. The computer analyses proved instrumental in guiding the development of the final test configurations and giving insight into the flow mechanics of offset stream nozzles. The computational predictions were compared with flowfield results from the jet rig testing and showed excellent agreement.

  10. Laboratory Computing Resource Center

    Science.gov Websites

    Laboratory Computing Resource Center: systems, computing, and data resources; purchasing resources; future plans; getting started using LCRC; software; best practices and policies; and getting help and support. Latest announcements are posted on the center's website.

  11. EngineSim: Turbojet Engine Simulator Adapted for High School Classroom Use

    NASA Technical Reports Server (NTRS)

    Petersen, Ruth A.

    2001-01-01

    EngineSim is an interactive educational computer program that allows users to explore the effect of engine operation on total aircraft performance. The software is supported by a basic propulsion web site called the Beginner's Guide to Propulsion, which includes educator-created, web-based activities for the classroom use of EngineSim. In addition, educators can schedule videoconferencing workshops in which EngineSim's creator demonstrates the software and discusses its use in the educational setting. This software is a product of NASA Glenn Research Center's Learning Technologies Project, an educational outreach initiative within the High Performance Computing and Communications Program.

  12. Prediction and characterization of application power use in a high-performance computing environment

    DOE PAGES

    Bugbee, Bruce; Phillips, Caleb; Egan, Hilary; ...

    2017-02-27

    Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Lastly, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
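
    As a loose illustration of the idea (not the paper's models or scheduler), the sketch below fits a simple linear predictor of job power from two hypothetical a priori features and uses the predictions to admit queued jobs under a facility power cap.

```python
# Loose illustration (not the paper's model): predict per-job power from simple
# features, then admit queued jobs only while a facility power cap is respected.
import numpy as np

# hypothetical historical jobs: [nodes, avg_cpu_utilization] -> measured kW
X = np.array([[16, 0.9], [64, 0.7], [128, 0.95], [32, 0.5], [256, 0.85]], float)
y = np.array([6.1, 19.8, 51.0, 8.9, 93.5])

A = np.hstack([X, np.ones((len(X), 1))])          # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)      # least-squares fit

def predicted_kw(nodes: int, util: float) -> float:
    return float(coef @ [nodes, util, 1.0])

def admit(queue: list[tuple[str, int, float]], cap_kw: float) -> list[str]:
    """Greedily start queued jobs (name, nodes, util) while staying under the cap."""
    used, started = 0.0, []
    for name, nodes, util in queue:
        p = predicted_kw(nodes, util)
        if used + p <= cap_kw:
            used += p
            started.append(name)
    return started

print(admit([("a", 128, 0.9), ("b", 64, 0.6), ("c", 32, 0.9)], cap_kw=80.0))
```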

  13. A Sensitivity Analysis of the Rigid Pavement Life-Cycle Cost Analysis Program

    DOT National Transportation Integrated Search

    2000-12-01

    Original Report Date: September 1999. This report describes the sensitivity analysis performed on the Rigid Pavement Life-Cycle Cost Analysis program, a computer program developed by the Center for Transportation Research for the Texas Department of ...

  14. Passive Localization of Multiple Sources Using Widely-Spaced Arrays With Application to Marine Mammals

    DTIC Science & Technology

    2006-09-30

    IEEE OES Student Poster Program Oceans ’05 Europe, Brest, France, June 20-23, 2005. Sponsored by Thales Underwater Systems. Student Engagement Award to E.-M. Nosal – Maui High Performance Computing Center (2005-2006).

  15. High-Performance Computing Data Center Water Usage Efficiency |

    Science.gov Websites

    cooler-an advanced dry cooler that uses refrigerant in a passive cycle to dissipate heat-was installed at efficiency-using wet cooling when it's hot and dry cooling when it's not. Learn more about NREL's partnership

  16. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed and compared. These systems are: the BSD socket programming interface, IONA's Orbix, an implementation of the CORBA specification, and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.
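
    As a point of reference for the kind of low-level baseline measured above, the following is a minimal round-trip latency microbenchmark over plain TCP sockets. The host, port, message size, and iteration count are arbitrary assumptions; this is not the benchmark code used in the study.

        # Minimal TCP echo round-trip timing; all parameters are illustrative.
        import socket
        import threading
        import time

        HOST, PORT, N, SIZE = "127.0.0.1", 50007, 1000, 1024

        def echo_server():
            with socket.socket() as srv:
                srv.bind((HOST, PORT))
                srv.listen(1)
                conn, _ = srv.accept()
                with conn:
                    while True:
                        data = conn.recv(SIZE)
                        if not data:
                            break
                        conn.sendall(data)

        threading.Thread(target=echo_server, daemon=True).start()
        time.sleep(0.2)                       # give the server a moment to start

        with socket.socket() as cli:
            cli.connect((HOST, PORT))
            payload = b"x" * SIZE
            t0 = time.perf_counter()
            for _ in range(N):
                cli.sendall(payload)
                received = 0
                while received < SIZE:        # the echo may arrive in pieces
                    received += len(cli.recv(SIZE - received))
            elapsed = time.perf_counter() - t0

        print(f"average round trip for {SIZE}-byte messages: {elapsed / N * 1e6:.1f} us")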

  17. MSFC crack growth analysis computer program, version 2 (users manual)

    NASA Technical Reports Server (NTRS)

    Creager, M.

    1976-01-01

    An updated version of the George C. Marshall Space Flight Center Crack Growth Analysis Program is described. The updated computer program has significantly expanded capabilities over the original one. This increased capability includes an extensive expansion of the library of stress intensity factors, plotting capability, increased design iteration capability, and the capability of performing proof test logic analysis. The technical approaches used within the computer program are presented, and the input and output formats and options are described. Details of the stress intensity equations, example data, and example problems are presented.

  18. The Petascale Data Storage Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, Garth; Long, Darrell; Honeyman, Peter

    2013-07-01

    Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz.

  19. SCEAPI: A unified Restful Web API for High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI including authentication, file transfer and job management for creating, submitting and monitoring, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing the custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution to extend opportunistic HPC resources.
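
    To make the job-management workflow concrete, here is an illustrative REST client in the style described above. The base URL, endpoint paths, JSON field names, and token scheme are placeholders invented for this sketch; they are not SCEAPI's actual interface.

        # Hypothetical REST workflow: authenticate, upload input, submit, poll.
        import time
        import requests

        BASE = "https://sceapi.example.org/api/v1"          # placeholder base URL

        # 1. Authenticate and obtain a token (assumed JSON login endpoint).
        token = requests.post(f"{BASE}/auth",
                              json={"user": "alice", "password": "secret"}).json()["token"]
        headers = {"Authorization": f"Bearer {token}"}

        # 2. Upload an input file (assumed multipart file-transfer endpoint).
        with open("input.dat", "rb") as f:
            requests.post(f"{BASE}/files", headers=headers, files={"file": f}).raise_for_status()

        # 3. Create and submit a job description (fields are assumptions).
        job = {"app": "demo_app", "cores": 128, "args": ["input.dat"]}
        job_id = requests.post(f"{BASE}/jobs", headers=headers, json=job).json()["id"]

        # 4. Poll until the job reaches a terminal state.
        while True:
            state = requests.get(f"{BASE}/jobs/{job_id}", headers=headers).json()["state"]
            if state in ("COMPLETED", "FAILED"):
                break
            time.sleep(30)
        print("job finished with state:", state)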

  20. An Overview of the Iowa Flood Forecasting and Monitoring System

    NASA Astrophysics Data System (ADS)

    Krajewski, W. F.

    2016-12-01

    Following the 2008 flood that devastated eastern Iowa, the state legislators established the Iowa Flood Center at the University of Iowa with the mission of translational research towards flood mitigation. The Center has advanced several components towards this goal. In particular, the Center has developed (1) state-wide flood inundation maps based on airborne lidar-based topography data and hydraulic models; (2) a network of nearly 250 real-time ultrasonic river stage sensors; (3) a detailed rainfall-runoff model for real time streamflow forecasting; and (4) cyberinfrastructure to acquire and manage data that includes High Performance Computing and a browser-based information system designed for use by the general public. The author discusses these components, their operational performance and their potential to assist in the development of similar nationwide systems. Specifically, many developments taking place at the National Water Center can benefit from the Iowa system serving as a reference.

  1. Transport coefficients of dense fluids composed of globular molecules. Equilibrium molecular dynamics investigations using more-center Lennard-Jones potentials

    NASA Astrophysics Data System (ADS)

    Hoheisel, C.

    1988-09-01

    Equilibrium molecular dynamics calculations with constraints have been performed for model liquids SF6 and CF4. The computations were carried out with four- and six-center Lennard-Jones potentials and up to 2×10^5 integration steps. The shear viscosity, bulk viscosity, and thermal conductivity have been calculated using Green-Kubo relations in the formulation of "molecule variables." Various thermodynamic states were investigated. For SF6, a detailed comparison with experimental data was possible. For CF4, the MD results could only be compared with experiment for one liquid state. For the latter liquid, a complementary comparison was performed using MD results obtained with a one-center Lennard-Jones potential. A limited test of the particle-number dependence of the results is presented. Partial and total correlation functions are shown and discussed with respect to findings obtained for the one-center Lennard-Jones liquid.
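
    The Green-Kubo route mentioned above integrates an equilibrium autocorrelation function. Below is a minimal sketch for the shear viscosity, assuming a pre-computed time series of an off-diagonal pressure-tensor component; the file name, timestep, volume, and temperature are placeholders, not values from the paper.

        # Green-Kubo shear viscosity: eta = (V / kB T) * integral of <Pxy(0) Pxy(t)> dt
        import numpy as np

        kB = 1.380649e-23            # Boltzmann constant, J/K
        dt = 2.0e-15                 # time between stored samples, s (assumed)
        V, T = 1.0e-26, 300.0        # system volume (m^3) and temperature (K), assumed

        pxy = np.loadtxt("pxy.dat")  # off-diagonal stress samples in Pa (assumed file)
        n = len(pxy)
        max_lag = n // 4

        # Stress autocorrelation function <Pxy(0) Pxy(t)> for each lag.
        acf = np.array([np.mean(pxy[:n - lag] * pxy[lag:]) for lag in range(max_lag)])

        eta = (V / (kB * T)) * np.trapz(acf, dx=dt)
        print("shear viscosity estimate (Pa s):", eta)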

  2. Time-Dependent Simulation of Incompressible Flow in a Turbopump Using Overset Grid Approach

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    This paper reports the progress being made towards complete unsteady turbopump simulation capability by using overset grid systems. A computational model of a turbopump impeller is used as a test case for the performance evaluation of the MPI, hybrid MPI/OpenMP, and MLP versions of the INS3D code. Relative motion of the grid system for rotor-stator interaction was obtained by employing overset grid techniques. Unsteady computations for a turbopump, which contains 114 zones with 34.3 million grid points, are performed on Origin 2000 systems at NASA Ames Research Center. The approach taken for these simulations, and the performance of the parallel versions of the code, are presented.

  3. Multi-party Semi-quantum Key Agreement with Delegating Quantum Computation

    NASA Astrophysics Data System (ADS)

    Liu, Wen-Jie; Chen, Zhen-Yu; Ji, Sai; Wang, Hai-Bin; Zhang, Jun

    2017-10-01

    A multi-party semi-quantum key agreement (SQKA) protocol based on delegating quantum computation (DQC) model is proposed by taking Bell states as quantum resources. In the proposed protocol, the participants only need the ability of accessing quantum channel and preparing single photons {|0〉, |1〉, |+〉, |-〉}, while the complicated quantum operations, such as the unitary operations and Bell measurement, will be delegated to the remote quantum center. Compared with previous quantum key agreement protocols, this client-server model is more feasible in the early days of the emergence of quantum computers. In order to prevent the attacks from outside eavesdroppers, inner participants and quantum center, two single photon sequences are randomly inserted into Bell states: the first sequence is used to perform the quantum channel detection, while the second is applied to disorder the positions of message qubits, which guarantees the security of the protocol.

  4. CSP: A Multifaceted Hybrid Architecture for Space Computing

    NASA Technical Reports Server (NTRS)

    Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron

    2014-01-01

    Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.

  5. A Full Navier-Stokes Analysis of Subsonic Diffuser of a Bifurcated 70/30 Supersonic Inlet for High Speed Civil Transport Application

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1994-01-01

    A full Navier-Stokes analysis was performed to evaluate the performance of the subsonic diffuser of a NASA Lewis Research Center 70/30 mixed-compression bifurcated supersonic inlet for high speed civil transport application. The PARC3D code was used in the present study. The computations were also performed when approximately 2.5 percent of the engine mass flow was allowed to bypass through the engine bypass doors. The computational results were compared with the available experimental data which consisted of detailed Mach number and total pressure distribution along the entire length of the subsonic diffuser. The total pressure recovery, flow distortion, and crossflow velocity at the engine face were also calculated. The computed surface ramp and cowl pressure distributions were compared with experiments. Overall, the computational results compared well with experimental data. The present CFD analysis demonstrated that the bypass flow improves the total pressure recovery and lessens flow distortions at the engine face.

  6. A High Performance Computing Study of a Scalable FISST-Based Approach to Multi-Target, Multi-Sensor Tracking

    NASA Astrophysics Data System (ADS)

    Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.

    2016-09-01

    Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC) that enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.

  7. Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Case, Jonathan; Venners, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; Limaye, Ashutosh; O'Brien, Raymond

    2015-01-01

    The use of cloud computing resources continues to grow within the public and private sector components of the weather enterprise as users become more familiar with cloud-computing concepts, and competition among service providers continues to reduce costs and other barriers to entry. Cloud resources can also provide capabilities similar to high-performance computing environments, supporting multi-node systems required for near real-time, regional weather predictions. Referred to as "Infrastructure as a Service", or IaaS, the use of cloud-based computing hardware in an on-demand payment system allows for rapid deployment of a modeling system in environments lacking access to a large, supercomputing infrastructure. Use of IaaS capabilities to support regional weather prediction may be of particular interest to developing countries that have not yet established large supercomputing resources, but would otherwise benefit from a regional weather forecasting capability. Recently, collaborators from NASA Marshall Space Flight Center and Ames Research Center have developed a scripted, on-demand capability for launching the NOAA/NWS Science and Training Resource Center (STRC) Environmental Modeling System (EMS), which includes pre-compiled binaries of the latest version of the Weather Research and Forecasting (WRF) model. The WRF-EMS provides scripting for downloading appropriate initial and boundary conditions from global models, along with higher-resolution vegetation, land surface, and sea surface temperature data sets provided by the NASA Short-term Prediction Research and Transition (SPoRT) Center. This presentation will provide an overview of the modeling system capabilities and benchmarks performed on the Amazon Elastic Compute Cloud (EC2) environment. In addition, the presentation will discuss future opportunities to deploy the system in support of weather prediction in developing countries supported by NASA's SERVIR Project, which provides capacity building activities in environmental monitoring and prediction across a growing number of regional hubs throughout the world. Capacity-building applications that extend numerical weather prediction to developing countries are intended to provide near real-time applications to benefit public health, safety, and economic interests, but may have a greater impact during disaster events by providing a source for local predictions of weather-related hazards, or impacts that local weather events may have during the recovery phase.

  8. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatele, Abhinav

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  9. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Patrick

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  10. On implementation of DCTCP on three-tier and fat-tree data center network topologies.

    PubMed

    Zafar, Saima; Bashir, Abeer; Chaudhry, Shafique Ahmad

    2016-01-01

    A data center is a facility for housing computational and storage systems interconnected through a communication network called the data center network (DCN). Due to a tremendous growth in the computational power, storage capacity and the number of inter-connected servers, the DCN faces challenges concerning efficiency, reliability and scalability. Although transmission control protocol (TCP) is a time-tested transport protocol in the Internet, DCN challenges such as inadequate buffer space in switches and bandwidth limitations have prompted the researchers to propose techniques to improve TCP performance or design new transport protocols for DCN. Data center TCP (DCTCP) has emerged as one of the most promising solutions in this domain; it employs the explicit congestion notification feature of TCP to enhance the TCP congestion control algorithm. While DCTCP has been analyzed for a two-tier tree-based DCN topology for traffic between servers in the same rack, which is common in cloud applications, it remains oblivious to the traffic patterns common in university and private enterprise networks, which traverse the complete network interconnect spanning upper tier layers. We also recognize that DCTCP performance cannot remain unaffected by the underlying DCN architecture, hence there is a need to test and compare DCTCP performance when implemented over diverse DCN architectures. Some of the most notable DCN architectures are the legacy three-tier, fat-tree, BCube, DCell, VL2, and CamCube. In this research, we simulate the two switch-centric DCN architectures, the widely deployed legacy three-tier architecture and the promising fat-tree architecture, using a network simulator and analyze the performance of DCTCP in terms of throughput and delay for realistic traffic patterns. We also examine how DCTCP prevents incast and outcast congestion when realistic DCN traffic patterns are employed in the above-mentioned topologies. Our results show that the underlying DCN architecture significantly impacts DCTCP performance. We find that DCTCP gives optimal performance in the fat-tree topology and is most suitable for large networks.
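
    For orientation, the fat-tree topology referred to above follows the standard k-ary construction, whose element counts can be sized as in the short sketch below before any simulator-specific wiring is done (the function itself is ours, not part of the paper).

        # Element counts for a k-ary fat-tree data center network.
        def fat_tree_sizes(k: int) -> dict:
            assert k % 2 == 0, "a k-ary fat-tree requires an even k"
            return {
                "pods": k,
                "core_switches": (k // 2) ** 2,
                "aggregation_switches": k * (k // 2),
                "edge_switches": k * (k // 2),
                "hosts": (k ** 3) // 4,
            }

        # For k = 4: 4 pods, 4 core, 8 aggregation, 8 edge switches, 16 hosts.
        print(fat_tree_sizes(4))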

  11. CFD Predictions for Transonic Performance of the ERA Hybrid Wing-Body Configuration

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Luckring, James M.; McMillin, S. Naomi; Flamm, Jeffrey D.; Roman, Dino

    2016-01-01

    A computational study was performed for a Hybrid Wing Body configuration that was focused on transonic cruise performance conditions. In the absence of experimental data, two fully independent computational fluid dynamics analyses were conducted to add confidence to the estimated transonic performance predictions. The primary analysis was performed by Boeing with the structured overset-mesh code OVERFLOW. The secondary analysis was performed by NASA Langley Research Center with the unstructured-mesh code USM3D. Both analyses were performed at full-scale flight conditions and included three configurations customary to drag buildup and interference analysis: a powered complete configuration, the configuration with the nacelle/pylon removed, and the powered nacelle in isolation. The results in this paper are focused primarily on transonic performance up to cruise and through drag rise. Comparisons between the CFD results were very good despite some minor geometric differences in the two analyses.

  12. Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhai, Han; Zhang, Hongyan; Zhang, Liangpei; Li, Pingxiang

    2016-10-01

    Considering the inevitable obstacles faced by the pixel-based clustering methods, such as salt-and-pepper noise, high computational complexity, and the lack of spatial information, a reweighted mass center based object-oriented sparse subspace clustering (RMC-OOSSC) algorithm for hyperspectral images (HSIs) is proposed. First, the mean-shift segmentation method is utilized to oversegment the HSI to obtain meaningful objects. Second, a distance reweighted mass center learning model is presented to extract the representative and discriminative features for each object. Third, assuming that all the objects are sampled from a union of subspaces, it is natural to apply the SSC algorithm to the HSI. Faced with the high correlation among the hyperspectral objects, a weighting scheme is adopted to ensure that the highly correlated objects are preferred in the procedure of sparse representation, to reduce the representation errors. Two widely used hyperspectral datasets were utilized to test the performance of the proposed RMC-OOSSC algorithm, obtaining high clustering accuracies (overall accuracy) of 71.98% and 89.57%, respectively. The experimental results show that the proposed method clearly improves the clustering performance with respect to the other state-of-the-art clustering methods, and it significantly reduces the computational time.

  13. A novel method to measure femoral component migration by computed tomography: a cadaver study.

    PubMed

    Boettner, Friedrich; Sculco, Peter; Lipman, Joseph; Renner, Lisa; Faschingbauer, Martin

    2016-06-01

    Radiostereometric analysis (RSA) is the most accurate technique to measure implant migration. However, it requires special equipment, technical expertise and analysis software and has not gained wide acceptance. The current paper analyzes a novel method to measure implant migration utilizing widely available computed tomography (CT). Three uncemented total hip replacements were performed in three human cadavers and six tantalum beads were inserted into the femoral bone similar to RSA. Six different 28 mm heads (-3, 0, 2.5, 5.0, 7.5 and 10 mm) were added to simulate five reproducible translations (maximum total point migration) of the center of the head. Implant migration was measured with 3-D analysis software (Geomagic Studio 7). Repeat manual reconstructions of the center of the head were performed by two investigators to determine repeatability and accuracy. The accuracy of measurements between the centers of two head sizes was 0.11 mm with a 95% CI of 0.22 mm. The intra-observer repeatability was 0.13 mm (95% CI 0.25 mm). The inter-rater reliability was 0.943. CT-based measurements of head displacement in a cadaver model were highly accurate and reproducible.
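
    One common way to locate a head center from CT surface points is a linear least-squares sphere fit; the sketch below illustrates that geometric idea only and is not the authors' Geomagic-based workflow (the point files are placeholders).

        # Fit a sphere center c to surface points p by solving |p|^2 = 2 c.p + d.
        import numpy as np

        def fit_sphere_center(points: np.ndarray) -> np.ndarray:
            """points: (N, 3) array of head-surface coordinates in mm."""
            A = np.hstack([2.0 * points, np.ones((len(points), 1))])
            b = np.sum(points ** 2, axis=1)
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x[:3]                       # (cx, cy, cz); x[3] = r^2 - |c|^2

        # Migration between two time points = distance between the fitted centers.
        c0 = fit_sphere_center(np.loadtxt("head_points_t0.txt"))
        c1 = fit_sphere_center(np.loadtxt("head_points_t1.txt"))
        print("head-center migration (mm):", np.linalg.norm(c1 - c0))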

  14. The Development of University Computing in Sweden 1965-1985

    NASA Astrophysics Data System (ADS)

    Dahlstrand, Ingemar

    In 1965-70 the government agency, Statskontoret, set up five university computing centers, as service bureaux financed by grants earmarked for computer use. The centers were well equipped and staffed and caused a surge in computer use. When the yearly flow of grant money stagnated at 25 million Swedish crowns, the centers had to find external income to survive and acquire time-sharing. But the charging system led to the computers not being fully used. The computer scientists lacked equipment for laboratory use. The centers were decentralized and the earmarking abolished. Eventually they got new tasks like running computers owned by the departments, and serving the university administration.

  15. A study of workstation computational performance for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Cleveland, Jeff I., II

    1995-01-01

    With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.

  16. Computational Characterization of Electromagnetic Field Propagation in Complex Structures

    DTIC Science & Technology

    1998-04-10

    "Computational characterization of electromagnetic field propagation in complex structures," DAAH01-91-D-ROOS D.O. 59. Dr. Michael Scalora (Quantum Optics Group) performed the work at the ...Development, and Engineering Center, Bldg. 7804, Room 242, Redstone Arsenal, Alabama 35898-5248, USA. Publications scheduled to appear include: (1) M. Scalora, J.P. Dowling, A.S. Manka, C.M. Bowden, and J.W. Haus, "Pulse Propagation Near Highly Reflective..."

  17. Lewis Structures Technology, 1988. Volume 1: Structural Dynamics

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The specific purpose of the symposium was to familiarize the engineering structures community with the depth and range of research performed by the Structures Division of the Lewis Research Center and its academic and industrial partners. Sessions covered vibration control, fracture mechanics, ceramic component reliability, parallel computing, nondestructive testing, dynamical systems, fatigue and damage, wind turbines, hot section technology, structural mechanics codes, computational methods for dynamics, structural optimization, and applications of structural dynamics.

  18. Anisotropic Effects on Constitutive Model Parameters of Aluminum Alloys

    DTIC Science & Technology

    2012-01-01

    These model constants are required input to computer codes (LS-DYNA, DYNA3D, or SPH) to accurately simulate fragment impact on structural components made of high... different temperatures. Performing organization: Naval Surface Warfare Center, 4104 Evans Way, Suite 102, Indian Head, MD 20640.

  19. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.

  20. High Performance Computing and Cutting-Edge Analysis Can Open New Realms

    Science.gov Websites

    March 1, 2018. Two people looking at 3D interactive graphical data in the Visualization Center... capabilities to visualize complex, 3D images of the wakes from multiple wind turbines so that we can better

  1. High-Performance Computing Unlocks Innovation at NREL - Video Text Version

    Science.gov Websites

    scales, data visualizations and large-scale modeling provide insights and test new ideas. But this type most energy-efficient data center in the world. NREL and Hewlett-Packard won an R&D 100 award-the

  2. Galen Maclaurin | NREL

    Science.gov Websites

    Scientific programming and high performance computing Research Interests Wind and solar resource assessment , Department of Geography and Environmental Sciences, Denver, CO Research Assistant, National Center for Atmospheric Research (NCAR), Boulder, CO Graduate Instructor and Research Assistant, University of Colorado

  3. Flow of a Gas Turbine Engine Low-Pressure Subsystem Simulated

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    1997-01-01

    The NASA Lewis Research Center is managing a task to numerically simulate overnight, on a parallel computing testbed, the aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The model solves the three-dimensional Navier- Stokes flow equations through all the components within the LPS, as well as the external flow around the engine nacelle. The LPS modeling task is being performed by Allison Engine Company under the Small Engine Technology contract. The large computer simulation was evaluated on networked computer systems using 8, 16, and 32 processors, with the parallel computing efficiency reaching 75 percent when 16 processors were used.
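
    For context on the quoted figure, parallel efficiency is simply speedup divided by processor count; a toy calculation with made-up timings is shown below.

        # Parallel efficiency = (T_serial / T_parallel) / N_processors.
        def parallel_efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
            return (t_serial / t_parallel) / n_procs

        t1, t16 = 48.0, 4.0          # wall-clock hours, illustrative values only
        print(f"speedup: {t1 / t16:.1f}x, efficiency: {parallel_efficiency(t1, t16, 16):.0%}")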

  4. Using 3D infrared imaging to calibrate and refine computational fluid dynamic modeling for large computer and data centers

    NASA Astrophysics Data System (ADS)

    Stockton, Gregory R.

    2011-05-01

    Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of data computation in a computer center is becoming limited by the amount of available power, space and cooling capacity at some data centers. Tens of millions of dollars and megawatts of power are being annually spent to keep data centers cool. The cooling and air flows dynamically change away from any predicted 3-D computational fluid dynamic modeling during construction and as time goes by, and the efficiency and effectiveness of the actual cooling rapidly departs even farther from predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and make appropriate corrections and repairs, the required power for data centers can be dramatically reduced which reduces costs and also improves reliability.

  5. Research and educational initiatives at the Syracuse University Center for Hypersonics

    NASA Technical Reports Server (NTRS)

    Spina, E.; Lagraff, J.; Davidson, B.; Bogucz, E.; Dang, T.

    1995-01-01

    The Department of Mechanical, Aerospace, and Manufacturing Engineering and the Northeast Parallel Architectures Center of Syracuse University have been funded by NASA to establish a program to educate young engineers in the hypersonic disciplines. This goal is being achieved through a comprehensive five-year program that includes elements of undergraduate instruction, advanced graduate coursework, undergraduate research, and leading-edge hypersonics research. The research foci of the Syracuse Center for Hypersonics are threefold: high-temperature composite materials, measurements in turbulent hypersonic flows, and the application of high-performance computing to hypersonic fluid dynamics.

  6. High-Performance Analysis of Filtered Semantic Graphs

    DTIC Science & Technology

    2012-05-06

    ...observation that explains why SEJITS+KDT performance is so close to CombBLAS performance in practice (as shown later in Section 7) even though its in-core... NEC, Nokia, NVIDIA, Oracle, and Samsung. This research used resources of the National Energy Research Scientific Computing Center, which is...

  7. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.

  8. Future applications of artificial intelligence to Mission Control Centers

    NASA Technical Reports Server (NTRS)

    Friedland, Peter

    1991-01-01

    Future applications of artificial intelligence to Mission Control Centers are presented in the form of viewgraphs. The following subject areas are covered: basic objectives of the NASA-wide AI program; inhouse research program; constraint-based scheduling; learning and performance improvement for scheduling; GEMPLAN multi-agent planner; planning, scheduling, and control; Bayesian learning; efficient learning algorithms; ICARUS (an integrated architecture for learning); design knowledge acquisition and retention; computer-integrated documentation; and some speculation on future applications.

  9. A new Fourier transform based CBIR scheme for mammographic mass classification: a preliminary invariance assessment

    NASA Astrophysics Data System (ADS)

    Gundreddy, Rohith Reddy; Tan, Maxine; Qui, Yuchen; Zheng, Bin

    2015-03-01

    The purpose of this study is to develop and test a new content-based image retrieval (CBIR) scheme that achieves higher reproducibility when it is implemented in an interactive computer-aided diagnosis (CAD) system without significantly reducing lesion classification performance. This is a new Fourier transform based CBIR algorithm that determines image similarity of two regions of interest (ROIs) based on the difference of the average regional image pixel value distribution in the two Fourier transform mapped images under comparison. A reference image database involving 227 ROIs depicting verified soft-tissue breast lesions was used. For each testing ROI, the queried lesion center was systematically shifted from 10 to 50 pixels to simulate inter-user variation in querying the suspicious lesion center when using an interactive CAD system. The lesion classification performance and reproducibility as the queried lesion center shifted were assessed and compared among the three CBIR schemes based on Fourier transform, mutual information and Pearson correlation. Each CBIR scheme retrieved the 10 most similar reference ROIs and computed a likelihood score of the queried ROI depicting a malignant lesion. The experimental results showed that the three CBIR schemes yielded very comparable lesion classification performance as measured by the areas under ROC curves, with p-values greater than 0.498. However, the CBIR scheme using the Fourier transform yielded the highest invariance to both queried lesion center shift and lesion size change. This study demonstrated the feasibility of improving the robustness of interactive CAD systems by adding a new Fourier transform based image feature to CBIR schemes.
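
    A minimal sketch of the Fourier-domain comparison idea described above, using radially averaged FFT magnitude bands as the regional pixel-value summary. The band count, the similarity mapping, and the random test arrays are assumptions for illustration, not the authors' exact algorithm.

        # Compare two ROIs by the difference of their average FFT magnitude per radial band.
        import numpy as np

        def fft_band_profile(roi: np.ndarray, n_bands: int = 8) -> np.ndarray:
            mag = np.abs(np.fft.fftshift(np.fft.fft2(roi)))
            cy, cx = np.array(mag.shape) // 2
            yy, xx = np.indices(mag.shape)
            r = np.hypot(yy - cy, xx - cx)
            edges = np.linspace(0.0, r.max() + 1.0, n_bands + 1)
            return np.array([mag[(r >= lo) & (r < hi)].mean()
                             for lo, hi in zip(edges[:-1], edges[1:])])

        def similarity(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
            diff = np.abs(fft_band_profile(roi_a) - fft_band_profile(roi_b)).sum()
            return 1.0 / (1.0 + diff)          # larger means more similar

        a, b = np.random.rand(64, 64), np.random.rand(64, 64)   # stand-ins for real ROIs
        print("similarity score:", similarity(a, b))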

  10. Harnessing the power of emerging petascale platforms

    NASA Astrophysics Data System (ADS)

    Mellor-Crummey, John

    2007-07-01

    As part of the US Department of Energy's Scientific Discovery through Advanced Computing (SciDAC-2) program, science teams are tackling problems that require computational simulation and modeling at the petascale. A grand challenge for computer science is to develop software technology that makes it easier to harness the power of these systems to aid scientific discovery. As part of its activities, the SciDAC-2 Center for Scalable Application Development Software (CScADS) is building open source software tools to support efficient scientific computing on the emerging leadership-class platforms. In this paper, we describe two tools for performance analysis and tuning that are being developed as part of CScADS: a tool for analyzing scalability and performance, and a tool for optimizing loop nests for better node performance. We motivate these tools by showing how they apply to S3D, a turbulent combustion code under development at Sandia National Laboratory. For S3D, our node performance analysis tool helped uncover several performance bottlenecks. Using our loop nest optimization tool, we transformed S3D's most costly loop nest to reduce execution time by a factor of 2.94 for a processor working on a 50^3 domain.

  11. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth Systems Prediction Capability Becomes Operational

    NASA Astrophysics Data System (ADS)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and passing connecting data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the Navy's Hybrid Coordinate Ocean Model (HYCOM) scalability - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology, Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will provide the DoD with future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how DoD's HPCMP will ensure N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.

  12. EHR use and patient satisfaction: What we learned.

    PubMed

    Farber, Neil J; Liu, Lin; Chen, Yunan; Calvitti, Alan; Street, Richard L; Zuest, Danielle; Bell, Kristin; Gabuzda, Mark; Gray, Barbara; Ashfaq, Shazia; Agha, Zia

    2015-11-01

    Few studies have quantitatively examined the degree to which the use of the computer affects patients' satisfaction with the clinician and the quality of the visit. We conducted a study to examine this association. Twenty-three clinicians (21 internal medicine physicians, 2 nurse practitioners) were recruited from 4 Veterans Affairs Medical Center (VAMC) clinics located in San Diego, Calif. Five to 6 patients for most clinicians (one patient each for 2 of the clinicians) were recruited to participate in a study of patient-physician communication. The clinicians' computer use and the patient-clinician interactions in the exam room were captured in real time via video recordings of the interactions and the computer screen, and through the use of the Morae usability testing software system, which recorded clinician clicks and scrolls on the computer. After the visit, patients were asked to complete a satisfaction survey. The final sample consisted of 126 consultations. Total patient satisfaction (beta=0.014; P=.027) and patient satisfaction with patient-centered communication (beta=0.02; P=.02) were significantly associated with higher clinician “gaze time” at the patient. A higher percentage of gaze time during a visit (controlling for the length of the visit) was significantly associated with greater satisfaction with patient-centered communication (beta=0.628; P=.033). Higher clinician gaze time at the patient predicted greater patient satisfaction. This suggests that clinicians would be well served to refine their multitasking skills so that they communicate in a patient-centered manner while performing necessary computer-related tasks. These findings also have important implications for clinical training with respect to using an electronic health record (EHR) system in ways that do not impede the one-on-one conversation between clinician and patient.

  13. Gpu Implementation of a Viscous Flow Solver on Unstructured Grids

    NASA Astrophysics Data System (ADS)

    Xu, Tianhao; Chen, Long

    2016-06-01

    Graphics processing units have gained popularity in scientific computing over the past several years due to their outstanding parallel computing capability. Computational fluid dynamics applications involve large amounts of calculation, so a recent GPU card is preferable, since its peak computing performance and memory bandwidth are much better than those of a contemporary high-end CPU. We herein focus on the detailed implementation of our GPU-targeted Reynolds-averaged Navier-Stokes equations solver based on the finite-volume method. The solver employs a vertex-centered scheme on unstructured grids so that it is capable of handling complex topologies. Multiple optimizations are carried out to improve the memory access performance and kernel utilization. Both steady and unsteady flow simulation cases are carried out using an explicit Runge-Kutta scheme. The solver with GPU acceleration in this paper is demonstrated to have competitive advantages over the CPU-targeted one.

  14. Faster than Real-Time Dynamic Simulation for Large-Size Power System with Detailed Dynamic Models using High-Performance Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Jin, Shuangshuang; Chen, Yousu

    This paper presents a faster-than-real-time dynamic simulation software package that is designed for large-size power system dynamic simulation. It was developed on the GridPACKTM high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.

  15. Final Report for DOE Award ER25756

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kesselman, Carl

    2014-11-17

    The SciDAC-funded Center for Enabling Distributed Petascale Science (CEDPS) was established to address technical challenges that arise due to the frequent geographic distribution of data producers (in particular, supercomputers and scientific instruments) and data consumers (people and computers) within the DOE laboratory system. Its goal is to produce technical innovations that meet DOE end-user needs for (a) rapid and dependable placement of large quantities of data within a distributed high-performance environment, and (b) the convenient construction of scalable science services that provide for the reliable and high-performance processing of computation and data analysis requests from many remote clients. The Center is also addressing (c) the important problem of troubleshooting these and other related ultra-high-performance distributed activities from the perspective of both performance and functionality.

  16. Side impact test and analyses of a DOT-111 tank car : final report.

    DOT National Transportation Integrated Search

    2015-10-01

    Transportation Technology Center, Inc. conducted a side impact test on a DOT-111 tank car to evaluate the performance of the : tank car under dynamic impact conditions and to provide data for the verification and refinement of a computational model. ...

  17. MONITOR THE PHOTOVOLTAIC (PV) SYSTEM ON THE NCC ROOFTOP

    EPA Science Inventory

    This study will investigate the pollution emission reduction and demand-side management potential of a 100 kW PV system located on the roof of the National Computer Center (NCC). Standardized instrumentation to measure meteorological and PV system performance variables will b...

  18. I/O Router Placement and Fine-Grained Routing on Titan to Support Spider II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezell, Matthew A; Dillow, David; Oral, H Sarp

    2014-01-01

    The Oak Ridge Leadership Computing Facility (OLCF) introduced the concept of Fine-Grained Routing in 2008 to improve I/O performance between the Jaguar supercomputer and Spider, OLCF's center-wide Lustre file system. Fine-grained routing organizes I/O paths to minimize congestion. Jaguar has since been upgraded to Titan, providing more than a ten-fold improvement in peak performance. To support the center's increased computational capacity and I/O demand, the Spider file system has been replaced with Spider II. Building on the lessons learned from Spider, an improved method for placing LNET routers was developed and implemented for Spider II. The fine-grained routing scripts and configuration have been updated to provide additional optimizations and better match the system setup. This paper presents a brief history of fine-grained routing at OLCF, an introduction to the architectures of Titan and Spider II, methods for placing routers in Titan, and details about the fine-grained routing configuration.

  19. Workforce Retention Study in Support of the U.S. Army Aberdeen Test Center Human Capital Management Strategy

    DTIC Science & Technology

    2016-09-01

    Sciences Group 6%; 1550s Computer Scientists Group 5%; Other 1500s ORSA, Mathematics, & Statistics Group 3%; 1600s Equipment & Facilities Group 4... Employee removal based on misconduct, delinquency, suitability, unsatisfactory performance, or failure to qualify for conversion to a career appointment... average of 10.4% in many areas, but over double the average for the 1550s (Computer Scientists) and other 1500s (ORSA, Mathematics, and Statistics). Also

  20. Research in the Aloha system

    NASA Technical Reports Server (NTRS)

    Abramson, N.

    1974-01-01

    The Aloha system was studied and developed and extended to advanced forms of computer communications networks. Theoretical and simulation studies of Aloha type radio channels for use in packet switched communications networks were performed. Improved versions of the Aloha communications techniques and their extensions were tested experimentally. A packet radio repeater suitable for use with the Aloha system operational network was developed. General studies of the organization of multiprocessor systems centered on the development of the BCC 500 computer were concluded.

  1. The Architecture of a Cooperative Respondent (Dissertation Proposal)

    DTIC Science & Technology

    1989-02-01

    Approved for public release; distribution unlimited. Performing organization: Center of Excellence in AI, University of Pennsylvania, Dept. of Computer & Information... Monitoring organization: U.S. Army Research Office.

  2. Evaluation of Management Systems Performance at Navy Regional Data Automation Centers.

    DTIC Science & Technology

    1984-03-01

    Author: Gloria Jean Cummings Scott. ...investment in information resources--people, machines, and technologies. Unit costs of hardware continue to decline [Ref. 1]. Because computer needs

  3. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing

    PubMed Central

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P.; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique. PMID:28085932

  4. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    PubMed

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique.
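
    The DVFS strategy at the heart of the simulator trades frequency and voltage against runtime, since dynamic power scales roughly as C*f*V^2. A toy illustration with invented frequency/voltage steps (not WorkflowSim parameters) follows.

        # Energy of one task at different DVFS operating points (illustrative values).
        C = 1.0e-9                   # effective switched capacitance in farads, assumed

        def task_runtime_energy(cycles: float, freq_hz: float, volt: float):
            runtime = cycles / freq_hz                 # seconds
            power = C * freq_hz * volt ** 2            # dynamic power, watts
            return runtime, power * runtime            # (seconds, joules)

        operating_points = [(2.4e9, 1.20), (1.8e9, 1.05), (1.2e9, 0.90)]   # (f, V) pairs
        for f, v in operating_points:
            t, e = task_runtime_energy(3.6e12, f, v)
            print(f"{f / 1e9:.1f} GHz: {t:.0f} s, {e:.2f} J")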

  5. Survey of Intelligent Computer-Aided Training

    NASA Technical Reports Server (NTRS)

    Loftin, R. B.; Savely, Robert T.

    1992-01-01

    Intelligent Computer-Aided Training (ICAT) systems integrate artificial intelligence and simulation technologies to deliver training for complex, procedural tasks in a distributed, workstation-based environment. Such systems embody both the knowledge of how to perform a task and how to train someone to perform that task. This paper briefly reviews the antecedents of ICAT systems and describes the approach to their creation developed at the NASA Lyndon B. Johnson Space Center. In addition to the general ICAT architecture, specific ICAT applications that have been or are currently under development are discussed. ICAT systems can offer effective solutions to a number of training problems of interest to the aerospace community.

  6. Dynamic provisioning of local and remote compute resources with OpenStack

    NASA Astrophysics Data System (ADS)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte-Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rising complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point-of-entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.
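
    For readers unfamiliar with what such provisioning looks like in practice, a hedged sketch using the openstacksdk Python library follows. This is not the EKP production tooling; the cloud name, image, flavor and network identifiers are placeholders that would normally come from a clouds.yaml configuration.

      # Hedged sketch: boot extra worker VMs in a private OpenStack cloud.
      import openstack

      conn = openstack.connect(cloud="ekp-private-cloud")   # hypothetical clouds.yaml entry

      def provision_workers(count, image_id, flavor_id, network_id):
          """Launch `count` identical worker VMs and wait until each is ACTIVE."""
          servers = []
          for i in range(count):
              server = conn.compute.create_server(
                  name=f"hep-worker-{i:03d}",
                  image_id=image_id,
                  flavor_id=flavor_id,
                  networks=[{"uuid": network_id}],
              )
              servers.append(conn.compute.wait_for_server(server))
          return servers

      # workers = provision_workers(4, IMAGE_ID, FLAVOR_ID, NETWORK_ID)  # IDs are placeholders

    In the scheme the abstract describes, nodes booted this way would then join the batch system as ordinary worker nodes, making the extra capacity transparent to users.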

  7. Final Report for "Implementation and Evaluation of Multigrid Linear Solvers into Extended Magnetohydrodynamic Codes for Petascale Computing"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srinath Vadlamani; Scott Kruger; Travis Austin

    Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel processor solvers to date are multigrid solvers. Applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising solution to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL via PETSc of the DOE SciDAC TOPS for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC. We implemented the multigrid solvers on the fusion test problem that allows for real matrix systems with success, and in the process learned about the details of NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow for efficient usage of future petascale computers at the National Leadership Facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.

  8. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Allcock, William; Beggio, Chris

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA, at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  9. Climate Data Assimilation on a Massively Parallel Supercomputer

    NASA Technical Reports Server (NTRS)

    Ding, Hong Q.; Ferraro, Robert D.

    1996-01-01

    We have designed and implemented a set of highly efficient and highly scalable algorithms for an unstructured computational package, the PSAS data assimilation package, as demonstrated by detailed performance analysis of systematic runs on up to 512 nodes of an Intel Paragon. The preconditioned Conjugate Gradient solver achieves a sustained 18 Gflops performance. Consequently, we achieve an unprecedented 100-fold reduction in time to solution on the Intel Paragon over a single head of a Cray C90. This not only exceeds the daily performance requirement of the Data Assimilation Office at NASA's Goddard Space Flight Center, but also makes it possible to explore much larger and more challenging data assimilation problems, which are unthinkable on a traditional computer platform such as the Cray C90.
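
    The solver named above is, at its core, the textbook preconditioned conjugate gradient method. A generic Python/NumPy sketch with a simple Jacobi (diagonal) preconditioner is shown below; it is not the PSAS implementation, which uses problem-specific preconditioning and massive parallelism.

      # Generic preconditioned conjugate gradient for symmetric positive-definite A.
      import numpy as np

      def pcg(A, b, tol=1e-8, max_iter=1000):
          M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      print(pcg(A, np.array([1.0, 2.0])))   # expect roughly [0.0909, 0.6364]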

  10. A Distributed Computing Network for Real-Time Systems.

    DTIC Science & Technology

    1980-11-03

    [Only OCR fragments of the report documentation page survive for this record. Recoverable details: "A Distributed Computing Network for Real-Time Systems," Gordon E. Morson, Naval Underwater Systems Center, Newport, RI, Technical Document TD 5932, November 1980.]

  11. Antenna pattern study, task 2

    NASA Technical Reports Server (NTRS)

    Harper, Warren

    1989-01-01

    Two electromagnetic scattering codes, NEC-BSC and ESP3, were delivered and installed on a NASA VAX computer for use by Marshall Space Flight Center antenna design personnel. The existing codes and certain supplementary software were updated; the codes were installed on a computer to be delivered to the customer; capability was provided for graphic display of the data computed with the codes; and the customer was assisted in solving specific problems that demonstrate the use of the codes. With the exception of one code revision, all of these tasks were performed.

  12. Academic physicians' assessment of the effects of computers on health care.

    PubMed Central

    Detmer, W. M.; Friedman, C. P.

    1994-01-01

    We assessed the attitudes of academic physicians towards computers in health care at two academic medical centers that are in the early stages of clinical information-system deployment. We distributed a 4-page questionnaire to 470 subjects, and a total of 272 physicians (58%) responded. Our results show that respondents use computers frequently, primarily to perform academic-oriented tasks as opposed to clinical tasks. Overall, respondents viewed computers as being slightly beneficial to health care. They perceive self-education and access to up-to-date information as the most beneficial aspects of computers and are most concerned about privacy issues and the effect of computers on the doctor-patient relationship. Physicians with prior computer training and greater knowledge of informatics concepts had more favorable attitudes towards computers in health care. We suggest that negative attitudes towards computers can be addressed by careful system design as well as targeted educational activities. PMID:7949990

  13. A Low Cost Micro-Computer Based Local Area Network for Medical Office and Medical Center Automation

    PubMed Central

    Epstein, Mel H.; Epstein, Lynn H.; Emerson, Ron G.

    1984-01-01

    A Low Cost Micro-computer based Local Area Network for medical office automation is described which makes use of an array of multiple and different personal computers interconnected by a local area network. Each computer on the network functions as a fully potent workstation for data entry and report generation. The network allows each workstation complete access to the entire database. Additionally, designated computers may serve as access ports for remote terminals. Through “Gateways” the network may serve as a front end for a large mainframe, or may interface with another network. The system provides for the medical office environment the expandability and flexibility of a multi-terminal mainframe system at a far lower cost without sacrifice of performance.

  14. Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility

    NASA Technical Reports Server (NTRS)

    Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer

    2009-01-01

    Johnson Space Center's Mission Control Center is a space vehicle, space program agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-90's. In an effort to streamline the support costs of the mission critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The general push in the IT industry has been trending towards a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing, thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization while expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits and difficulties that a migration to cloud-based computing philosophies has uncovered when compared to the legacy Mission Control Center architecture. The team consists of system and software engineers with extensive experience with the MCC infrastructure and software currently used to support the International Space Station (ISS) and Space Shuttle program (SSP).

  15. Development of a Pamphlet Targeting Computer Workstation Ergonomics

    NASA Technical Reports Server (NTRS)

    Faraci, Jennifer S.

    1997-01-01

    With the increased use of computers throughout Goddard Space Flight Center, the Industrial Hygiene Office (IHO) has observed a growing trend in the number of health complaints attributed to poor computer workstation setup. A majority of the complaints has centered around musculoskeletal symptoms, including numbness, pain, and tingling in the upper extremities, shoulders, and neck. Eye strain and headaches have also been reported. In some cases, these symptoms can lead to chronic conditions such as repetitive strain injuries (RSI's). In an effort to prevent or minimize the frequency of these symptoms among the GSFC population, the IHO conducts individual ergonomic workstation evaluations and ergonomics training classes upon request. Because of the extensive number of computer workstations at GSFC, and the limited amount of manpower which the Industrial Hygiene staff could reasonably allocate to conduct workstation evaluations and employee training, a pamphlet was developed with a two-fold purpose: (1) to educate the GSFC population about the importance of ergonomically-correct computer workstation setup and the potential effects of a poorly configured workstation; and (2) to enable employees to perform a general assessment of their own workstations and make any necessary modifications for proper setup.

  16. EC97-44347-15

    NASA Image and Video Library

    1997-12-11

    This console and its complement of computers, monitors and communications equipment make up the Research Engineering Test Station, the nerve center for an aerodynamics experiment conducted by NASA's Dryden Flight Research Center, Edwards, California. The equipment was installed on a modified Lockheed L-1011 Tristar jetliner operated by Orbital Sciences Corp., of Dulles, Va., for Dryden's Adaptive Performance Optimization project. The experiment sought to improve the efficiency of long-range jetliners by using small movements of the ailerons to improve the aerodynamics of the wing at cruise conditions.

  17. Qubit-qubit interaction in quantum computers: errors and scaling laws

    NASA Astrophysics Data System (ADS)

    Gea-Banacloche, Julio R.

    1998-07-01

    This paper explores the limitations that interaction between the physical qubits making up a quantum computer may impose on the computer's performance. For computers using atoms as qubits, magnetic dipole-dipole interactions are likely to be dominant; various types of errors which they might introduce are considered here. The strength of the interaction may be reduced by increasing the distance between qubits, which in general will make the computer slower. For ion-chain-based quantum computers the slowing down due to this effect is found to be generally more severe than that due to other causes. In particular, this effect alone would be enough to make these systems unacceptably slow for large-scale computation, whether they use the center of mass motion as the 'bus' or whether they do this via an optical cavity mode.
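
    As a rough illustration of the scaling argument (this is not the paper's derivation), the sketch below estimates the phase error accumulated during one gate from a magnetic dipole-dipole coupling that falls off as 1/r^3. The qubit moment, separations and gate time are assumed values chosen only to show the trend with distance.

      # Order-of-magnitude sketch of dipole-dipole-induced phase error per gate.
      import math

      MU0  = 4e-7 * math.pi      # vacuum permeability (T*m/A)
      MU_B = 9.274e-24           # Bohr magneton (J/T), assumed qubit magnetic moment
      HBAR = 1.055e-34           # reduced Planck constant (J*s)

      def phase_error(separation_m, gate_time_s):
          """Rough accumulated phase (rad) from dipole-dipole coupling during one gate."""
          u = MU0 * MU_B ** 2 / (4 * math.pi * separation_m ** 3)   # interaction energy (J)
          return u * gate_time_s / HBAR

      for r_um in (1.0, 5.0, 10.0):
          print(f"r = {r_um:4.1f} um -> phase error ~ {phase_error(r_um * 1e-6, 1e-5):.2e} rad")

    Doubling the separation cuts the error by a factor of eight, but typically also slows the entangling gates, which is the trade-off the paper analyzes.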

  18. Computers in aeronautics and space research at the Lewis Research Center

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This brochure presents a general discussion of the role of computers in aerospace research at NASA's Lewis Research Center (LeRC). Four particular areas of computer applications are addressed: computer modeling and simulation, computer assisted engineering, data acquisition and analysis, and computer controlled testing.

  19. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.

    Cloud computing is a promising technology to manage and improve the utilization of computing center resources in order to deliver various computing and IT services. For the purpose of energy saving there is no need to unnecessarily operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is the significant server setup cost and activation time. For better energy efficiency, a cloud computing system should not react to an instantaneous increase or decrease of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows a number of performance measures to be estimated.
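
    A toy discrete-time sketch of the hysteresis idea follows (it is not the authors' analytical queuing model): servers are switched on when the queue exceeds a high threshold, switched off below a low threshold, and need a non-instantaneous setup period before they can serve. All parameter values are illustrative.

      # Toy simulation of a server pool with hysteresis thresholds and setup delay.
      import random

      def simulate(steps=50_000, arrival_p=0.6, service_p=0.2,
                   high=15, low=3, setup_ticks=20, max_servers=8):
          queue, active = 0, 1
          warming = []                               # countdowns for servers still booting
          total_queue = 0
          for _ in range(steps):
              if random.random() < arrival_p:        # Bernoulli arrivals
                  queue += 1
              warming = [t - 1 for t in warming]     # advance setup of booting servers
              active += sum(1 for t in warming if t == 0)
              warming = [t for t in warming if t > 0]
              busy = min(active, queue)              # each busy server finishes w.p. service_p
              queue -= sum(1 for _ in range(busy) if random.random() < service_p)
              if queue > high and active + len(warming) < max_servers:
                  warming.append(setup_ticks)        # switch a server on (not instantaneous)
              elif queue < low and active > 1:
                  active -= 1                        # switch a server off
              total_queue += queue
          return total_queue / steps, active

      mean_queue, servers_on = simulate()
      print(f"mean queue length ~ {mean_queue:.1f}, servers on at end: {servers_on}")

    The gap between the two thresholds is what prevents the pool from thrashing on short load spikes, which is the behavior the paper models analytically.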

  20. Optimizing Radiation Doses for Computed Tomography Across Institutions: Dose Auditing and Best Practices.

    PubMed

    Demb, Joshua; Chu, Philip; Nelson, Thomas; Hall, David; Seibert, Anthony; Lamba, Ramit; Boone, John; Krishnam, Mayil; Cagnon, Christopher; Bostani, Maryam; Gould, Robert; Miglioretti, Diana; Smith-Bindman, Rebecca

    2017-06-01

    Radiation doses for computed tomography (CT) vary substantially across institutions. To assess the impact of institutional-level audit and collaborative efforts to share best practices on CT radiation doses across 5 University of California (UC) medical centers. In this before/after interventional study, we prospectively collected radiation dose metrics on all diagnostic CT examinations performed between October 1, 2013, and December 31, 2014, at 5 medical centers. Using data from January to March (baseline), we created audit reports detailing the distribution of radiation dose metrics for chest, abdomen, and head CT scans. In April, we shared reports with the medical centers and invited radiology professionals from the centers to a 1.5-day in-person meeting to review reports and share best practices. We calculated changes in mean effective dose 12 weeks before and after the audits and meeting, excluding a 12-week implementation period when medical centers could make changes. We compared proportions of examinations exceeding previously published benchmarks at baseline and following the audit and meeting, and calculated changes in proportion of examinations exceeding benchmarks. Of 158 274 diagnostic CT scans performed in the study period, 29 594 CT scans were performed in the 3 months before and 32 839 CT scans were performed 12 to 24 weeks after the audit and meeting. Reductions in mean effective dose were considerable for chest and abdomen. Mean effective dose for chest CT decreased from 13.2 to 10.7 mSv (18.9% reduction; 95% CI, 18.0%-19.8%). Reductions at individual medical centers ranged from 3.8% to 23.5%. The mean effective dose for abdominal CT decreased from 20.0 to 15.0 mSv (25.0% reduction; 95% CI, 24.3%-25.8%). Reductions at individual medical centers ranged from 10.8% to 34.7%. The number of CT scans that had an effective dose measurement that exceeded benchmarks was reduced considerably by 48% and 54% for chest and abdomen, respectively. After the audit and meeting, head CT doses varied less, although some institutions increased and some decreased mean head CT doses and the proportion above benchmarks. Reviewing institutional doses and sharing dose-optimization best practices resulted in lower radiation doses for chest and abdominal CT and more consistent doses for head CT.

  1. A proto-Data Processing Center for LISA

    NASA Astrophysics Data System (ADS)

    Cavet, Cécile; Petiteau, Antoine; Le Jeune, Maude; Plagnol, Eric; Marin-Martholaz, Etienne; Bayle, Jean-Baptiste

    2017-05-01

    The LISA project preparation requires the study and definition of a new data analysis framework, capable of dealing with highly heterogeneous CPU needs and of exploiting emerging information technologies. In this context, a prototype of the mission’s Data Processing Center (DPC) has been initiated. The DPC is designed to efficiently manage computing constraints and to offer a common infrastructure where the whole collaboration can contribute to development work. Several tools such as continuous integration (CI) have already been delivered to the collaboration and are presently used for simulations and performance studies. This article presents the progress made regarding this collaborative environment and also discusses the possible next steps towards an on-demand computing infrastructure. This activity is supported by CNES as part of the French contribution to LISA.

  2. Methodical and technological aspects of creation of interactive computer learning systems

    NASA Astrophysics Data System (ADS)

    Vishtak, N. M.; Frolov, D. A.

    2017-01-01

    The article presents a methodology for the development of an interactive computer training system for the training of power plant personnel. The methods used in the work are a generalization of the content of scientific and methodological sources on the use of computer-based training systems in vocational education, methods of system analysis, and methods of structural and object-oriented modeling of information systems. The relevance of developing interactive computer training systems for personnel preparation in educational and training centers is demonstrated. The development stages of computer training systems are identified, and the factors governing efficient use of an interactive computer training system are analysed. An algorithm for the work performed at each development stage of the interactive computer training system is offered, which makes it possible to optimize the time, financial and labor expenditure on the creation of the interactive computer training system.

  3. A Framework for Debugging Geoscience Projects in a High Performance Computing Environment

    NASA Astrophysics Data System (ADS)

    Baxter, C.; Matott, L.

    2012-12-01

    High performance computing (HPC) infrastructure has become ubiquitous in today's world with the emergence of commercial cloud computing and academic supercomputing centers. Teams of geoscientists, hydrologists and engineers can take advantage of this infrastructure to undertake large research projects - for example, linking one or more site-specific environmental models with soft computing algorithms, such as heuristic global search procedures, to perform parameter estimation and predictive uncertainty analysis, and/or design least-cost remediation systems. However, the size, complexity and distributed nature of these projects can make identifying failures in the associated numerical experiments using conventional ad-hoc approaches both time-consuming and ineffective. To address these problems, a multi-tiered debugging framework has been developed. The framework allows for quickly isolating and remedying a number of potential experimental failures, including: failures in the HPC scheduler; bugs in the soft computing code; bugs in the modeling code; and permissions and access control errors. The utility of the framework is demonstrated via application to a series of over 200,000 numerical experiments involving a suite of 5 heuristic global search algorithms and 15 mathematical test functions serving as cheap analogues for the simulation-based optimization of pump-and-treat subsurface remediation systems.
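
    The following hypothetical sketch illustrates the kind of tiered checks such a framework can walk through for a failed experiment. The directory layout, log file name and error keywords are assumptions made for illustration; they are not taken from the authors' tool.

      # Hedged sketch of layered failure triage for a batch of numerical experiments.
      import os

      def diagnose(job_dir, scheduler_exit_code):
          """Classify a failed experiment by checking the tiers in order."""
          if scheduler_exit_code != 0:
              return "scheduler failure (job killed, timed out, or never started)"
          if not os.access(job_dir, os.R_OK | os.W_OK):
              return "permissions/access-control error on the job directory"
          log = os.path.join(job_dir, "optimizer.log")      # assumed log name
          if not os.path.exists(log):
              return "search-algorithm failure (driver produced no log)"
          with open(log) as fh:
              text = fh.read()
          if "NaN" in text or "Traceback" in text:          # assumed failure markers
              return "modeling-code failure (crash or non-physical output)"
          return "no failure detected"

      # print(diagnose("/scratch/experiments/run_00042", 0))  # path is a placeholder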

  4. Turbulent Bubbly Flow in a Vertical Pipe Computed By an Eddy-Resolving Reynolds Stress Model

    DTIC Science & Technology

    2014-09-19

    [Only extraction fragments of this report survive in the record. Recoverable details: turbulent bubbly flows, encountered in many industrially relevant applications such as chemical engineering, were computed with the OpenFOAM-2.2.2 code using a cell-center-based finite volume method on an unstructured numerical grid; the mean Courant number was kept below 0.4; the eddy-resolving Reynolds stress turbulence models were implemented in OpenFOAM's twoPhaseEulerFoam solver.]

  5. Sandia QIS Capabilities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, Richard P.

    2017-07-01

    Sandia National Laboratories has developed a broad set of capabilities in quantum information science (QIS), including elements of quantum computing, quantum communications, and quantum sensing. The Sandia QIS program is built atop unique DOE investments at the laboratories, including the MESA microelectronics fabrication facility, the Center for Integrated Nanotechnologies (CINT) facilities (joint with LANL), the Ion Beam Laboratory, and ASC High Performance Computing (HPC) facilities. Sandia has invested $75 M of LDRD funding over 12 years to develop unique, differentiating capabilities that leverage these DOE infrastructure investments.

  6. Computational Materials: Modeling and Simulation of Nanostructured Materials and Systems

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Hinkley, Jeffrey A.

    2003-01-01

    The paper provides details on the structure and implementation of the Computational Materials program at the NASA Langley Research Center. Examples are given that illustrate the suggested approaches to predicting the behavior and influencing the design of nanostructured materials such as high-performance polymers, composites, and nanotube-reinforced polymers. Primary simulation and measurement methods applicable to multi-scale modeling are outlined. Key challenges including verification and validation of models are highlighted and discussed within the context of NASA's broad mission objectives.

  7. User-Centered Computer Aided Language Learning

    ERIC Educational Resources Information Center

    Zaphiris, Panayiotis, Ed.; Zacharia, Giorgos, Ed.

    2006-01-01

    In the field of computer aided language learning (CALL), there is a need for emphasizing the importance of the user. "User-Centered Computer Aided Language Learning" presents methodologies, strategies, and design approaches for building interfaces for a user-centered CALL environment, creating a deeper understanding of the opportunities and…

  8. CFD Modeling Activities at the NASA Stennis Space Center

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel

    2007-01-01

    A viewgraph presentation on NASA Stennis Space Center's Computational Fluid Dynamics (CFD) Modeling activities is shown. The topics include: 1) Overview of NASA Stennis Space Center; 2) Role of Computational Modeling at NASA-SSC; 3) Computational Modeling Tools and Resources; and 4) CFD Modeling Applications.

  9. Breaking with tradition: A scoping meta-analysis analyzing the effects of student-centered learning and computer-aided instruction on student performance in anatomy.

    PubMed

    Wilson, Adam B; Brown, Kirsten M; Misch, Jonathan; Miller, Corinne H; Klein, Barbie A; Taylor, Melissa A; Goodwin, Michael; Boyle, Eve K; Hoppe, Chantal; Lazarus, Michelle D

    2018-04-16

    While prior meta-analyses in anatomy education have explored the effects of laboratory pedagogies and histology media on learner performance, the effects of student-centered learning (SCL) and computer-aided instruction (CAI) have not been broadly evaluated. This research sought to answer the question, "How effective are student-centered pedagogies and CAI at increasing student knowledge gains in anatomy compared to traditional didactic approaches?" Relevant studies published within the past 51 years were searched using five databases. Predetermined eligibility criteria were applied to the screening of titles and abstracts to discern their appropriateness for study inclusion. A summary effect size was estimated to determine the effects of SCL and CAI on anatomy performance outcomes. A moderator analysis of study features was also performed. Of the 3,035 records screened, 327 underwent full-text review. Seven studies, which comprised 1,564 participants, were included in the SCL analysis. An additional 19 studies analyzed the effects of CAI in the context of 2,570 participants. Upon comparing SCL to traditional instruction, a small positive effect on learner performance was detected (standardized mean difference (SMD) = 0.24; [CI = 0.07, 0.42]; P = 0.006). Likewise, students with CAI exposure moderately outscored those with limited or no access to CAI (SMD = 0.59; [CI = 0.20, 0.98]; P = 0.003). Further analysis of CAI studies identified effects (P ≤ 0.001) for learner population, publication period, interventional approach, and intervention frequency. Overall, learners exposed to SCL and supplemental CAI outperformed their more classically-trained peers as evidenced by increases in short-term knowledge gains.
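
    For reference, the effect-size metric reported above is the standardized mean difference. A small sketch using a pooled standard deviation (the Cohen's d form) is shown below; the numbers in the example call are made up and are not data from the included studies.

      # Standardized mean difference with pooled SD (Cohen's d form).
      import math

      def smd(mean_tx, sd_tx, n_tx, mean_ctl, sd_ctl, n_ctl):
          pooled_sd = math.sqrt(((n_tx - 1) * sd_tx ** 2 + (n_ctl - 1) * sd_ctl ** 2)
                                / (n_tx + n_ctl - 2))
          return (mean_tx - mean_ctl) / pooled_sd

      # hypothetical study: SCL cohort 78 +/- 10 (n=60) vs. lecture cohort 75 +/- 11 (n=55)
      print(round(smd(78, 10, 60, 75, 11, 55), 2))   # ~0.29, a small positive effect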

  10. Embedded 100 Gbps Photonic Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuznia, Charlie

    This innovation to fiber optic component technology increases the performance, reduces the size and reduces the power consumption of optical communications within dense network systems, such as advanced distributed computing systems and data centers. VCSEL technology is enabling short-reach (< 100 m) and >100 Gbps optical interconnections over multi-mode fiber in commercial applications.

  11. Meriden Public Library, Final Performance Report for Library Services and Construction Act (LSCA) Title VI, Library Literacy Program.

    ERIC Educational Resources Information Center

    MacCabe, Bruce

    The Literacy Learning Center Project, a project of the Meriden Public Library (Connecticut), targeted the educationally underserved and functionally illiterate, and involved recruitment, retention, space renovation, coalition building, public awareness, training, basic literacy, collection development, tutoring, computer assisted services, and…

  12. Techniques for Enhancing Web-Based Education.

    ERIC Educational Resources Information Center

    Barbieri, Kathy; Mehringer, Susan

    The Virtual Workshop is a World Wide Web-based set of modules on high performance computing developed at the Cornell Theory Center (CTC) (New York). This approach reaches a large audience, leverages staff effort, and poses challenges for developing interesting presentation techniques. This paper describes the following techniques with their…

  13. Sensory Discrimination, Generalization and Language Training of Autistic Children. Final Report.

    ERIC Educational Resources Information Center

    Blanton, Richard L.; And Others

    The report presents summaries of 11 studies performed on 25-45 autistic students in a residential center to investigate processes of discrimination and response acquisition using automated reinforcement technology and exact timing procedures. The computer operated display and recording system for language and discrimination training is described…

  14. The Practical Obstacles of Data Transfer: Why researchers still love scp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nam, Hai Ah; Hill, Jason J; Parete-Koon, Suzanne T

    The importance of computing facilities is heralded every six months with the announcement of the new Top500 list, showcasing the world's fastest supercomputers. Unfortunately, with great computing capability does not come great long-term data storage capacity, which often means users must move their data to their local site archive, to remote sites where they may be doing future computation or analysis, or back to their home institution, else face the dreaded data purge that most HPC centers employ to keep utilization of large parallel filesystems low to manage performance and capacity. At HPC centers, data transfer is crucial to the scientific workflow and will increase in importance as computing systems grow in size. The Energy Sciences Network (ESnet) recently launched its fifth generation network, a 100 Gbps high-performance, unclassified national network connecting more than 40 DOE research sites to support scientific research and collaboration. Despite the tenfold increase in bandwidth to DOE research sites amenable to multiple data transfer streams and high throughput, in practice, researchers often under-utilize the network and resort to painfully-slow single stream transfer methods such as scp to avoid the complexity of using multiple stream tools such as GridFTP and bbcp, and contend with frustration from the lack of consistency of available tools between sites. In this study we survey and assess the data transfer methods provided at several DOE supported computing facilities, including both leadership-computing facilities, connected through ESnet. We present observed transfer rates, suggested optimizations, and discuss the obstacles the tools must overcome to receive wide-spread adoption over scp.

  15. Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks

    DOE PAGES

    Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott; ...

    2017-08-29

    Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvement in orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.
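
    The baseline such topology-aware methods improve on is the classic progressive-filling computation of max-min fair rates. A generic sketch follows; it is not the authors' fat-tree-specific algorithm and does not exploit topology, which is precisely why it does not scale to very large networks.

      # Generic progressive filling: raise all unfrozen flows equally until a link saturates.
      def max_min_fair(flows, capacity, flows_on_link):
          """flows: flow ids; capacity: dict link -> rate; flows_on_link: dict link -> flow ids."""
          rate = {f: 0.0 for f in flows}
          remaining = dict(capacity)
          unfrozen = set(flows)
          while unfrozen:
              active = {l: [f for f in flows_on_link[l] if f in unfrozen] for l in remaining}
              active = {l: fs for l, fs in active.items() if fs}
              if not active:
                  break
              delta = min(remaining[l] / len(fs) for l, fs in active.items())
              for f in unfrozen:
                  rate[f] += delta                   # raise all unfrozen flows equally
              for l, fs in active.items():
                  remaining[l] -= delta * len(fs)
                  if remaining[l] <= 1e-12:          # link saturated: its flows are frozen
                      unfrozen -= set(fs)
          return rate

      # two flows share link "a" (cap 10); flow 1 also crosses link "b" (cap 3)
      print(max_min_fair([1, 2], {"a": 10.0, "b": 3.0}, {"a": {1, 2}, "b": {1}}))
      # -> {1: 3.0, 2: 7.0}: flow 1 is bottlenecked by link "b", flow 2 takes the rest of "a"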

  16. Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott

    Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvement in orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.

  17. Computing Protein-Protein Association Affinity with Hybrid Steered Molecular Dynamics.

    PubMed

    Rodriguez, Roberto A; Yu, Lili; Chen, Liao Y

    2015-09-08

    Computing protein-protein association affinities is one of the fundamental challenges in computational biophysics/biochemistry. The overwhelming amount of statistics in the phase space of very high dimensions cannot be sufficiently sampled even with today's high-performance computing power. In this article, we extend a potential of mean force (PMF)-based approach, the hybrid steered molecular dynamics (hSMD) approach we developed for ligand-protein binding, to protein-protein association problems. For a protein complex consisting of two protomers, P1 and P2, we choose m (≥3) segments of P1 whose m centers of mass are to be steered in a chosen direction and n (≥3) segments of P2 whose n centers of mass are to be steered in the opposite direction. The coordinates of these m + n centers constitute a phase space of 3(m + n) dimensions (3(m + n)D). All other degrees of freedom of the proteins, ligands, solvents, and solutes are freely subject to the stochastic dynamics of the all-atom model system. Conducting SMD along a line in this phase space, we obtain the 3(m + n)D PMF difference between two chosen states: one single state in the associated state ensemble and one single state in the dissociated state ensemble. This PMF difference is the first of four contributors to the protein-protein association energy. The second contributor is the 3(m + n - 1)D partial partition in the associated state accounting for the rotations and fluctuations of the (m + n - 1) centers while fixing one of the m + n centers of the P1-P2 complex. The two other contributors are the 3(m - 1)D partial partition of P1 and the 3(n - 1)D partial partition of P2 accounting for the rotations and fluctuations of their m - 1 or n - 1 centers while fixing one of the m/n centers of P1/P2 in the dissociated state. Each of these three partial partitions can be factored exactly into a 6D partial partition in multiplication with a remaining factor accounting for the small fluctuations while fixing three of the centers of P1, P2, or the P1-P2 complex, respectively. These small fluctuations can be well-approximated as Gaussian, and every 6D partition can be reduced in an exact manner to three problems of 1D sampling, counting the rotations and fluctuations around one of the centers as being fixed. We implement this hSMD approach to the Ras-RalGDS complex, choosing three centers on RalGDS and three on Ras (m = n = 3). At a computing cost of about 71.6 wall-clock hours using 400 computing cores in parallel, we obtained the association energy, -9.2 ± 1.9 kcal/mol on the basis of CHARMM 36 parameters, which well agrees with the experimental data, -8.4 ± 0.2 kcal/mol.

  18. Performance Analysis of the Unitree Central File

    NASA Technical Reports Server (NTRS)

    Pentakalos, Odysseas I.; Flater, David

    1994-01-01

    This report consists of two parts. The first part briefly comments on the documentation status of two major systems at NASA's Center for Computational Sciences, specifically the Cray C98 and the Convex C3830. The second part describes the work done on improving the performance of file transfers between the Unitree Mass Storage System running on the Convex file server and the users' workstations distributed over a large geographic area.

  19. Python in the NERSC Exascale Science Applications Program for Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronaghi, Zahra; Thomas, Rollin; Deslippe, Jack

    We describe a new effort at the National Energy Research Scientific Computing Center (NERSC) in performance analysis and optimization of scientific Python applications targeting the Intel Xeon Phi (Knights Landing, KNL) many-core architecture. The Python-centered work outlined here is part of a larger effort called the NERSC Exascale Science Applications Program (NESAP) for Data. NESAP for Data focuses on applications that process and analyze high-volume, high-velocity data sets from experimental/observational science (EOS) facilities supported by the US Department of Energy Office of Science. We present three case study applications from NESAP for Data that use Python. These codes vary in terms of “Python purity” from applications developed in pure Python to ones that use Python mainly as a convenience layer for scientists without expertise in lower level programming languages like C, C++ or Fortran. The science case, requirements, constraints, algorithms, and initial performance optimizations for each code are discussed. Our goal with this paper is to contribute to the larger conversation around the role of Python in high-performance computing today and tomorrow, highlighting areas for future work and emerging best practices.
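
    As a generic illustration of the kind of first-step optimization typically applied to such codes (this is not an excerpt from the NESAP applications), the sketch below replaces an interpreted per-element Python loop with a vectorized NumPy expression, which is usually the largest single win on a many-core node like KNL.

      # Replace a pure-Python loop with a vectorized NumPy expression.
      import numpy as np

      def radial_distances_loop(xyz, center):
          out = np.empty(len(xyz))
          for i, p in enumerate(xyz):            # interpreted per-element loop: slow
              out[i] = ((p[0] - center[0]) ** 2 +
                        (p[1] - center[1]) ** 2 +
                        (p[2] - center[2]) ** 2) ** 0.5
          return out

      def radial_distances_vec(xyz, center):
          # one array expression, executed inside NumPy's compiled kernels
          return np.sqrt(((xyz - center) ** 2).sum(axis=1))

      xyz = np.random.rand(1_000_000, 3)
      center = np.array([0.5, 0.5, 0.5])
      assert np.allclose(radial_distances_loop(xyz[:1000], center),
                         radial_distances_vec(xyz[:1000], center))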

  20. Exploring Midwives' Need and Intention to Adopt Electronic Integrated Antenatal Care.

    PubMed

    Markam, Hosizah; Hochheiser, Harry; Kuntoro, Kuntoro; Notobroto, Hari Basuki

    2018-01-01

    Documentation requirements for the Indonesian integrated antenatal care (ANC) program suggest the need for electronic systems to address gaps in existing paper documentation practices. Our goals were to quantify midwives' documentation completeness in a primary healthcare center, understand documentation challenges, develop a tool, and assess intention to use the tool. We analyzed existing ANC records in a primary healthcare center in Bangkalan, East Java, and conducted interviews with stakeholders to understand needs for an electronic system in support of ANC. Development of the web-based Electronic Integrated ANC (e-iANC) system used the System Development Life Cycle method. Training on the use of the system was held in the computer laboratory for 100 midwives chosen from four primary healthcare centers in each of five regions. The Unified Theory of Acceptance and Use of Technology (UTAUT) questionnaire was used to assess their intention to adopt e-iANC. The midwives' intention to adopt e-iANC was significantly influenced by performance expectancy, effort expectancy and facilitating conditions. Age, education level, and computer literacy did not significantly moderate the effects of performance expectancy and effort expectancy on adoption intention. The UTAUT results indicated that the factors that might influence intention to adopt e-iANC are potentially addressable. Results suggest that e-iANC might well be accepted by midwives.

  1. CSI Flight Computer System and experimental test results

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.; Peri, F., Jr.; Schuler, P.

    1993-01-01

    This paper describes the CSI Computer System (CCS) and the experimental tests performed to validate its functionality. This system is comprised of two major components: the space flight qualified Excitation and Damping Subsystem (EDS) which performs controls calculations; and the Remote Interface Unit (RIU) which is used for data acquisition, transmission, and filtering. The flight-like RIU is the interface between the EDS and the sensors and actuators positioned on the particular structure under control. The EDS and RIU communicate over the MIL-STD-1553B, a space flight qualified bus. To test the CCS under realistic conditions, it was connected to the Phase-0 CSI Evolutionary Model (CEM) at NASA Langley Research Center. The following schematic shows how the CCS is connected to the CEM. Various tests were performed which validated the ability of the system to perform control/structures experiments.

  2. V/STOLAND avionics system flight-test data on a UH-1H helicopter

    NASA Technical Reports Server (NTRS)

    Baker, F. A.; Jaynes, D. N.; Corliss, L. D.; Liden, S.; Merrick, R. B.; Dugan, D. C.

    1980-01-01

    The flight-acceptance test results obtained during the acceptance tests of the V/STOLAND (versatile simplex digital avionics system) digital avionics system on a Bell UH-1H helicopter in 1977 at Ames Research Center are presented. The system provides navigation, guidance, control, and display functions for NASA terminal area VTOL research programs and for the Army handling qualities research programs at Ames Research Center. The acceptance test verified system performance and contractual acceptability. The V/STOLAND hardware navigation, guidance, and control laws resident in the digital computers are described. Typical flight-test data are shown and discussed as documentation of the system performance at acceptance from the contractor.

  3. NASA Lighting Research, Test, & Analysis

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2015-01-01

    The Habitability and Human Factors Branch, at Johnson Space Center, in Houston, TX, provides technical guidance for the development of spaceflight lighting requirements, verification of light system performance, analysis of integrated environmental lighting systems, and research of lighting-related human performance issues. The Habitability & Human Factors Lighting Team maintains two physical facilities that are integrated to provide support. The Lighting Environment Test Facility (LETF) provides a controlled darkroom environment for physical verification of lighting systems with photometric and spectrographic measurement systems. The Graphics Research & Analysis Facility (GRAF) maintains the capability for computer-based analysis of operational lighting environments. The combined capabilities of the Lighting Team at Johnson Space Center have been used for a wide range of lighting-related issues.

  4. Real science at the petascale.

    PubMed

    Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V

    2009-06-28

    We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32 768 cores for certain of our codes in the so-called 'capability computing' category as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65 636 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.

  5. Mathematics and Computer Science | Argonne National Laboratory

    Science.gov Websites

    Site navigation lists related centers and programs: Genomics and Systems Biology; LCRC (Laboratory Computing Resource Center); MCSG (Midwest Center for Structural Genomics); NAISE (Northwestern-Argonne Institute of Science & Engineering); SBC (Structural Biology Center).

  6. Computer Assisted Multi-Center Creation of Medical Knowledge Bases

    PubMed Central

    Giuse, Nunzia Bettinsoli; Giuse, Dario A.; Miller, Randolph A.

    1988-01-01

    Computer programs which support different aspects of medical care have been developed in recent years. Their capabilities range from diagnosis to medical imaging, and include hospital management systems and therapy prescription. In spite of their diversity these systems have one commonality: their reliance on a large body of medical knowledge in computer-readable form. This knowledge enables such programs to draw inferences, validate hypotheses, and in general to perform their intended task. As has been clear to developers of such systems, however, the creation and maintenance of medical knowledge bases are very expensive. Practical and economical difficulties encountered during this long-term process have discouraged most attempts. This paper discusses knowledge base creation and maintenance, with special emphasis on medical applications. We first describe the methods currently used and their limitations. We then present our recent work on developing tools and methodologies which will assist in the process of creating a medical knowledge base. We focus, in particular, on the possibility of multi-center creation of the knowledge base.

  7. MC3, Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cawkwell, Marc Jon

    2016-09-09

    The MC3 code is used to perform Monte Carlo simulations in the isothermal-isobaric ensemble (constant number of particles, temperature, and pressure) on molecular crystals. The molecules within the periodic simulation cell are treated as rigid bodies, alleviating the requirement for a complex interatomic potential. Intermolecular interactions are described using generic, atom-centered pair potentials whose parameterization is taken from the literature [D. E. Williams, J. Comput. Chem., 22, 1154 (2001)] and electrostatic interactions arising from atom-centered, fixed, point partial charges. The primary uses of the MC3 code are the computation of i) the temperature and pressure dependence of lattice parameters and thermal expansion coefficients, ii) tensors of elastic constants and compliances via Parrinello and Rahman's fluctuation formula [M. Parrinello and A. Rahman, J. Chem. Phys., 76, 2662 (1982)], and iii) the investigation of polymorphic phase transformations. The MC3 code is written in Fortran90 and requires the LAPACK and BLAS linear algebra libraries to be linked during compilation. Computationally expensive loops are accelerated using OpenMP.
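
    A schematic Python sketch of the two ingredients summarized above is given below (the MC3 code itself is Fortran90, so this is only an illustration): an exp-6/Buckingham atom-centered pair energy with point partial charges, and a Metropolis acceptance test in the isothermal-isobaric ensemble. All parameter values and unit conventions are illustrative assumptions, not the Williams parameterization.

      # Illustrative ingredients of an NPT Monte Carlo step for molecular crystals.
      import math, random

      KB = 0.0019872            # Boltzmann constant, kcal/(mol*K)

      def pair_energy(r, A, B, C, qi, qj, coulomb_k=332.06):
          """exp-6 repulsion/dispersion plus a Coulomb term between two atoms (kcal/mol)."""
          return A * math.exp(-B * r) - C / r ** 6 + coulomb_k * qi * qj / r

      def accept_npt(dU, dV, n_molecules, volume_old, volume_new, T, P):
          """Metropolis criterion for a trial move in the NPT ensemble.

          P must be expressed in energy/volume units consistent with dU and dV.
          """
          beta = 1.0 / (KB * T)
          dH = dU + P * dV - n_molecules * math.log(volume_new / volume_old) / beta
          return dH <= 0.0 or random.random() < math.exp(-beta * dH)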

  8. The Data Acquisition and Control Systems of the Jet Noise Laboratory at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Jansen, B. J., Jr.

    1998-01-01

    The features of the data acquisition and control systems of the NASA Langley Research Center's Jet Noise Laboratory are presented. The Jet Noise Laboratory is a facility that simulates realistic mixed flow turbofan jet engine nozzle exhaust systems in simulated flight. The system is capable of acquiring data for a complete take-off assessment of noise and nozzle performance. This paper describes the development of an integrated system to control and measure the behavior of model jet nozzles featuring dual independent high pressure combusting air streams with wind tunnel flow. The acquisition and control system is capable of simultaneous measurement of forces, moments, static and dynamic model pressures and temperatures, and jet noise. The design concepts for the coordination of the control computers and multiple data acquisition computers and instruments are discussed. The control system design and implementation are explained, describing the features, equipment, and the experiences of using a primarily Personal Computer based system. Areas for future development are examined.

  9. Computer Center Harris 1600 Operator’s Guide.

    DTIC Science & Technology

    1982-06-01

    [Only OCR fragments of the report documentation page survive for this record. Recoverable details: "Computer Center Harris 1600 Operator's Guide," D. Sommer and Sharon E. Good, David W. Taylor Naval Ship Research and Development Center, report CMLD-82-15, June 1982; approved for public release, distribution unlimited.]

  10. Elastic-plastic mixed-iterative finite element analysis: Implementation and performance assessment

    NASA Technical Reports Server (NTRS)

    Sutjahjo, Edhi; Chamis, Christos C.

    1993-01-01

    An elastic-plastic algorithm based on Von Mises and associative flow criteria is implemented in MHOST, a mixed-iterative finite element analysis computer program developed by NASA Lewis Research Center. The performance of the resulting elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors of 4-node quadrilateral shell finite elements are tested for elastic-plastic performance. Generally, the membrane results are excellent, indicating that the implementation of the elastic-plastic mixed-iterative analysis is appropriate.
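
    The constitutive update at the core of such an analysis is the radial-return projection onto the Von Mises yield surface with associative flow. A minimal, perfectly plastic sketch follows; it is not the MHOST implementation, and the material constants in the usage lines are illustrative.

      # Radial-return mapping for Von Mises perfect plasticity (deviatoric stress only).
      import numpy as np

      def radial_return(stress_trial_dev, sigma_y, shear_modulus):
          """Project a trial deviatoric stress back onto the Von Mises yield surface."""
          q_trial = np.sqrt(1.5 * np.tensordot(stress_trial_dev, stress_trial_dev))
          f = q_trial - sigma_y                      # yield function
          if f <= 0.0:
              return stress_trial_dev, 0.0           # elastic step, no plastic flow
          dgamma = f / (3.0 * shear_modulus)         # plastic multiplier (perfect plasticity)
          n = 1.5 * stress_trial_dev / q_trial       # associative flow direction
          return stress_trial_dev - 2.0 * shear_modulus * dgamma * n, dgamma

      s_trial = np.diag([200.0, -100.0, -100.0])     # trial deviatoric stress, MPa (illustrative)
      s_new, dgamma = radial_return(s_trial, sigma_y=250.0, shear_modulus=80_000.0)
      print(np.sqrt(1.5 * np.tensordot(s_new, s_new)))   # ~250.0, back on the yield surface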

  11. Architecture, Design, and System; Performance Assessment and Development Methodology for Computer-Based Systems. Volume 1. Methodology Description, Discussion, and Assessment,

    DTIC Science & Technology

    1983-12-30

    [Only OCR fragments of the report documentation page survive for this record. Recoverable details: "Architecture, Design, and System; Performance Assessment and Development Methodology for Computer-Based Systems, Volume 1," Naval Surface Weapons Center, Silver Spring, MD, report NSWC TR 83-324 (accession AD-A146 577), December 1983.]

  12. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques applied, post-processing, tracking, and steering, are described, as well as the post-processing software packages used, PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis software Toolkit). Using post-processing methods a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.

  13. Changing the batch system in a Tier 1 computing center: why and how

    NASA Astrophysics Data System (ADS)

    Chierici, Andrea; Dal Pra, Stefano

    2014-06-01

    At the Italian Tier1 Center at CNAF we are evaluating the possibility of changing the current production batch system. This activity is motivated mainly by the search for a more flexible licensing model and the desire to avoid vendor lock-in. We performed a technology tracking exercise, and among many possible solutions we chose to evaluate Grid Engine as an alternative because its adoption is increasing in the HEPiX community and because it is supported by the EMI middleware that we currently use on our computing farm. Another INFN site evaluated Slurm, and we will compare our results in order to understand the pros and cons of the two solutions. We will present the results of our evaluation of Grid Engine, in order to understand if it can fit the requirements of a Tier 1 center, compared to the solution we adopted long ago. We performed a survey and a critical re-evaluation of our farming infrastructure: much of our production software (above all, accounting and monitoring) relies on our current solution, and changing it required us to write new wrappers and adapt the infrastructure to the new system. We believe the results of this investigation can be very useful to other Tier-1 and Tier-2 centers in a similar situation, where the effort of switching may appear too hard to sustain. We will provide guidelines in order to understand how difficult this operation can be and how long the change may take.

  14. Correction of Gradient Nonlinearity Bias in Quantitative Diffusion Parameters of Renal Tissue with Intra Voxel Incoherent Motion.

    PubMed

    Malyarenko, Dariya I; Pang, Yuxi; Senegas, Julien; Ivancevic, Marko K; Ross, Brian D; Chenevert, Thomas L

    2015-12-01

    Spatially non-uniform diffusion weighting bias due to gradient nonlinearity (GNL) causes substantial errors in apparent diffusion coefficient (ADC) maps for anatomical regions imaged distant from magnet isocenter. Our previously-described approach allowed effective removal of spatial ADC bias from three orthogonal DWI measurements for mono-exponential media of arbitrary anisotropy. The present work evaluates correction feasibility and performance for quantitative diffusion parameters of the two-component IVIM model for well-perfused and nearly isotropic renal tissue. Sagittal kidney DWI scans of a volunteer were performed on a clinical 3T MRI scanner near isocenter and offset superiorly. Spatially non-uniform diffusion weighting due to GNL resulted in both a shift and broadening of perfusion-suppressed ADC histograms for off-center DWI relative to unbiased measurements close to isocenter. Direction-average DW-bias correctors were computed based on the known gradient design provided by the vendor. The computed bias maps were empirically confirmed by coronal DWI measurements for an isotropic gel-flood phantom. Both phantom and renal tissue ADC bias for off-center measurements was effectively removed by applying pre-computed 3D correction maps. Comparable ADC accuracy was achieved for corrections of both b-maps and DWI intensities in the presence of IVIM perfusion. No significant bias impact was observed for IVIM perfusion fraction.
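
    In the mono-exponential limit, the voxel-wise correction amounts to dividing the apparent ADC by a spatially varying corrector c(r) equal to the ratio of actual to nominal diffusion weighting. The sketch below is a minimal, hypothetical illustration of that step only; constructing the corrector map from the vendor's gradient design, as done in the paper, is not shown, and all array names and values are placeholders.

    ```python
    import numpy as np

    def fit_adc(s_b0, s_b, b_nominal):
        """Apparent ADC from a two-point mono-exponential fit, S = S0 * exp(-b * ADC)."""
        eps = 1e-12
        return np.log((s_b0 + eps) / (s_b + eps)) / b_nominal

    def correct_adc(adc_apparent, corrector):
        """Remove GNL bias: with c(r) = b_actual(r) / b_nominal, the measured decay
        reflects b_actual * D, so the apparent ADC equals c(r) * D; dividing restores D."""
        return adc_apparent / corrector

    # Hypothetical example volumes (placeholders, not patient data)
    shape = (4, 4, 4)
    true_adc = np.full(shape, 1.5e-3)                                  # mm^2/s
    corrector = np.linspace(0.9, 1.1, np.prod(shape)).reshape(shape)   # GNL bias map
    b = 800.0                                                          # nominal b, s/mm^2

    s0 = np.ones(shape)
    s_b = s0 * np.exp(-b * corrector * true_adc)   # decay governed by the *actual* b

    adc_biased = fit_adc(s0, s_b, b)
    adc_fixed = correct_adc(adc_biased, corrector)
    print(np.allclose(adc_fixed, true_adc))        # True
    ```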

  15. Correction of Gradient Nonlinearity Bias in Quantitative Diffusion Parameters of Renal Tissue with Intra Voxel Incoherent Motion

    PubMed Central

    Malyarenko, Dariya I.; Pang, Yuxi; Senegas, Julien; Ivancevic, Marko K.; Ross, Brian D.; Chenevert, Thomas L.

    2015-01-01

    Spatially non-uniform diffusion weighting bias due to gradient nonlinearity (GNL) causes substantial errors in apparent diffusion coefficient (ADC) maps for anatomical regions imaged distant from magnet isocenter. Our previously-described approach allowed effective removal of spatial ADC bias from three orthogonal DWI measurements for mono-exponential media of arbitrary anisotropy. The present work evaluates correction feasibility and performance for quantitative diffusion parameters of the two-component IVIM model for well-perfused and nearly isotropic renal tissue. Sagittal kidney DWI scans of a volunteer were performed on a clinical 3T MRI scanner near isocenter and offset superiorly. Spatially non-uniform diffusion weighting due to GNL resulted in both a shift and broadening of perfusion-suppressed ADC histograms for off-center DWI relative to unbiased measurements close to isocenter. Direction-average DW-bias correctors were computed based on the known gradient design provided by the vendor. The computed bias maps were empirically confirmed by coronal DWI measurements for an isotropic gel-flood phantom. Both phantom and renal tissue ADC bias for off-center measurements was effectively removed by applying pre-computed 3D correction maps. Comparable ADC accuracy was achieved for corrections of both b-maps and DWI intensities in the presence of IVIM perfusion. No significant bias impact was observed for IVIM perfusion fraction. PMID:26811845

  16. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Maxine D.; Leigh, Jason

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high-speed networks, such as the Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation’s Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy’s Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for “Development of the Next-Generation CAVE Virtual Environment (NG-CAVE),” enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications enabled by the CAVE2/Blaze visual computing system are advancing scientific research and education in the U.S. and globally and helping to train the next-generation workforce.

  17. Experimental Realization of High-Efficiency Counterfactual Computation.

    PubMed

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-21

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.

  18. Experimental Realization of High-Efficiency Counterfactual Computation

    NASA Astrophysics Data System (ADS)

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-01

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.

  19. Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mason, B. H.; Walsh, J. L.

    2001-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multidisciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.
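
    The coupled aeroelastic sensitivities mentioned above are obtained by finite differences of the converged aero-structural solution. The sketch below shows the general pattern with toy single-variable "aero" and "structure" solvers; both functions, their coefficients, and the design variable are hypothetical stand-ins, not the paper's CFD and finite element models, which follow the same outer loop at far greater cost.

    ```python
    def coupled_response(x, tol=1e-10, max_iter=200):
        """Fixed-point iteration between a toy aerodynamic load and a toy structural
        deflection until the coupled state converges; x is a design variable."""
        load, deflection = 1.0, 0.0
        for _ in range(max_iter):
            new_load = 1.0 + 0.3 * deflection * x      # load depends on deformed shape (toy)
            new_deflection = new_load / (10.0 + x)     # stiffness grows with x (toy)
            if abs(new_deflection - deflection) < tol:
                break
            load, deflection = new_load, new_deflection
        return new_deflection                          # e.g. a deflection or stress measure

    def coupled_sensitivity(x, h=1e-6):
        """Central finite difference of the *converged* coupled response."""
        return (coupled_response(x + h) - coupled_response(x - h)) / (2.0 * h)

    x0 = 2.0
    print("response       :", coupled_response(x0))
    print("d(response)/dx :", coupled_sensitivity(x0))
    ```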

  20. Online System for Faster Multipoint Linkage Analysis via Parallel Execution on Thousands of Personal Computers

    PubMed Central

    Silberstein, M.; Tzemach, A.; Dovgolevsky, N.; Fishelson, M.; Schuster, A.; Geiger, D.

    2006-01-01

    Computation of LOD scores is a valuable tool for mapping disease-susceptibility genes in the study of Mendelian and complex diseases. However, computation of exact multipoint likelihoods of large inbred pedigrees with extensive missing data is often beyond the capabilities of a single computer. We present a distributed system called “SUPERLINK-ONLINE,” for the computation of multipoint LOD scores of large inbred pedigrees. It achieves high performance via the efficient parallelization of the algorithms in SUPERLINK, a state-of-the-art serial program for these tasks, and through the use of the idle cycles of thousands of personal computers. The main algorithmic challenge has been to efficiently split a large task for distributed execution in a highly dynamic, nondedicated running environment. Notably, the system is available online, which allows computationally intensive analyses to be performed with no need for either the installation of software or the maintenance of a complicated distributed environment. As the system was being developed, it was extensively tested by collaborating medical centers worldwide on a variety of real data sets, some of which are presented in this article. PMID:16685644

  1. Aircraft integrated design and analysis: A classroom experience

    NASA Technical Reports Server (NTRS)

    1988-01-01

    AAE 451 is the capstone course required of all senior undergraduates in the School of Aeronautics and Astronautics at Purdue University. During the past year the first steps of a long evolutionary process were taken to change the content and expectations of this course. These changes are the result of the availability of advanced computational capabilities and sophisticated electronic media availability at Purdue. This presentation will describe both the long range objectives and this year's experience using the High Speed Commercial Transport (HSCT) design, the AIAA Long Duration Aircraft design and a Remotely Piloted Vehicle (RPV) design proposal as project objectives. The central goal of these efforts was to provide a user-friendly, computer-software-based, environment to supplement traditional design course methodology. The Purdue University Computer Center (PUCC), the Engineering Computer Network (ECN), and stand-alone PC's were used for this development. This year's accomplishments centered primarily on aerodynamics software obtained from the NASA Langley Research Center and its integration into the classroom. Word processor capability for oral and written work and computer graphics were also blended into the course. A total of 10 HSCT designs were generated, ranging from twin-fuselage and forward-swept wing aircraft, to the more traditional delta and double-delta wing aircraft. Four Long Duration Aircraft designs were submitted, together with one RPV design tailored for photographic surveillance. Supporting these activities were three video satellite lectures beamed from NASA/Langley to Purdue. These lectures covered diverse areas such as an overview of HSCT design, supersonic-aircraft stability and control, and optimization of aircraft performance. Plans for next year's effort will be reviewed, including dedicated computer workstation utilization, remote satellite lectures, and university/industrial cooperative efforts.

  2. Parallel Computational Fluid Dynamics: Current Status and Future Requirements

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; VanDalsem, William R.; Dagum, Leonardo; Kutler, Paul (Technical Monitor)

    1994-01-01

    One of the key objectives of the Applied Research Branch in the Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is the accelerated introduction of highly parallel machines into a full operational environment. In this report we discuss the performance results obtained from the implementation of some computational fluid dynamics (CFD) applications on the Connection Machine CM-2 and the Intel iPSC/860. We summarize some of the experience gained so far with the parallel testbed machines at the NAS Applied Research Branch. Then we discuss the long-term computational requirements for accomplishing some of the grand challenge problems in computational aerosciences. We argue that only massively parallel machines will be able to meet these grand challenge requirements, and we outline the computer science and algorithm research challenges ahead.

  3. CTAS: Computer intelligence for air traffic control in the terminal area

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    1992-01-01

    A system for the automated management and control of arrival traffic, referred to as the Center-TRACON Automation System (CTAS), has been designed by the ATC research group at NASA Ames Research Center. In a cooperative program, NASA and the FAA have efforts underway to install and evaluate the system at the Denver and Dallas/Ft. Worth airports. CTAS consists of three types of integrated tools that provide computer-generated intelligence for both Center and TRACON controllers to guide them in managing and controlling arrival traffic efficiently. One tool, the Traffic Management Advisor (TMA), establishes optimized landing sequences and landing times for aircraft arriving in the center airspace several hundred miles from the airport. In the TRACON, TMA resequences missed-approach aircraft and unanticipated arrivals. Another tool, the Descent Advisor (DA), generates clearances for the Center controllers so that arriving aircraft meet the crossing times provided by TMA. In the TRACON, the Final Approach Spacing Tool (FAST) provides heading and speed clearances that produce an accurately spaced flow of aircraft on the final approach course. A database consisting of aircraft performance models, airline-preferred operational procedures, and real-time wind measurements contributes to the effective operation of CTAS. Extensive simulator evaluations of CTAS have demonstrated controller acceptance, delay reductions, and fuel savings.

  4. Super Cooled Large Droplet Analysis of Several Geometries Using LEWICE3D Version 3

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.

    2011-01-01

    Super Cooled Large Droplet (SLD) collection efficiency calculations were performed for several geometries using the LEWICE3D Version 3 software. The computations were performed using the NASA Glenn Research Center SLD splashing model which has been incorporated into the LEWICE3D Version 3 software. Comparisons to experiment were made where available. The geometries included two straight wings, a swept 64A008 wing tip, two high lift geometries, and the generic commercial transport DLR-F4 wing body configuration. In general the LEWICE3D Version 3 computations compared well with the 2D LEWICE 3.2.2 results and with experimental data where available.

  5. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  6. 77 FR 34941 - Privacy Act of 1974; Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-12

    ...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, DoD. ACTION: Notice of a... computer matching program are the Department of Veterans Affairs (VA) and the Defense Manpower Data Center... identified as DMDC 01, entitled ``Defense Manpower Data Center Data Base,'' last published in the Federal...

  7. 77 FR 35432 - Privacy Act of 1974, Computer Matching Program: United States Postal Service and the Defense...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-13

    ... the Defense Manpower Data Center, Department of Defense AGENCY: Postal Service TM . ACTION: Notice of Computer Matching Program--United States Postal Service and the Defense Manpower Data Center, Department of... as the recipient agency in a computer matching program with the Defense Manpower Data Center (DMDC...

  8. Lattice dynamics calculations based on density-functional perturbation theory in real space

    NASA Astrophysics Data System (ADS)

    Shang, Honghui; Carbogno, Christian; Rinke, Patrick; Scheffler, Matthias

    2017-06-01

    A real-space formalism for density-functional perturbation theory (DFPT) is derived and applied for the computation of harmonic vibrational properties in molecules and solids. The practical implementation using numeric atom-centered orbitals as basis functions is demonstrated exemplarily for the all-electron Fritz Haber Institute ab initio molecular simulations (FHI-aims) package. The convergence of the calculations with respect to numerical parameters is carefully investigated, and a systematic comparison with finite-difference approaches is performed both for finite (molecules) and extended (periodic) systems. Finally, scaling and scalability tests on massively parallel computer systems demonstrate the computational efficiency of the approach.
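
    The finite-difference approach that the DFPT results are benchmarked against can be illustrated on a one-dimensional toy potential: the harmonic force constant is the second derivative of the energy at the minimum, obtained from energies at displaced geometries. This is only a schematic of the finite-difference idea, not FHI-aims or the real-space DFPT formalism, and the Morse parameters and reduced mass below are arbitrary.

    ```python
    import numpy as np

    # Toy potential standing in for a DFT total energy (arbitrary Morse parameters)
    D_e, a, r_e = 4.5, 1.9, 1.1          # eV, 1/Angstrom, Angstrom

    def energy(r):
        return D_e * (1.0 - np.exp(-a * (r - r_e)))**2

    def force_constant(r0, h=1e-3):
        """Second derivative of the energy by central finite differences (eV/Angstrom^2)."""
        return (energy(r0 + h) - 2.0 * energy(r0) + energy(r0 - h)) / h**2

    k = force_constant(r_e)
    mu = 0.5 * 1.008                      # reduced mass of an H2-like dimer, amu (toy value)

    # Convert eV/Angstrom^2 and amu to an angular frequency, then to wavenumbers
    EV = 1.602176634e-19                  # J per eV
    AMU = 1.66053906660e-27               # kg per amu
    C = 2.99792458e10                     # speed of light, cm/s
    omega = np.sqrt(k * EV / 1e-20 / (mu * AMU))          # rad/s
    print("harmonic frequency: %.0f cm^-1" % (omega / (2.0 * np.pi * C)))
    ```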

  9. Chemistry Research

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Philip Morris research center scientists use a computer program called CECTRP, for Chemical Equilibrium Composition and Transport Properties, to gain insight into the behavior of atoms as they progress along the reaction pathway. Use of the program lets the scientist accurately predict the behavior of a given molecule or group of molecules. Computer generated data must be checked by laboratory experiment, but the use of CECTRP saves the researchers hundreds of hours of laboratory time since experiments must run only to validate the computer's prediction. Philip Morris estimates that had CECTRP not been available, at least two man years would have been required to develop a program to perform similar free energy calculations.

  10. NASA-OAI HPCCP K-12 Program

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The NASA-OAI High Performance Communication and Computing K- 12 School Partnership program has been completed. Cleveland School of the Arts, Empire Computech Center, Grafton Local Schools and the Bug O Nay Ge Shig School have all received network equipment and connections. Each school is working toward integrating computer and communications technology into their classroom curriculum. Cleveland School of the Arts students are creating computer software. Empire Computech Center is a magnet school for technology education at the elementary school level. Grafton Local schools is located in a rural community and is using communications technology to bring to their students some of the same benefits students from suburban and urban areas receive. The Bug O Nay Ge Shig School is located on an Indian Reservation in Cass Lake, MN. The students at this school are using the computer to help them with geological studies. A grant has been issued to the friends of the Nashville Library. Nashville is a small township in Holmes County, Ohio. A community organization has been formed to turn their library into a state of the art Media Center. Their goal is to have a place where rural students can learn about different career options and how to go about pursuing those careers. Taylor High School in Cincinnati, Ohio was added to the schools involved in the Wind Tunnel Project. A mini grant has been awarded to Taylor High School for computer equipment. The computer equipment is utilized in the school's geometry class to computationally design objects which will be tested for their aerodynamic properties in the Barberton Wind Tunnel. The students who create the models can view the test in the wind tunnel via desk top conferencing. Two teachers received stipends for helping with the Regional Summer Computer Workshop. Both teachers were brought in to teach a session within the workshop. They were selected to teach the session based on their expertise in particular software applications.

  11. Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 5: Study analysis report

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex (PTC) at the Marshall Space Flight Center (MSFC). The PTC will train the space station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be on-board the Freedom Space Station. The further analysis performed on the SCS study as part of task 2-Perform Studies and Parametric Analysis-of the SCS study contract is summarized. These analyses were performed to resolve open issues remaining after the completion of task 1, and the publishing of the SCS study issues report. The results of these studies provide inputs into SCS task 3-Develop and present SCS requirements, and SCS task 4-develop SCS conceptual designs. The purpose of these studies is to resolve the issues into usable requirements given the best available information at the time of the study. A list of all the SCS study issues is given.

  12. Implementation of a High-Speed FPGA and DSP Based FFT Processor for Improving Strain Demodulation Performance in a Fiber-Optic-Based Sensing System

    NASA Technical Reports Server (NTRS)

    Farley, Douglas L.

    2005-01-01

    NASA's Aviation Safety and Security Program is pursuing research in on-board Structural Health Management (SHM) technologies for purposes of reducing or eliminating aircraft accidents due to system and component failures. Under this program, NASA Langley Research Center (LaRC) is developing a strain-based structural health-monitoring concept that incorporates a fiber optic-based measuring system for acquiring strain values. This fiber optic-based measuring system provides for the distribution of thousands of strain sensors embedded in a network of fiber optic cables. The resolution of strain value at each discrete sensor point requires a computationally demanding data reduction software process that, when hosted on a conventional processor, is not suitable for near real-time measurement. This report describes the development and integration of an alternative computing environment using dedicated computing hardware for performing the data reduction. Performance comparison between the existing and the hardware-based system is presented.
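
    The abstract does not spell out the demodulation algorithm itself, only that the per-sensor data reduction is FFT-based and too slow on a conventional processor. As a generic, hypothetical illustration of the kind of spectral peak estimation such a pipeline performs per sensor (not the actual LaRC software or its FPGA/DSP implementation), the sketch below finds the dominant frequency of a sampled signal.

    ```python
    import numpy as np

    def peak_frequency(signal, sample_rate):
        """Return the frequency (Hz) of the strongest spectral component."""
        window = np.hanning(len(signal))                 # reduce spectral leakage
        spectrum = np.abs(np.fft.rfft(signal * window))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        return freqs[np.argmax(spectrum[1:]) + 1]        # skip the DC bin

    # Synthetic test signal: a 1.25 kHz tone sampled at 50 kHz with noise
    fs, f0, n = 50_000.0, 1_250.0, 4_096
    t = np.arange(n) / fs
    x = np.sin(2.0 * np.pi * f0 * t) + 0.1 * np.random.randn(n)
    print("estimated peak: %.1f Hz" % peak_frequency(x, fs))   # close to 1250 Hz
    ```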

  13. Experimental Evaluation and Workload Characterization for High-Performance Computer Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.

    1995-01-01

    This research is conducted in the context of the Joint NSF/NASA Initiative on Evaluation (JNNIE). JNNIE is an inter-agency research program that goes beyond typical benchmarking to provide in-depth evaluations and an understanding of the factors that limit the scalability of high-performance computing systems. Many NSF and NASA centers have participated in the effort. Our research effort was an integral part of implementing JNNIE in the NASA ESS grand challenge applications context. Our research work under this program was composed of three distinct, but related, activities. They include the evaluation of NASA ESS high-performance computing testbeds using the wavelet decomposition application; evaluation of NASA ESS testbeds using astrophysical simulation applications; and developing an experimental model for workload characterization for understanding workload requirements. In this report, we provide a summary of findings that covers all three parts, a list of the publications that resulted from this effort, and three appendices with the details of each of the studies, drawing on a key publication developed under the respective work.

  14. Data Center Consolidation: A Step towards Infrastructure Clouds

    NASA Astrophysics Data System (ADS)

    Winter, Markus

    Application service providers face enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. Limitations of traditional computing environments force IT decision-makers to reorganize computing resources within the data center, as continuous growth leads to an inefficient utilization of the underlying hardware infrastructure. This paper discusses a way for infrastructure providers to improve data center operations based on the findings of a case study on resource utilization of very large business applications and presents an outlook beyond server consolidation endeavors, transforming corporate data centers into compute clouds.

  15. Modeling Ni-Cd performance. Planned alterations to the Goddard battery model

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1986-01-01

    The Goddard Space Flight Center (GSFC) currently has a preliminary computer model to simulate Nickel-Cadmium (Ni-Cd) battery performance. The basic methodology of the model was described in the paper entitled Fundamental Algorithms of the Goddard Battery Model. At present, the model is undergoing alterations to increase its efficiency, accuracy, and generality. A review of the present battery model is given, and the planned changes to the model are described.

  16. Methods and Design: Measuring Recognition Performance Using Computer-Based and Paper-Based Methods.

    DTIC Science & Technology

    1991-01-01

    Federico, Navy Personnel Research and Development Center, San Diego, California. Using a within-subjects design, we administered computer-based and paper-based recognition measures to 83 naval pilots and ... The research subjects were divided into two groups according to whether or not their performance ...

  17. Goal Orientation Framing and Its Influence on Performance

    DTIC Science & Technology

    2012-12-01

    ... first-person shooter computer games Call of Duty: Modern Warfare 2 and Call of Duty: Modern Warfare 3. During the simulation, participants were ... Performance was assessed in two tests, a math test and a first-person shooter game. It was the intent of ...

  18. Application for temperature and humidity monitoring of data center environment

    NASA Astrophysics Data System (ADS)

    Albert, Ş.; Truşcǎ, M. R. C.; Soran, M. L.

    2015-12-01

    Technology and computer science have developed rapidly in recent years, and most systems that use advanced technologies require special working conditions, so monitoring and control are very important. Temperature and humidity are important parameters in the operation of computing, industrial, and research systems, and keeping them within certain limits is essential for proper functioning. Usually the temperature is maintained in the established range by an air-conditioning system, but the humidity is affected. In the present work we developed an application for temperature and humidity monitoring in the Data Center of INCDTIM, based on a board with its own firmware, called "AVR_NET_IO", built around an ATmega32 microcontroller. Temperature sensors connected to this board measure the temperature at different points inside and outside the Data Center. Humidity monitoring uses data from the integrated sensors of the air-conditioning system, allowing humidity variation to be correlated with temperature variation. A software application (CM-1) was developed together with the hardware; it monitors and records the temperature inside the Data Center and triggers an alarm when readings deviate by more than 3°C from the established limits.
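
    The alarm logic described above (compare each monitored point against its allowed band and raise an alarm when a reading drifts more than 3 °C outside it) can be sketched as follows. The sensor-reading function is a placeholder; in the INCDTIM setup the values come from the probes attached to the AVR_NET_IO board and from the air-conditioning unit's own sensors, neither of which is modeled here, and the point names and limits are made up.

    ```python
    import time

    # Allowed temperature band per monitored point (degrees C); values are illustrative
    LIMITS = {"rack_row_A": (18.0, 24.0), "rack_row_B": (18.0, 24.0), "outside": (-20.0, 45.0)}
    ALARM_MARGIN = 3.0   # alarm when a reading is more than 3 C outside its band

    def read_sensor(name):
        """Placeholder for querying the monitoring board over the network."""
        raise NotImplementedError("replace with the actual sensor query")

    def check_once(read=read_sensor):
        """Poll every monitored point once and return the out-of-range readings."""
        alarms = []
        for name, (low, high) in LIMITS.items():
            value = read(name)
            if value < low - ALARM_MARGIN or value > high + ALARM_MARGIN:
                alarms.append((name, value))
        return alarms

    def monitor(interval_s=60, read=read_sensor):
        """Poll periodically and report alarms."""
        while True:
            for name, value in check_once(read):
                print(f"ALARM: {name} = {value:.1f} C outside allowed band")
            time.sleep(interval_s)

    # Example with a fake reader for testing the logic
    fake = {"rack_row_A": 28.5, "rack_row_B": 22.0, "outside": 10.0}
    print(check_once(read=lambda n: fake[n]))   # [('rack_row_A', 28.5)]
    ```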

  19. The SURE reliability analysis program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
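
    SURE's input language and semi-Markov bound computation are not reproduced here, but the kind of sensitivity study the abstract describes, sweeping one model parameter over a range and recording the resulting probability of system failure, can be illustrated with a deliberately simple closed-form model: a triple-modular-redundant system without repair, which fails when two or more of its three components fail during the mission. All rates and times below are arbitrary illustrative values.

    ```python
    import numpy as np

    def tmr_failure_probability(failure_rate, mission_time):
        """P(system failure) for 2-of-3 majority voting with exponentially distributed
        component failures and no repair: the system fails once two components have failed."""
        p = 1.0 - np.exp(-failure_rate * mission_time)        # per-component failure probability
        return 3.0 * p**2 * (1.0 - p) + p**3

    # Sensitivity analysis: sweep the component failure rate over a range
    mission_time = 10.0                                       # hours
    for rate in np.logspace(-6, -3, 7):                       # failures per hour
        p_fail = tmr_failure_probability(rate, mission_time)
        print(f"lambda = {rate:.1e} /h -> P(system failure) = {p_fail:.3e}")
    ```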

  20. The SURE Reliability Analysis Program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  1. Application of Computational Stability and Control Techniques Including Unsteady Aerodynamics and Aeroelastic Effects

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Edwards, John W.

    2004-01-01

    The motivation behind the inclusion of unsteady aerodynamics and aeroelastic effects in the computation of stability and control (S&C) derivatives will be discussed as they pertain to aeroelastic and aeroservoelastic analysis. This topic will be addressed in the context of two applications, the first being the estimation of S&C derivatives for a cable-mounted aeroservoelastic wind tunnel model tested in the NASA Langley Research Center (LaRC) Transonic Dynamics Tunnel (TDT). The second application will be the prediction of the nonlinear aeroservoelastic phenomenon known as Residual Pitch Oscillation (RPO) on the B-2 Bomber. Techniques and strategies used in these applications to compute S&C derivatives and perform flight simulations will be reviewed, and computational results will be presented.

  2. NASA Rotor 37 CFD Code Validation: Glenn-HT Code

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2010-01-01

    In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.

  3. Provable classically intractable sampling with measurement-based computation in constant time

    NASA Astrophysics Data System (ADS)

    Sanders, Stephen; Miller, Jacob; Miyake, Akimasa

    We present a constant-time measurement-based quantum computation (MQC) protocol to perform a classically intractable sampling problem. We sample from the output probability distribution of a subclass of the instantaneous quantum polynomial time circuits introduced by Bremner, Montanaro and Shepherd. In contrast with the usual circuit model, our MQC implementation includes additional randomness due to byproduct operators associated with the computation. Despite this additional randomness we show that our sampling task cannot be efficiently simulated by a classical computer. We extend previous results to verify the quantum supremacy of our sampling protocol efficiently using only single-qubit Pauli measurements.

  4. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 2 quarter 1 progress report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lottes, S.A.; Bojanowski, C.; Shen, J.

    2012-04-09

    The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high-performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water effects on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks of structural failure that these loads pose. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to improve design allowing for fish passage, modeling of the salt spray transport into bridge girders to address the suitability of using weathering steel in bridges, and CFD analysis of the operation of the wind tunnel in the TFHRC wind engineering laboratory. This quarterly report documents technical progress on the project tasks for the period of October through December 2011.

  5. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 2 quarter 2 progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lottes, S.A.; Bojanowski, C.; Shen, J.

    2012-06-28

    The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high-performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water effects on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks of structural failure that these loads pose. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to improve design allowing for fish passage, modeling of the salt spray transport into bridge girders to address the suitability of using weathering steel in bridges, and CFD analysis of the operation of the wind tunnel in the TFHRC wind engineering laboratory. This quarterly report documents technical progress on the project tasks for the period of January through March 2012.

  6. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 1 quarter 3 progress report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lottes, S.A.; Kulak, R.F.; Bojanowski, C.

    2011-08-26

    The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high-performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water loads on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks of structural failure that these loads pose. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of the salt spray transport into bridge girders to address the suitability of using weathering steel in bridges, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. This quarterly report documents technical progress on the project tasks for the period of April through June 2011.

  7. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of the computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium, and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and the alleviation of this discrepancy can improve the efficiency of optimizers.
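
    For readers unfamiliar with the optimizer classes being compared, the sketch below runs a sequential quadratic programming method (SciPy's SLSQP, not the IMSL or IDESIGN routines used in the study) on a deliberately tiny sizing problem: minimize the weight of two independently loaded bars subject to stress constraints, whose exact optimum is A_i = F_i / sigma_allow. All loads, dimensions, and material values are illustrative placeholders.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy sizing problem: two bars of length L carrying axial forces F1, F2
    rho, L = 2.7e-6, 1000.0             # kg/mm^3, mm
    F = np.array([20_000.0, 5_000.0])   # N
    sigma_allow = 150.0                 # N/mm^2

    def weight(A):                      # objective: total mass of the two bars
        return rho * L * np.sum(A)

    def stress_margins(A):              # inequality constraints, must be >= 0
        return sigma_allow - F / A

    result = minimize(
        weight,
        x0=np.array([500.0, 500.0]),    # initial cross-sectional areas, mm^2
        method="SLSQP",
        bounds=[(1.0, None), (1.0, None)],
        constraints=[{"type": "ineq", "fun": stress_margins}],
    )
    print(result.x)                     # approximately [133.3, 33.3] mm^2 (= F_i / sigma_allow)
    ```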

  8. Computer Maintenance Operations Center (CMOC), additional computer support equipment ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Computer Maintenance Operations Center (CMOC), additional computer support equipment - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  9. Development of Polarized UV Raman and Infrared Emission/Absorption Spectroscopy for Rocket Engine Applications

    NASA Technical Reports Server (NTRS)

    Osborne, Robin; Wehrmeyer, Joseph; Farmer, Richard; Trinh, Huu; Dobson, Chris; Eskridge, Richard; Cramer, John; Hartfield, Roy; Turner, Jim (Technical Monitor)

    2001-01-01

    The objective of this project is to provide measurements of species concentrations and temperature for hot-fire test articles at Test Stand 115 at NASA Marshall Space Flight Center. Measurements can be useful for comparison to computational fluid dynamics simulations and help to evaluate combustion performance.

  10. Low Speed Rotor/Fuselage Interactional Aerodynamics

    NASA Technical Reports Server (NTRS)

    Barnwell, Richard W.; Prichard, Devon S.

    2003-01-01

    This report presents work performed under a Cooperative Research Agreement between Virginia Tech and the NASA Langley Research Center. The work involved development of computational techniques for modeling helicopter rotor/airframe aerodynamic interaction. A brief overview of the problem is presented, the modeling techniques are described, and selected example calculations are briefly discussed.

  11. Low-Cost Terminal Alternative for Learning Center Managers. Final Report.

    ERIC Educational Resources Information Center

    Nix, C. Jerome; And Others

    This study established the feasibility of replacing high performance and relatively expensive computer terminals with less expensive ones adequate for supporting specific tasks of Advanced Instructional System (AIS) at Lowry AFB, Colorado. Surveys of user requirements and available devices were conducted and the results used in a system analysis.…

  12. Wayne Township Public Library, Final Performance Report for Library Services and Construction Act (LSCA) Title VI, Library Literacy Program.

    ERIC Educational Resources Information Center

    Smyth, Carol B.; Grannell, Dorothy S.; Moore, Miriam

    The Literacy Resource Center project, a program of the Wayne Township Public Library also known as the Morrisson-Reeves Library (Richmond, Indiana), involved recruitment, retention, coalition building, public awareness, training, basic literacy, collection development, tutoring, computer-assisted, other technology, employment oriented,…

  13. Army Maneuver Center of Excellence

    DTIC Science & Technology

    2012-10-18

    Partnership agreements throughout DoD: DARPA, JIEDDO, DHS, FAA, DoE, NSA, NASA, SMDC, etc. Strategic partnerships benefit the Army materiel enterprise. Emerging science areas include neuroscience, network sciences, hierarchical computing, extreme energy science, autonomous systems technology, and meso-scale (grain) ... Improvements in Soldier-system overall performance through operational neuroscience and advanced simulation and training technologies.

  14. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  15. Fiducial migration following small peripheral lung tumor image-guided CyberKnife stereotactic radiosurgery

    NASA Astrophysics Data System (ADS)

    Strulik, Konrad L.; Cho, Min H.; Collins, Brian T.; Khan, Noureen; Banovac, Filip; Slack, Rebecca; Cleary, Kevin

    2008-03-01

    To track respiratory motion during CyberKnife stereotactic radiosurgery in the lung, several (three to five) cylindrical gold fiducials are implanted near the planned target volume (PTV). Since these fiducials remain in the human body after treatment, we hypothesize that tracking fiducial movement over time may correlate with the tumor response to treatment and pulmonary fibrosis, thereby serving as an indicator of treatment success. In this paper, we investigate fiducial migration in 24 patients through examination of computed tomography (CT) volume images at four time points: pre-treatment and three, six, and twelve months post-treatment. We developed a MATLAB-based GUI environment to display the images, identify the fiducials, and compute our performance measure. After we semi-automatically segmented and detected fiducial locations in CT images of the same patient over time, we identified them according to their configuration and introduced a relative performance measure (ACD: average center distance) to detect their migration. We found that the migration tended to result in a movement towards the fiducial center of the radiated tissue area (indicating tumor regression) and may potentially be linked to patient prognosis.
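
    The relative measure used in the study, the average center distance (ACD), can be sketched as follows: for each scan, compute the centroid of the segmented fiducial positions and the mean distance of the fiducials from that centroid; a decrease over the follow-up scans indicates the fiducials moving toward their common center, consistent with tumor regression. This is a plausible reading of the abstract rather than the authors' exact MATLAB implementation, and the coordinates below are made up.

    ```python
    import numpy as np

    def average_center_distance(points):
        """Mean Euclidean distance of fiducial positions (N x 3, mm) from their centroid."""
        points = np.asarray(points, dtype=float)
        centroid = points.mean(axis=0)
        return np.linalg.norm(points - centroid, axis=1).mean()

    # Hypothetical fiducial coordinates (mm) at two time points for one patient
    pre_treatment = np.array([[10.0, 0.0, 0.0], [-8.0, 6.0, 2.0], [1.0, -9.0, -3.0]])
    month_12 = np.array([[7.5, 0.5, 0.0], [-6.0, 4.5, 1.5], [0.5, -7.0, -2.0]])

    acd_pre, acd_12 = map(average_center_distance, (pre_treatment, month_12))
    print(f"ACD pre-treatment: {acd_pre:.1f} mm, at 12 months: {acd_12:.1f} mm")
    print("fiducials moved toward the center" if acd_12 < acd_pre else "no inward migration")
    ```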

  16. Exploring the Relationships between Self-Efficacy and Preference for Teacher Authority among Computer Science Majors

    ERIC Educational Resources Information Center

    Lin, Che-Li; Liang, Jyh-Chong; Su, Yi-Ching; Tsai, Chin-Chung

    2013-01-01

    Teacher-centered instruction has been widely adopted in college computer science classrooms and has some benefits in training computer science undergraduates. Meanwhile, student-centered contexts have been advocated to promote computer science education. How computer science learners respond to or prefer the two types of teacher authority,…

  17. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX-80

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-11-01

    The finite element method has proven to be an invaluable tool for analysis and design of complex, high-performance systems, such as bladed-disk assemblies in aircraft turbofan engines. However, as the problem size increases, the computation time required by conventional computers can be prohibitively high. Parallel processing computers provide the means to overcome these computation time limits. This report summarizes the results of a research activity aimed at providing a finite element capability for analyzing turbomachinery bladed-disk assemblies in a vector/parallel processing environment. A special purpose code, named with the acronym SAPNEW, has been developed to perform static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements. SAPNEW provides a stand-alone capability for static and eigen analysis on the Alliant FX/80, a parallel processing computer. A preprocessor, named with the acronym NTOS, has been developed to accept NASTRAN input decks and convert them to the SAPNEW format to make SAPNEW more readily used by researchers at NASA Lewis Research Center.

  18. Simulating Hydrologic Flow and Reactive Transport with PFLOTRAN and PETSc on Emerging Fine-Grained Parallel Computer Architectures

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Rupp, K.; Smith, B. F.; Brown, J.; Knepley, M.; Zhang, H.; Adams, M.; Hammond, G. E.

    2017-12-01

    As the high-performance computing community pushes towards the exascale horizon, power and heat considerations have driven the increasing importance and prevalence of fine-grained parallelism in new computer architectures. High-performance computing centers have become increasingly reliant on GPGPU accelerators and "manycore" processors such as the Intel Xeon Phi line, and 512-bit SIMD registers have even been introduced in the latest generation of Intel's mainstream Xeon server processors. The high degree of fine-grained parallelism and more complicated memory hierarchy considerations of such "manycore" processors present several challenges to existing scientific software. Here, we consider how the massively parallel, open-source hydrologic flow and reactive transport code PFLOTRAN - and the underlying Portable, Extensible Toolkit for Scientific Computation (PETSc) library on which it is built - can best take advantage of such architectures. We will discuss some key features of these novel architectures and our code optimizations and algorithmic developments targeted at them, and present experiences drawn from working with a wide range of PFLOTRAN benchmark problems on these architectures.

  19. SiGN: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .
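
    Of the five model classes the abstract lists, the graphical Gaussian model is the easiest to illustrate at small scale: edges correspond to nonzero off-diagonal entries of a sparse precision (inverse covariance) matrix estimated from expression data. The sketch below uses scikit-learn's graphical lasso on synthetic data; it is not SiGN itself and does not reflect its parallelization on the K computer, and the synthetic "genes" and threshold are arbitrary.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.default_rng(0)

    # Synthetic "expression" data: 200 samples x 5 genes with a known dependency chain
    n = 200
    g0 = rng.normal(size=n)
    g1 = 0.8 * g0 + 0.3 * rng.normal(size=n)
    g2 = 0.8 * g1 + 0.3 * rng.normal(size=n)
    g3 = rng.normal(size=n)
    g4 = 0.8 * g3 + 0.3 * rng.normal(size=n)
    X = np.column_stack([g0, g1, g2, g3, g4])

    model = GraphicalLassoCV().fit(X)          # cross-validated sparse precision estimate
    precision = model.precision_

    # Declare an edge between genes i and j when the precision entry is non-negligible
    edges = [(i, j) for i in range(5) for j in range(i + 1, 5)
             if abs(precision[i, j]) > 1e-2]
    print(edges)   # expected chain edges such as (0, 1), (1, 2), (3, 4)
    ```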

  20. Effect of Graphene with Nanopores on Metal Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Hu; Chen, Xianlang; Wang, Lei

    Porous graphene, which is a novel type of defective graphene, shows excellent potential as a support material for metal clusters. In this work, the stability and electronic structures of metal clusters (Pd, Ir, Rh) supported on pristine graphene and graphene with different sizes of nanopore were investigated by first-principles density functional theory (DFT) calculations. Thereafter, CO adsorption and the oxidation reaction on the Pd-graphene system were chosen to evaluate its catalytic performance. Graphene with a nanopore can strongly stabilize the metal clusters and cause a substantial downshift of the d-band center of the metal clusters, thus decreasing CO adsorption. All binding energies, d-band centers, and adsorption energies show a linear change with the size of the nanopore: a bigger nanopore corresponds to a stronger bond of the metal clusters to the graphene, a lower-lying d-band center, and weaker CO adsorption. By using a suitably sized nanopore, Pd clusters supported on graphene will have similar CO and O2 adsorption abilities, thus leading to superior CO tolerance. The DFT-calculated reaction energy barriers show that graphene with a nanopore is a superior catalyst for the CO oxidation reaction. These properties can play an important role in guiding the preparation of graphene-supported metal catalysts to prevent the diffusion or agglomeration of metal clusters and enhance catalytic performance. This work was supported by the National Basic Research Program of China (973 Program) (2013CB733501) and the National Natural Science Foundation of China (NSFC-21176221, 21136001, 21101137, 21306169, and 91334013). D. Mei acknowledges the support from the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. Computing time was granted by the grand challenge of computational catalysis of the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and by the National Energy Research Scientific Computing Center (NERSC).
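
    The d-band center quoted in the abstract is the first moment of the d-projected density of states, eps_d = integral(E * rho_d(E) dE) / integral(rho_d(E) dE), computed relative to the Fermi level. The sketch below evaluates that moment on a uniform energy grid using a made-up Gaussian DOS rather than the DFT-computed one; the band centers and width are arbitrary and serve only to show how a downshift lowers the moment.

    ```python
    import numpy as np

    def d_band_center(energies, dos):
        """First moment of the d-projected density of states on a uniform grid (eV)."""
        return np.sum(energies * dos) / np.sum(dos)

    E = np.linspace(-10.0, 5.0, 3001)            # energy grid relative to the Fermi level, eV

    def gaussian_dos(center, width=1.2):
        """Toy stand-in for a DFT-projected d DOS."""
        return np.exp(-0.5 * ((E - center) / width) ** 2)

    # Hypothetical d bands for a cluster on pristine vs. nanoporous graphene
    print("pristine  : %.2f eV" % d_band_center(E, gaussian_dos(-1.8)))
    print("with pore : %.2f eV" % d_band_center(E, gaussian_dos(-2.4)))   # downshifted
    ```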

  1. Lockheed L-1011 Test Station on-board in support of the Adaptive Performance Optimization flight res

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This console and its complement of computers, monitors and communications equipment make up the Research Engineering Test Station, the nerve center for a new aerodynamics experiment being conducted by NASA's Dryden Flight Research Center, Edwards, California. The equipment is installed on a modified Lockheed L-1011 Tristar jetliner operated by Orbital Sciences Corp., of Dulles, Va., for Dryden's Adaptive Performance Optimization project. The experiment seeks to improve the efficiency of long-range jetliners by using small movements of the ailerons to improve the aerodynamics of the wing at cruise conditions. About a dozen research flights in the Adaptive Performance Optimization project are planned over the next two to three years. Improving the aerodynamic efficiency should result in equivalent reductions in fuel usage and costs for airlines operating large, wide-bodied jetliners.

  2. Computer Maintenance Operations Center (CMOC), showing duplexed cyber 170-174 computers ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Computer Maintenance Operations Center (CMOC), showing duplexed cyber 170-174 computers - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  3. Control of coherence among the spins of a single electron and the three nearest neighbor {sup 13}C nuclei of a nitrogen-vacancy center in diamond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shimo-Oka, T.; Miwa, S.; Suzuki, Y.

    2015-04-13

    Individual nuclear spins in diamond can be optically detected through hyperfine couplings with the electron spin of a single nitrogen-vacancy (NV) center; such nuclear spins have outstandingly long coherence times. Among the hyperfine couplings in the NV center, the nearest-neighbor {sup 13}C nuclear spins have the largest coupling strength. Nearest-neighbor {sup 13}C nuclear spins therefore have the potential to perform the fastest gate operations, providing the highest fidelity in quantum computing. Herein, we report on the control of coherences in an NV center where all three nearest-neighbor carbons are of the {sup 13}C isotope. Coherences among three and four qubits are generated and analyzed at room temperature.

  4. NASA Center for Computational Sciences: History and Resources

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  5. Center for Computing Research Summer Research Proceedings 2015.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, Andrew Michael; Parks, Michael L.

    2015-12-18

    The Center for Computing Research (CCR) at Sandia National Laboratories organizes a summer student program each year, in coordination with the Computer Science Research Institute (CSRI) and the Cyber Engineering Research Institute (CERI).

  6. Protecting genomic data analytics in the cloud: state of the art and opportunities.

    PubMed

    Tang, Haixu; Jiang, Xiaoqian; Wang, Xiaofeng; Wang, Shuang; Sofia, Heidi; Fox, Dov; Lauter, Kristin; Malin, Bradley; Telenti, Amalio; Xiong, Li; Ohno-Machado, Lucila

    2016-10-13

    The outsourcing of genomic data into public cloud computing settings raises concerns over privacy and security. Significant advancements in secure computation methods have emerged over the past several years, but such techniques need to be rigorously evaluated for their ability to support the analysis of human genomic data in an efficient and cost-effective manner. With respect to public cloud environments, there are concerns about the inadvertent exposure of human genomic data to unauthorized users. In analyses involving multiple institutions, there is additional concern about data being used beyond the agreed research scope and being processed in untrusted computational environments, which may not satisfy institutional policies. To systematically investigate these issues, the NIH-funded National Center for Biomedical Computing iDASH (integrating Data for Analysis, 'anonymization' and SHaring) hosted the second Critical Assessment of Data Privacy and Protection competition to assess the capacity of cryptographic technologies for protecting computation over human genomes in the cloud and promoting cross-institutional collaboration. Data scientists were challenged to design and engineer practical algorithms for secure outsourcing of genome computation tasks in working software, whereby analyses are performed only on encrypted data. They were also challenged to develop approaches to enable secure collaboration on data from genomic studies generated by multiple organizations (e.g., medical centers) to jointly compute aggregate statistics without sharing individual-level records. The results of the competition indicated that secure computation techniques can enable comparative analysis of human genomes, but greater efficiency (in terms of compute time and memory utilization) is needed before they are sufficiently practical for real-world environments.
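
    As a toy illustration of the kind of cross-institutional aggregation described above, additive secret sharing lets several centers compute a joint count without revealing individual contributions. This is a didactic sketch only, not any competition entry; the per-center counts and the field modulus are made up.

        # Toy additive secret sharing: three medical centers jointly compute an
        # aggregate count without revealing their individual counts.
        import random

        PRIME = 2_147_483_647          # shares live in a finite field mod this prime

        def split_into_shares(value, n_parties):
            shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % PRIME)
            return shares

        local_counts = [42, 17, 63]    # hypothetical per-center counts
        n = len(local_counts)

        # Each center splits its count and sends one share to every party.
        all_shares = [split_into_shares(c, n) for c in local_counts]

        # Each party sums the shares it received; only the final recombination
        # reveals the aggregate, never any single center's count.
        partial_sums = [sum(all_shares[i][j] for i in range(n)) % PRIME for j in range(n)]
        aggregate = sum(partial_sums) % PRIME
        print(aggregate)               # 122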

  7. Scalp marking for a craniotomy using a laser pointer during preoperative computed tomographic imaging: technical note.

    PubMed

    Kubo, S; Nakata, H; Sugauchi, Y; Yokota, N; Yoshimine, T

    2000-05-01

    The preoperative localization of superficial intracranial lesions is often necessary for accurate burr hole placement or craniotomy siting. It is not always easy, however, to localize the lesions over the scalp working only from computed tomographic images. We developed a simple method for such localization using a laser pointer during the preoperative computed tomographic examination. The angle of incidence, extending from a point on the scalp to the center of the computed tomographic image, is measured by the software included with the scanner. In the gantry, at the same angle as on the image, a laser is beamed from a handmade projector onto the patient's scalp toward the center of the gantry. The point illuminated on the patient's head corresponds to that on the image. The device and the method are described in detail herein. We applied this technique to mark the area for the craniotomy before surgery in five patients with superficial brain tumors. At the time of surgery, it was confirmed that the tumors were circumscribed precisely. The technique is easy to perform and useful in the preoperative planning for a craniotomy. In addition, the device is easily constructed and inexpensive.
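
    A rough sketch of the geometry behind the technique follows: the angle from the image center to the chosen scalp point on the CT slice is what the laser projector is set to reproduce inside the gantry. The coordinates below are hypothetical, not values from the paper.

        # Hypothetical sketch of the underlying geometry: the angle from the CT
        # image center to a marked scalp point, which the laser projector is
        # then set to reproduce inside the gantry.
        import math

        center_xy = (0.0, 0.0)        # image center (isocenter), in mm
        scalp_xy = (55.0, 78.0)       # point on the scalp chosen on the CT slice

        dx = scalp_xy[0] - center_xy[0]
        dy = scalp_xy[1] - center_xy[1]
        angle_deg = math.degrees(math.atan2(dy, dx))
        print(f"set laser projector to {angle_deg:.1f} degrees")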

  8. Computing with Beowulf

    NASA Technical Reports Server (NTRS)

    Cohen, Jarrett

    1999-01-01

    Parallel computers built out of mass-market parts are cost-effectively performing data processing and simulation tasks. The Supercomputing (now known as "SC") series of conferences celebrated its 10th anniversary last November. While vendors have come and gone, the dominant paradigm for tackling big problems still is a shared-resource, commercial supercomputer. Growing numbers of users needing a cheaper or dedicated-access alternative are building their own supercomputers out of mass-market parts. Such machines are generally called Beowulf-class systems after the 11th century epic. This modern-day Beowulf story began in 1994 at NASA's Goddard Space Flight Center, a laboratory for the Earth and space sciences, where computing managers threw down a gauntlet to develop a $50,000 gigaFLOPS workstation for processing satellite data sets. Soon, Thomas Sterling and Don Becker were working on the Beowulf concept at the Universities Space Research Association (USRA)-run Center of Excellence in Space Data and Information Sciences (CESDIS). Beowulf clusters mix three primary ingredients: commodity personal computers or workstations, low-cost Ethernet networks, and the open-source Linux operating system. One of the larger Beowulfs is Goddard's Highly-parallel Integrated Virtual Environment, or HIVE for short.

  9. RIACS FY2002 Annual Report

    NASA Technical Reports Server (NTRS)

    Leiner, Barry M.; Gross, Anthony R. (Technical Monitor)

    2002-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. Operated by the Universities Space Research Association (a non-profit university consortium), RIACS is located at the NASA Ames Research Center, Moffett Field, California. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in September 2003. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology (IT) Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1) Automated Reasoning for Autonomous Systems; 2) Human-Centered Computing; and 3) High Performance Computing and Networking. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains including aerospace technology, earth science, life sciences, and astrobiology. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  10. PoPLAR: Portal for Petascale Lifescience Applications and Research

    PubMed Central

    2013-01-01

    Background We are focusing specifically on fast data analysis and retrieval in bioinformatics that will have a direct impact on the quality of human health and the environment. The exponential growth of data generated in biology research, from small atoms to big ecosystems, necessitates an increasingly large computational component to perform analyses. Novel DNA sequencing technologies and complementary high-throughput approaches--such as proteomics, genomics, metabolomics, and meta-genomics--drive data-intensive bioinformatics. While individual research centers or universities could once provide for these applications, this is no longer the case. Today, only specialized national centers can deliver the level of computing resources required to meet the challenges posed by rapid data growth and the resulting computational demand. Consequently, we are developing massively parallel applications to analyze the growing flood of biological data and contribute to the rapid discovery of novel knowledge. Methods The efforts of previous National Science Foundation (NSF) projects provided for the generation of parallel modules for widely used bioinformatics applications on the Kraken supercomputer. We have profiled and optimized the code of some of the scientific community's most widely used desktop and small-cluster-based applications, including BLAST from the National Center for Biotechnology Information (NCBI), HMMER, and MUSCLE; scaled them to tens of thousands of cores on high-performance computing (HPC) architectures; made them robust and portable to next-generation architectures; and incorporated these parallel applications in science gateways with a web-based portal. Results This paper will discuss the various developmental stages, challenges, and solutions involved in taking bioinformatics applications from the desktop to petascale with a front-end portal for very-large-scale data analysis in the life sciences. Conclusions This research will help to bridge the gap between the rate of data generation and the speed at which scientists can study this data. The ability to rapidly analyze data at such a large scale is having a significant, direct impact on science achieved by collaborators who are currently using these tools on supercomputers. PMID:23902523

  11. Characterizing output bottlenecks in a supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Bing; Chase, Jeffrey; Dillow, David A

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.
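
    The statistical flavor of such a characterization, summarizing the distribution of delivered bandwidth across sampled intervals, can be sketched as below. This is not the paper's instrumentation; the gamma-distributed samples and the nominal peak are invented for illustration.

        # Minimal sketch (not the paper's instrumentation): summarize the
        # distribution of delivered write bandwidth across sampled intervals.
        import numpy as np

        rng = np.random.default_rng(1)
        # Hypothetical per-interval bandwidth samples (GB/s) from shared storage targets.
        samples = rng.gamma(shape=4.0, scale=2.0, size=1000)

        peak = 40.0                                   # assumed nominal peak bandwidth (GB/s)
        percentiles = np.percentile(samples, [10, 50, 90])
        print("p10/p50/p90 (GB/s):", percentiles)
        print("median as fraction of peak:", percentiles[1] / peak)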

  12. Mechanical Computing Redux: Limitations at the Nanoscale

    NASA Astrophysics Data System (ADS)

    Liu, Tsu-Jae King

    2014-03-01

    Technology solutions for overcoming the energy efficiency limits of nanoscale complementary metal oxide semiconductor (CMOS) technology ultimately will be needed in order to address the growing issue of integrated-circuit chip power density. Off-state leakage current sets a fundamental lower limit in energy per operation for any voltage-level-based digital logic implemented with transistors (CMOS and beyond), which leads to practical limits for device density (i.e. cost) and operating frequency (i.e. system performance). Mechanical switches have zero off-state leakage and hence can overcome this fundamental limit. Contact adhesive force sets a lower limit for the switching energy of a mechanical switch, however, and also directly impacts its performance. This paper will review recent progress toward the development of nano-electro-mechanical relay technology and discuss remaining challenges for realizing the promise of mechanical computing for ultra-low-power computing. Supported by the Center for Energy Efficient Electronics Science (NSF Award 0939514).

  13. LHCb experience with LFC replication

    NASA Astrophysics Data System (ADS)

    Bonifazi, F.; Carbone, A.; Perez, E. D.; D'Apice, A.; dell'Agnello, L.; Duellmann, D.; Girone, M.; Re, G. L.; Martelli, B.; Peco, G.; Ricci, P. P.; Sapunenko, V.; Vagnoni, V.; Vitlacil, D.

    2008-07-01

    Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation, with performance well beyond the LHCb requirements.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobyshev, A.; DeMar, P.; Grigaliunas, V.

    The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, and extrapolate where we anticipate networking technology is heading. Our analysis examines the following areas: the evolution of Tier1 centers to their current state; evolving data center networking models and how they apply to Tier1 centers; the impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers; trends in WAN data movement and the emergence of software-defined WAN network capabilities; and network virtualization.

  15. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets while providing a high degree of accessibility (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all the parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in unifying the description of the computing resources provided by grid sites, supercomputer centers, and cloud computing platforms into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  16. A simple and efficient method for predicting protein-protein interaction sites.

    PubMed

    Higa, R H; Tozzi, C L

    2008-09-23

    Computational methods for predicting protein-protein interaction sites based on structural data are characterized by an accuracy between 70 and 80%. Some experimental studies indicate that only a fraction of the residues, forming clusters in the center of the interaction site, are energetically important for binding. In addition, the analysis of amino acid composition has shown that residues located in the center of the interaction site can be better discriminated from the residues in other parts of the protein surface. In the present study, we implement a simple method to predict interaction site residues exploiting this fact and show that it achieves a very competitive performance compared to other methods using the same dataset and criteria for performance evaluation (success rate of 82.1%).

  17. NASA. Marshall Space Flight Center Hydrostatic Bearing Activities

    NASA Technical Reports Server (NTRS)

    Benjamin, Theodore G.

    1991-01-01

    The basic approach for analyzing hydrostatic bearing flows at the Marshall Space Flight Center (MSFC) is briefly discussed. The Hydrostatic Bearing Team has responsibility for assessing and evaluating flow codes; evaluating friction, ignition, and galling effects; evaluating wear; and performing tests. The Office of Aerospace and Exploration Technology Turbomachinery Seals Tasks consist of tests and analysis. The MSFC in-house analyses utilize one-dimensional bulk-flow codes. Computational fluid dynamics (CFD) analysis is used to enhance understanding of bearing flow physics or to perform parametric analyses that are outside the bulk flow database. As long as the bulk flow codes are accurate enough for most needs, they will be utilized accordingly and will be supported by CFD analysis on an as-needed basis.

  18. Efficient workload management in geographically distributed data centers leveraging autoregressive models

    NASA Astrophysics Data System (ADS)

    Altomare, Albino; Cesario, Eugenio; Mastroianni, Carlo

    2016-10-01

    The opportunity of using Cloud resources on a pay-as-you-go basis and the availability of powerful data centers and high bandwidth connections are speeding up the success and popularity of Cloud systems, making on-demand computing a common practice for enterprises and scientific communities. The reasons for this success include natural business distribution, the need for high availability and disaster tolerance, the sheer size of their computational infrastructure, and/or the desire to provide uniform access times to the infrastructure from widely distributed client sites. Nevertheless, the expansion of large data centers is resulting in a huge rise in the electrical power consumed by hardware facilities and cooling systems. The geographical distribution of data centers is becoming an opportunity: the variability of electricity prices, environmental conditions and client requests, both from site to site and over time, makes it possible to intelligently and dynamically (re)distribute the computational workload and achieve business goals as diverse as the reduction of costs, energy consumption and carbon emissions, the satisfaction of performance constraints, the adherence to Service Level Agreements established with users, etc. This paper proposes an approach that helps to achieve the business goals established by the data center administrators. The workload distribution is driven by a fitness function, evaluated for each data center, which weighs key parameters related to business objectives, among them the price of electricity, the carbon emission rate, and the balance of load among the data centers. For example, energy costs can be reduced by using a "follow the moon" approach, e.g. by migrating the workload to data centers where the price of electricity is lower at that time. Our approach uses data about the historical usage of the data centers and about environmental conditions to predict, with the help of autoregressive models, the values of the parameters of the fitness function, and then to appropriately tune the weights assigned to the parameters in accordance with the business goals. Preliminary experimental results, presented in this paper, show encouraging benefits.
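
    The idea of a weighted fitness function fed by autoregressive forecasts can be sketched roughly as follows. The weights, site data, and the simple least-squares AR(1) forecaster below are assumptions for illustration, not the paper's model.

        # Sketch of the idea (weights and figures are hypothetical): forecast each
        # site's electricity price with a simple autoregressive model, then rank
        # sites by a weighted fitness function before (re)distributing workload.
        import numpy as np

        def ar1_forecast(series):
            """One-step-ahead forecast from a least-squares AR(1) fit (no intercept)."""
            x, y = series[:-1], series[1:]
            phi = np.dot(x, y) / np.dot(x, x)
            return phi * series[-1]

        sites = {
            # name: (price history $/kWh, carbon rate kgCO2/kWh, current load fraction)
            "dc_east": (np.array([0.11, 0.12, 0.13, 0.12]), 0.45, 0.80),
            "dc_west": (np.array([0.09, 0.10, 0.09, 0.08]), 0.30, 0.55),
        }

        weights = {"price": 0.5, "carbon": 0.3, "load": 0.2}   # tuned to business goals

        def fitness(price_hist, carbon, load):
            predicted_price = ar1_forecast(price_hist)
            # Lower is better for every term in this toy formulation.
            return (weights["price"] * predicted_price
                    + weights["carbon"] * carbon
                    + weights["load"] * load)

        best = min(sites, key=lambda s: fitness(*sites[s]))
        print("route new workload to:", best)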

  19. From cosmos to connectomes: the evolution of data-intensive science.

    PubMed

    Burns, Randal; Vogelstein, Joshua T; Szalay, Alexander S

    2014-09-17

    The analysis of data requires computation: originally by hand and more recently by computers. Different models of computing are designed and optimized for different kinds of data. In data-intensive science, the scale and complexity of data exceeds the comfort zone of local data stores on scientific workstations. Thus, cloud computing emerges as the preeminent model, utilizing data centers and high-performance clusters, enabling remote users to access and query subsets of the data efficiently. We examine how data-intensive computational systems originally built for cosmology, the Sloan Digital Sky Survey (SDSS), are now being used in connectomics, at the Open Connectome Project. We list lessons learned and outline the top challenges we expect to face. Success in computational connectomics would drastically reduce the time between idea and discovery, as SDSS did in cosmology. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Assess/Mitigate Risk through the Use of Computer-Aided Software Engineering (CASE) Tools

    NASA Technical Reports Server (NTRS)

    Aguilar, Michael L.

    2013-01-01

    The NASA Engineering and Safety Center (NESC) was requested to perform an independent assessment of the mitigation of the Constellation Program (CxP) Risk 4421 through the use of computer-aided software engineering (CASE) tools. With the cancellation of the CxP, the assessment goals were modified to capture lessons learned and best practices in the use of CASE tools. The assessment goal was to prepare the next program for the use of these CASE tools. The outcome of the assessment is contained in this document.

  1. New project to support scientific collaboration electronically

    NASA Astrophysics Data System (ADS)

    Clauer, C. R.; Rasmussen, C. E.; Niciejewski, R. J.; Killeen, T. L.; Kelly, J. D.; Zambre, Y.; Rosenberg, T. J.; Stauning, P.; Friis-Christensen, E.; Mende, S. B.; Weymouth, T. E.; Prakash, A.; McDaniel, S. E.; Olson, G. M.; Finholt, T. A.; Atkins, D. E.

    A new multidisciplinary effort is linking research in the upper atmospheric and space, computer, and behavioral sciences to develop a prototype electronic environment for conducting team science worldwide. A real-world electronic collaboration testbed has been established to support scientific work centered around the experimental operations being conducted with instruments from the Sondrestrom Upper Atmospheric Research Facility in Kangerlussuaq, Greenland. Such group computing environments will become an important component of the National Information Infrastructure initiative, which is envisioned as the high-performance communications infrastructure to support national scientific research.

  2. A practical VEP-based brain-computer interface.

    PubMed

    Wang, Yijun; Wang, Ruiping; Gao, Xiaorong; Hong, Bo; Gao, Shangkai

    2006-06-01

    This paper introduces the development of a practical brain-computer interface at Tsinghua University. The system uses frequency-coded steady-state visual evoked potentials to determine the gaze direction of the user. To ensure more universal applicability of the system, approaches for reducing the effect of user variation on system performance have been proposed. The information transfer rate (ITR) has been evaluated both in the laboratory and at the Rehabilitation Center of China. The system has proven applicable to > 90% of people, with a high ITR, in living environments.
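
    The ITR figure reported for such BCIs is commonly computed with the standard Wolpaw formula; a worked sketch follows. The target count, accuracy, and selection time below are hypothetical, not figures from this paper.

        # Worked sketch of the standard information transfer rate (ITR) formula
        # often used to evaluate BCIs; the numbers below are hypothetical.
        import math

        n_targets = 12        # number of frequency-coded targets
        accuracy = 0.90       # classification accuracy
        seconds_per_selection = 4.0

        bits_per_selection = (math.log2(n_targets)
                              + accuracy * math.log2(accuracy)
                              + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))
        itr_bits_per_min = bits_per_selection * 60.0 / seconds_per_selection
        print(f"ITR ~ {itr_bits_per_min:.1f} bits/min")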

  3. Computer-Aided Diagnosis of Breast Cancer: A Multi-Center Demonstrator

    DTIC Science & Technology

    1998-10-01

    An Artificial Neural Network (ANN) approach to computer-aided diagnosis of breast cancer from mammographic findings is described. An ANN has been developed to provide support for the clinical decision to perform breast biopsy. The system is designed to aid in the decision to biopsy those patients who have suspicious mammographic findings. The decision to biopsy can be viewed as a two-stage process: 1) the mammographer views the mammogram and determines the presence or absence of image features such as calcifications and masses, 2) the presence and description of these features

  4. Solar heating and cooling system installed at RKL Controls Company, Lumberton, New Jersey

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The final results of the design and operation of a computer-controlled, solar-heated and -cooled 40,000-square-foot manufacturing building, sales office, and computer control center/display room are summarized. The system description, test data, major problems and resolutions, performance, operation and maintenance manual, equipment manufacturers' literature, and as-built drawings are presented. The solar system is composed of 6,000 square feet of flat plate collectors, an external above-ground storage subsystem, controls, an absorption chiller, heat recovery, and a cooling tower.

  5. Merging the Machines of Modern Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Laura; Collins, Jim

    Two recent projects have harnessed supercomputing resources at the US Department of Energy’s Argonne National Laboratory in a novel way to support major fusion science and particle collider experiments. Using leadership computing resources, one team ran fine-grid analysis of real-time data to make near-real-time adjustments to an ongoing experiment, while a second team is working to integrate Argonne’s supercomputers into the Large Hadron Collider/ATLAS workflow. Together these efforts represent a new paradigm of the high-performance computing center as a partner in experimental science.

  6. HPC Annual Report 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennig, Yasmin

    Sandia National Laboratories has a long history of significant contributions to the high performance computing community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraFLOP barrier, propelling us to the international spotlight. Our advanced simulation and modeling capabilities have been integral in high consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing (HPC) capabilities to gain a tremendous competitive edge in the marketplace. As part of our continuing commitment to providing modern computing infrastructure and systems in support of Sandia missions, we made a major investment in expanding Building 725 to serve as the new home of HPC systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) Prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operation cost effectiveness (power and innovative cooling technologies), user environment and application code performance.

  7. Investigating power capping toward energy-efficient scientific applications: Investigating Power Capping toward Energy-Efficient Scientific Applications

    DOE PAGES

    Haidar, Azzam; Jagode, Heike; Vaccaro, Phil; ...

    2018-03-22

    The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.
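
    The basic trade-off such a study quantifies is that a lower cap can lengthen runtime yet still reduce total energy, depending on the kernel's computational intensity. The figures below are hypothetical, purely to show the arithmetic.

        # Hypothetical numbers illustrating the trade-off the study measures:
        # energy per run is runtime times average power under each cap.
        runs = [
            # (label, measured runtime in seconds, average power in watts)
            ("no cap", 120.0, 280.0),
            ("200 W cap", 135.0, 195.0),
            ("150 W cap", 190.0, 148.0),
        ]

        for label, runtime_s, avg_power_w in runs:
            energy_kj = runtime_s * avg_power_w / 1000.0
            print(f"{label:>10}: {runtime_s:6.1f} s, {energy_kj:6.1f} kJ")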

  9. Gaussian Radial Basis Function for Efficient Computation of Forest Indirect Illumination

    NASA Astrophysics Data System (ADS)

    Abbas, Fayçal; Babahenini, Mohamed Chaouki

    2018-06-01

    Global illumination of natural scenes such as forests in real time is one of the most complex problems to solve, because of the multiple inter-reflections between the light and the materials of the objects composing the scene. The major problem that arises is visibility computation. Visibility is computed for the whole set of leaves visible from the center of a given leaf; given the enormous number of leaves present in a tree, this computation, performed for each leaf, greatly reduces performance. We describe a new approach that approximates visibility queries and proceeds in two steps. The first step generates a point cloud representing the foliage. We assume that the point cloud is composed of two non-linearly separable classes (visible, not visible). The second step classifies the point cloud by applying a Gaussian radial basis function, which measures the similarity, in terms of distance, between each leaf and a landmark leaf. This approximates the visibility queries and extracts the leaves that will be used to calculate the amount of indirect illumination exchanged between neighboring leaves. Our approach efficiently treats the light exchanges in a forest scene, allows fast computation, and produces images of good visual quality, all while taking advantage of the immense computational power of the GPU.
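
    A minimal sketch of the Gaussian radial basis function similarity mentioned above, scoring leaves against a landmark leaf, is given below. The leaf coordinates, bandwidth sigma, and acceptance threshold are assumptions, not values from the paper.

        # Minimal sketch of a Gaussian radial basis function used as a
        # distance-based similarity between each leaf and a landmark leaf.
        import numpy as np

        rng = np.random.default_rng(7)
        leaf_positions = rng.uniform(0.0, 10.0, size=(500, 3))   # toy foliage point cloud
        landmark = np.array([5.0, 5.0, 5.0])
        sigma = 2.0

        dist_sq = np.sum((leaf_positions - landmark) ** 2, axis=1)
        similarity = np.exp(-dist_sq / (2.0 * sigma ** 2))

        # Leaves with high similarity are kept as candidates for the indirect
        # illumination exchange; the rest are culled from the visibility query.
        candidates = np.flatnonzero(similarity > 0.5)
        print(len(candidates), "candidate leaves")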

  10. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    PubMed Central

    Pinthong, Watthanai; Muangruen, Panya

    2016-01-01

    Development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high-performance computer (HPC) is required for the analysis and interpretation. However, an HPC is expensive and difficult to access. Other means have been developed to allow researchers to acquire the power of HPC without the need to purchase and maintain one, such as cloud computing services and grid computing systems. In this study, we implemented grid computing in a computer training center environment using Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize the HPC. Fifty desktop computers were used to set up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system. The result and processing time were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, the HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST by BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation by BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software. PMID:27547555

  11. Enabling High-performance Interactive Geoscience Data Analysis Through Data Placement and Movement Optimization

    NASA Astrophysics Data System (ADS)

    Zhu, F.; Yu, H.; Rilee, M. L.; Kuo, K. S.; Yu, L.; Pan, Y.; Jiang, H.

    2017-12-01

    Since the establishment of data archive centers and the standardization of file formats, scientists have been required to search metadata catalogs for the data they need and download the data files to their local machines to carry out data analysis. This approach has facilitated data discovery and access for decades, but it inevitably leads to data transfer from data archive centers to scientists' computers through low-bandwidth Internet connections. Data transfer becomes a major performance bottleneck in such an approach. Combined with generally constrained local compute and storage resources, this limits the extent of scientists' studies and deprives them of timely outcomes. Thus, this conventional approach is not scalable with respect to both the volume and variety of geoscience data. A much more viable solution is to couple analysis and storage systems to minimize data transfer. In our study, we compare loosely coupled approaches (exemplified by Spark and Hadoop) and tightly coupled approaches (exemplified by parallel distributed database management systems, e.g., SciDB). In particular, we investigate the optimization of data placement and movement to effectively tackle the variety challenge, and broaden the use of parallelization to address the volume challenge. Our goal is to enable high-performance interactive analysis for a good portion of geoscience data analysis exercises. We show that tightly coupled approaches can concentrate data traffic between local storage systems and compute units, thereby optimizing bandwidth utilization to achieve better throughput. Based on our observations, we develop a geoscience data analysis system that tightly couples analysis engines with storage, giving them direct access to a detailed map of data partition locations. Through an innovative data partitioning and distribution scheme, our system has demonstrated scalable and interactive performance in real-world geoscience data analysis applications.
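
    The "map of data partition locations" idea can be illustrated with a toy routing sketch: each query is sent to the node that already holds the relevant partition instead of moving the data. The tiling rule, node names, and coordinates below are hypothetical.

        # Toy sketch of routing computation to data using a partition-location map,
        # rather than downloading files to the analyst's machine.
        def partition_id(lat, lon, tile_deg=10):
            """Assign a 10-degree lat/lon tile as the partition key."""
            return (int(lat // tile_deg), int(lon // tile_deg))

        # Map from partition key to the storage/compute node holding that chunk.
        partition_map = {
            (3, -8): "node-07",
            (3, -7): "node-02",
            (4, -8): "node-11",
        }

        def route_query(lat, lon):
            key = partition_id(lat, lon)
            node = partition_map.get(key)
            return f"run analysis for tile {key} on {node}" if node else "partition not loaded"

        print(route_query(38.5, -77.0))   # -> run analysis for tile (3, -8) on node-07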

  12. 78 FR 45513 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-29

    ...; Computer Matching Program AGENCY: Defense Manpower Data Center (DMDC), DoD. ACTION: Notice of a Computer... individual's privacy, and would result in additional delay in determining eligibility and, if applicable, the... Defense. NOTICE OF A COMPUTER MATCHING PROGRAM AMONG THE DEFENSE MANPOWER DATA CENTER, THE DEPARTMENT OF...

  13. 20. SITE BUILDING 002 SCANNER BUILDING IN COMPUTER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    20. SITE BUILDING 002 - SCANNER BUILDING - IN COMPUTER ROOM LOOKING AT "CONSOLIDATED MAINTENANCE OPERATIONS CENTER" JOB AREA AND OPERATION WORK CENTER. TASKS INCLUDE RADAR MAINTENANCE, COMPUTER MAINTENANCE, CYBER COMPUTER MAINTENANCE AND RELATED ACTIVITIES. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  14. The new Venezuelan national control center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beluche, C.M.

    1984-07-01

    This paper describes the current status of the Venezuelan electric power system, the different operational areas involved, and the interconnected system. Due to the development of the power system, and in order to achieve more reliable and economic operation, the need was established to implement a new supervisory system for the Interconnected System Control Center. The system for the new control center of the Oficina de Operacion de Sistemas Interconectados (OPSIS) is also described. OPSIS will monitor the Bulk Transmission Network (BTN) and will exchange information and control with the Regional Dispatch Centers through the computer network. The Regional Dispatch Centers (RDCs) shall have the responsibility to carry out the control orders. The bulk transmission system consists of the portion of the system from 800 kV to 230 kV. A total of six RDCs will be included in the system.

  15. Lessons learned: mobile device encryption in the academic medical center.

    PubMed

    Kusche, Kristopher P

    2009-01-01

    The academic medical center is faced with the unique challenge of meeting the multi-faceted needs of both a modern healthcare organization and an academic institution. The need for security to protect patient information must be balanced against the academic freedoms expected in the college setting. The Albany Medical Center, consisting of the Albany Medical College and the Albany Medical Center Hospital, was challenged with implementing a solution that would preserve the availability, integrity and confidentiality of business, patient and research data stored on mobile devices. To solve this problem, Albany Medical Center implemented a mobile encryption suite across the enterprise. Such an implementation comes with complexities, from performance across multiple generations of computers and operating systems, to diversity of application use modes and end user adoption, all of which requires thoughtful policy and standards creation, understanding of regulations, and a willingness and ability to work through such diverse needs.

  16. Cloud Computing Applications in Support of Earth Science Activities at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Limaye, Ashutosh S.; Srikishen, Jayanthi

    2011-01-01

    Currently, the NASA Nebula Cloud Computing Platform is available to Agency personnel in a pre-release status as the system undergoes a formal operational readiness review. Over the past year, two projects within the Earth Science Office at NASA Marshall Space Flight Center have been investigating the performance and value of Nebula's "Infrastructure as a Service", or "IaaS" concept and applying cloud computing concepts to advance their respective mission goals. The Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique NASA satellite observations and weather forecasting capabilities for use within the operational forecasting community through partnerships with NOAA's National Weather Service (NWS). SPoRT has evaluated the performance of the Weather Research and Forecasting (WRF) model on virtual machines deployed within Nebula and used Nebula instances to simulate local forecasts in support of regional forecast studies of interest to select NWS forecast offices. In addition to weather forecasting applications, rapidly deployable Nebula virtual machines have supported the processing of high resolution NASA satellite imagery to support disaster assessment following the historic severe weather and tornado outbreak of April 27, 2011. Other modeling and satellite analysis activities are underway in support of NASA's SERVIR program, which integrates satellite observations, ground-based data and forecast models to monitor environmental change and improve disaster response in Central America, the Caribbean, Africa, and the Himalayas. Leveraging SPoRT's experience, SERVIR is working to establish a real-time weather forecasting model for Central America. Other modeling efforts include hydrologic forecasts for Kenya, driven by NASA satellite observations and reanalysis data sets provided by the broader meteorological community. Forecast modeling efforts are supplemented by short-term forecasts of convective initiation, determined by geostationary satellite observations processed on virtual machines powered by Nebula.

  17. Design for Run-Time Monitor on Cloud Computing

    NASA Astrophysics Data System (ADS)

    Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring the system status change, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design the Run-Time Monitor (RTM), which is system software to monitor application behavior at run-time, analyze the collected information, and optimize resources on cloud computing. RTM monitors application software through library instrumentation as well as underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
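
    An illustrative monitoring loop in the same spirit (not the paper's RTM) can be built from ordinary utilization counters; the sketch below uses the third-party psutil package and a made-up reconfiguration threshold.

        # Illustrative run-time monitoring loop (not the paper's RTM): sample CPU
        # and memory utilization and flag when a reconfiguration threshold is crossed.
        import time
        import psutil

        CPU_THRESHOLD = 85.0      # percent; hypothetical trigger for re-provisioning

        for _ in range(5):
            cpu = psutil.cpu_percent(interval=1.0)       # averaged over 1 second
            mem = psutil.virtual_memory().percent
            print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%")
            if cpu > CPU_THRESHOLD:
                print("-> high load: request additional virtual machines")
            time.sleep(1.0)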

  18. Human computer interface guide, revision A

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Human Computer Interface Guide, SSP 30540, is a reference document for the information systems within the Space Station Freedom Program (SSFP). The Human Computer Interface Guide (HCIG) provides guidelines for the design of computer software that affects human performance, specifically, the human-computer interface. This document contains an introduction and subparagraphs on SSFP computer systems, users, and tasks; guidelines for interactions between users and the SSFP computer systems; human factors evaluation and testing of the user interface system; and example specifications. The contents of this document are intended to be consistent with the tasks and products to be prepared by NASA Work Package Centers and SSFP participants as defined in SSP 30000, Space Station Program Definition and Requirements Document. The Human Computer Interface Guide shall be implemented on all new SSFP contractual and internal activities and shall be included in any existing contracts through contract changes. This document is under the control of the Space Station Control Board, and any changes or revisions will be approved by the deputy director.

  19. Exploring Midwives' Need and Intention to Adopt Electronic Integrated Antenatal Care

    PubMed Central

    Markam, Hosizah; Hochheiser, Harry; Kuntoro, Kuntoro; Notobroto, Hari Basuki

    2018-01-01

    Documentation requirements for the Indonesian integrated antenatal care (ANC) program suggest the need for electronic systems to address gaps in existing paper documentation practices. Our goals were to quantify midwives' documentation completeness in a primary healthcare center, understand documentation challenges, develop a tool, and assess intention to use the tool. We analyzed existing ANC records in a primary healthcare center in Bangkalan, East Java, and conducted interviews with stakeholders to understand needs for an electronic system in support of ANC. Development of the web-based Electronic Integrated ANC (e-iANC) system used the System Development Life Cycle method. Training on the use of the system was held in the computer laboratory for 100 midwives chosen from four primary healthcare centers in each of five regions. The Unified Theory of Acceptance and Use of Technology (UTAUT) questionnaire was used to assess their intention to adopt e-iANC. The midwives' intention to adopt e-iANC was significantly influenced by performance expectancy, effort expectancy and facilitating conditions. Age, education level, and computer literacy did not significantly moderate the effects of performance expectancy and effort expectancy on adoption intention. The UTAUT results indicated that the factors that might influence intention to adopt e-iANC are potentially addressable. Results suggest that e-iANC might well be accepted by midwives. PMID:29618961

  20. Multimode and single-mode fibers for data center and high-performance computing applications

    NASA Astrophysics Data System (ADS)

    Bickham, Scott R.

    2016-03-01

    Data center (DC) and high performance computing (HPC) applications have traditionally used a combination of copper, multimode fiber and single-mode fiber interconnects with relative percentages that depend on factors such as the line rate, reach and connectivity costs. The balance between these transmission media has increasingly shifted towards optical fiber due to the reach constraints of copper at data rates of 10 Gb/s and higher. The percentage of single-mode fiber deployed in the DC has also grown slightly since 2014, coinciding with the emergence of mega DCs with extended distance needs beyond 100 m. This trend will likely continue in the next few years as DCs expand their capacity from 100G to 400G, increase the physical size of their facilities and begin to utilize silicon-photonics transceiver technology. However, there is still a need for low-cost, high-density connectivity, and this is sustaining the deployment of multimode fiber for links <= 100 m. In this paper, we discuss options for single-mode and multimode fibers in DCs and HPCs and introduce a reduced-diameter multimode fiber concept which provides intra- and inter-rack connectivity as well as compatibility with silicon-photonic transceivers operating at 1310 nm. We also discuss the trade-offs between single-mode fiber attributes such as bend-insensitivity, attenuation and mode field diameter and their roles in capacity and connectivity in data centers.

  1. Modeling Real-Time Coordination of Distributed Expertise and Event Response in NASA Mission Control Center Operations

    NASA Astrophysics Data System (ADS)

    Onken, Jeffrey

    This dissertation introduces a multidisciplinary framework for enabling future research and analysis of alternatives for control centers for real-time operations of safety-critical systems. The multidisciplinary framework integrates functional and computational models that describe the dynamics in fundamental concepts of previously disparate engineering and psychology research disciplines, such as group performance and processes, supervisory control, situation awareness, events and delays, and expertise. The application in this dissertation is the real-time operations within the NASA Mission Control Center in Houston, TX. This dissertation operationalizes the framework into a model and simulation, which simulates the functional and computational models in the framework according to user-configured scenarios for a NASA human-spaceflight mission. The model and simulation generates data according to the effectiveness of the mission-control team in supporting the completion of mission objectives and detecting, isolating, and recovering from anomalies. Accompanying the multidisciplinary framework is a proof of concept, which demonstrates the feasibility of such a framework. The proof of concept demonstrates that variability occurs where expected based on the models. The proof of concept also demonstrates that the data generated from the model and simulation are useful for analyzing and comparing MCC configuration alternatives, because an investigator can give a diverse set of scenarios to the simulation and the output can be compared in detail to inform decisions about the effect of MCC configurations on mission operations performance.

  2. Computer simulation of multiple pilots flying a modern high performance helicopter

    NASA Technical Reports Server (NTRS)

    Zipf, Mark E.; Vogt, William G.; Mickle, Marlin H.; Hoelzeman, Ronald G.; Kai, Fei; Mihaloew, James R.

    1988-01-01

    A computer simulation of a human response pilot mechanism within the flight control loop of a high-performance modern helicopter is presented. A human response mechanism, implemented by a low order, linear transfer function, is used in a decoupled single variable configuration that exploits the dominant vehicle characteristics by associating cockpit controls and instrumentation with specific vehicle dynamics. Low order helicopter models obtained from evaluations of the time and frequency domain responses of a nonlinear simulation model, provided by NASA Lewis Research Center, are presented and considered in the discussion of the pilot development. Pilot responses and reactions to test maneuvers are presented and discussed. Higher level implementations, using the pilot mechanisms, are discussed and considered for use in a comprehensive control structure.

  3. Comparison of Analytical Predictions and Experimental Results for a Dual Brayton Power System (Discussion on Test Hardware and Computer Model for a Dual Brayton System)

    NASA Technical Reports Server (NTRS)

    Johnson, Paul K.

    2007-01-01

    NASA Glenn Research Center (GRC) contracted Barber-Nichols of Arvada, CO, to construct a dual Brayton power conversion system for use as a hardware proof of concept and to validate results from a computational code known as the Closed Cycle System Simulation (CCSS). Initial checkout tests were performed at Barber-Nichols to ready the system for delivery to GRC. This presentation describes the system hardware components and lists the types of checkout tests performed, along with a couple of issues encountered while conducting the tests. A description of the CCSS model is also presented. The checkout tests did not focus on generating data; therefore, no test data or model analyses are presented.

  4. Center of Excellence for Geospatial Information Science research plan 2013-18

    USGS Publications Warehouse

    Usery, E. Lynn

    2013-01-01

    The U.S. Geological Survey Center of Excellence for Geospatial Information Science (CEGIS) was created in 2006 and since that time has provided research primarily in support of The National Map. The presentations and publications of the CEGIS researchers document the research accomplishments that include advances in electronic topographic map design, generalization, data integration, map projections, sea level rise modeling, geospatial semantics, ontology, user-centered design, volunteer geographic information, and parallel and grid computing for geospatial data from The National Map. A research plan spanning 2013–18 has been developed extending the accomplishments of the CEGIS researchers and documenting new research areas that are anticipated to support The National Map of the future. In addition to extending the 2006–12 research areas, the CEGIS research plan for 2013–18 includes new research areas in data models, geospatial semantics, high-performance computing, volunteered geographic information, crowdsourcing, social media, data integration, and multiscale representations to support the Three-Dimensional Elevation Program (3DEP) and The National Map of the future of the U.S. Geological Survey.

  5. Running climate model on a commercial cloud computing environment: A case study using Community Earth System Model (CESM) on Amazon AWS

    NASA Astrophysics Data System (ADS)

    Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock

    2017-01-01

    The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations on a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Service (AWS) EC2, the cloud computing environment offered by Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
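
    The speedup and parallel-efficiency arithmetic behind such a scaling discussion can be sketched as follows; the wall-clock hours below are made up for illustration and are not the study's measurements.

        # Sketch of the speedup/efficiency arithmetic behind the scaling discussion.
        wall_clock_hours = {16: 10.0, 32: 5.4, 64: 3.0, 128: 2.9}   # hypothetical
        base_cores = 16

        for cores, hours in sorted(wall_clock_hours.items()):
            speedup = wall_clock_hours[base_cores] / hours
            efficiency = speedup / (cores / base_cores)
            print(f"{cores:4d} cores: speedup {speedup:4.1f}x, efficiency {efficiency:4.0%}")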

  6. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Shujia; Duffy, Daniel; Clune, Thomas

    The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25 percent of total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.
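
    The port itself was hand-written C with Cell SPE intrinsics; purely as a conceptual illustration of the property being exploited (independent atmospheric columns that can be advanced in lockstep, so one SIMD instruction operates on one element from each of several columns), here is a small NumPy sketch. The column and layer counts and the toy kernel are assumptions for illustration, not GEOS-5 code.

```python
import numpy as np

N_LAYERS, N_COLUMNS = 72, 4   # illustrative sizes only, not GEOS-5 values

def column_kernel_scalar(col):
    """Toy stand-in for a column-physics kernel, one column at a time."""
    out = np.empty_like(col)
    out[0] = col[0]
    for k in range(1, len(col)):
        out[k] = 0.5 * col[k] + 0.25 * col[k - 1]
    return out

def column_kernel_lockstep(cols):
    """Same kernel applied to all columns in lockstep: each statement touches
    one layer of every column at once, the vectorized analogue of manually
    SIMDizing four independent columns."""
    out = np.empty_like(cols)
    out[0, :] = cols[0, :]
    for k in range(1, cols.shape[0]):
        out[k, :] = 0.5 * cols[k, :] + 0.25 * cols[k - 1, :]
    return out

cols = np.random.rand(N_LAYERS, N_COLUMNS)
lockstep = column_kernel_lockstep(cols)
reference = np.stack([column_kernel_scalar(cols[:, j]) for j in range(N_COLUMNS)], axis=1)
assert np.allclose(lockstep, reference)
```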

  7. Optical information processing at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Bualat, Maria G.; Cho, Young C.; Downie, John D.; Gary, Charles K.; Ma, Paul W.; Ozcan, Meric; Pryor, Anna H.; Spirkovska, Lilly

    1993-01-01

    The combination of analog optical processors with digital electronic systems offers the potential of tera-OPS computational performance, while often requiring less power and weight relative to all-digital systems. NASA is working to develop and demonstrate optical processing techniques for on-board, real time science and mission applications. Current research areas and applications under investigation include optical matrix processing for space structure vibration control and the analysis of Space Shuttle Main Engine plume spectra, optical correlation-based autonomous vision for robotic vehicles, analog computation for robotic path planning, free-space optical interconnections for information transfer within digital electronic computers, and multiplexed arrays of fiber optic interferometric sensors for acoustic and vibration measurements.

  8. CNSFV code development, virtual zone Navier-Stokes computations of oscillating control surfaces and computational support of the laminar flow supersonic wind tunnel

    NASA Technical Reports Server (NTRS)

    Klopfer, Goetz H.

    1993-01-01

    The work performed during the past year on this cooperative agreement covered two major areas and two lesser ones. The two major items included further development and validation of the Compressible Navier-Stokes Finite Volume (CNSFV) code and providing computational support for the Laminar Flow Supersonic Wind Tunnel (LFSWT). The two lesser items involve a Navier-Stokes simulation of an oscillating control surface at transonic speeds and improving the basic algorithm used in the CNSFV code for faster convergence rates and more robustness. The work done in all four areas is in support of the High Speed Research Program at NASA Ames Research Center.

  9. UTDallas Offline Computing System for B Physics with the Babar Experiment at SLAC

    NASA Astrophysics Data System (ADS)

    Benninger, Tracy L.

    1998-10-01

    The University of Texas at Dallas High Energy Physics group is building a high performance, large storage computing system for B physics research with the BaBar experiment ("factory") at the Stanford Linear Accelerator Center. The goal of this system is to analyze one terabyte of complex Event Store data from BaBar in one to two days. The foundation of the computing system is a Sun E6000 Enterprise multiprocessor system, with additions of a Sun StorEdge L1800 Tape Library, a Sun Workstation for processing batch jobs, staging disks and interface cards. The design considerations, current status, projects underway, and possible upgrade paths will be discussed.

  10. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  11. Simulation Packages Expand Aircraft Design Options

    NASA Technical Reports Server (NTRS)

    2013-01-01

    In 2001, NASA released a new approach to computational fluid dynamics that allows users to perform automated analysis on complex vehicle designs. In 2010, Palo Alto, California-based Desktop Aeronautics acquired a license from Ames Research Center to sell the technology. Today, the product assists organizations in the design of subsonic aircraft, space planes, spacecraft, and high speed commercial jets.

  12. Multidimensional computer simulation of Stirling cycle engines

    NASA Technical Reports Server (NTRS)

    Hall, Charles A.; Porsching, Thomas A.

    1992-01-01

    This report summarizes the activities performed under NASA-Grant NAG3-1097 during 1991. During that period, work centered on the following tasks: (1) to investigate more effective solvers for ALGAE; (2) to modify the plotting package for ALGAE; and (3) to validate ALGAE by simulating oscillating flow problems similar to those studied by Kurzweg and Ibrahim.

  13. Technical Assessment: Integrated Photonics

    DTIC Science & Technology

    2015-10-01

    in global internet protocol traffic as a function of time by local access technology. Photonics continues to play a critical role in enabling this... communication networks. This has enabled services like the internet, high performance computing, and power-efficient large-scale data centers. The... signal processing, quantum information science, and optics for free space applications. However, major obstacles challenge the implementation of

  14. Steam Turbines

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Turbonetics Energy, Inc.'s steam turbines are used as power generating systems in the oil and gas, chemical, pharmaceuticals, metals and mining, and pulp and paper industries. The Turbonetics line benefited from the use of NASA research data on radial inflow steam turbines, from company contact with personnel of Lewis Research Center, and from the use of Lewis-developed computer programs to determine the performance characteristics of turbines.

  15. MERCATOR: Methods and Realization for Control of the Attitude and the Orbit of spacecraft

    NASA Technical Reports Server (NTRS)

    Tavernier, Gilles; Campan, Genevieve

    1993-01-01

    Since 1974, CNES has been involved in geostationary positioning. Among the different entities participating in operations and their preparation, the Flight Dynamics Center (FDC) is in charge of performing the following tasks: orbit determination; attitude determination; computation, monitoring, and calibration of orbit maneuvers; computation, monitoring, and calibration of attitude maneuvers; and operational predictions. In order to fulfill this mission, the FDC receives telemetry from the satellite and localization measurements from ground stations (e.g., CNES, NASA, INTELSAT). These data are processed by space dynamics programs integrated in the MERCATOR system, which is run on SUN workstations (UNIX O.S.). The main features of MERCATOR are redundancy, modularity, and flexibility: an efficient, flexible, and user-friendly man-machine interface, and four identical SUN workstations redundantly linked in an Ethernet network. Each workstation can perform all the tasks from data acquisition to the dissemination of computation results through a video network. A team of four engineers can handle the space mechanics aspects of a complete geostationary positioning, from the injection into a transfer orbit to the final maneuvers in the station-keeping window. MERCATOR has been or is to be used for operations related to more than ten geostationary positionings. Initially developed for geostationary satellites, MERCATOR's methodology was also used for satellite control centers and can be applied to a wide range of satellites and to future manned missions.

  16. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Potok, Thomas E.; Jones, Todd

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10 to 20+ year) cybersecurity fundamental basic research and development challenges, strategies and roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.

  17. Performance evaluation of the Engineering Analysis and Data Systems (EADS) 2

    NASA Technical Reports Server (NTRS)

    Debrunner, Linda S.

    1994-01-01

    The Engineering Analysis and Data System II (EADS II) (1) was installed in March 1993 to provide high performance computing for science and engineering at Marshall Space Flight Center (MSFC). EADS II increased the computing capabilities over the existing EADS facility in the areas of throughput and mass storage. EADS II includes a Vector Processor Compute System (VPCS), a Virtual Memory Compute System (CFS), a Common Output System (COS), as well as an Image Processing Station, Mini Super Computers, and Intelligent Workstations. These facilities are interconnected by a sophisticated network system. This work considers only the performance of the VPCS and the CFS. The VPCS is a Cray YMP. The CFS is implemented on an RS 6000 using the UniTree Mass Storage System. To better meet the science and engineering computing requirements, EADS II must be monitored, its performance analyzed, and appropriate modifications made for performance improvement. Implementing this approach requires tools to assist in performance monitoring and analysis. In Spring 1994, PerfStat 2.0 was purchased to meet these needs for the VPCS and the CFS. PerfStat (2) is a set of tools that can be used to analyze both historical and real-time performance data. Its flexible design allows significant user customization. The user identifies what data is collected, how it is classified, and how it is displayed for evaluation. Both graphical and tabular displays are supported. The capability of the PerfStat tool was evaluated, appropriate modifications to EADS II to optimize throughput and enhance productivity were suggested and implemented, and the effects of these modifications on the system's performance were observed. In this paper, the PerfStat tool is described, then its use with EADS II is outlined briefly. Next, the evaluation of the VPCS, as well as the modifications made to the system, are described. Finally, conclusions are drawn and recommendations for future work are outlined.

  18. Vibronic Coupling Analysis of the Ligand-Centered Phosphorescence of Gas-Phase Gd(III) and Lu(III) 9-Oxophenalen-1-one Complexes.

    PubMed

    Chmela, Jiří; Greisch, Jean-François; Harding, Michael E; Klopper, Wim; Kappes, Manfred M; Schooss, Detlef

    2018-03-08

    The gas-phase laser-induced photoluminescence of cationic mononuclear gadolinium and lutetium complexes involving two 9-oxophenalen-1-one ligands is reported. Performing measurements at a temperature of 83 K enables us to resolve vibronic transitions. Via comparison to Franck-Condon computations, the main vibrational contributions to the ligand-centered phosphorescence are determined to involve rocking, wagging, and stretching of the 9-oxophenalen-1-one-lanthanoid coordination in the low-energy range, intraligand bending, and stretching in the medium- to high-energy range, rocking of the carbonyl and methine groups, and C-H stretching beyond. Whereas Franck-Condon calculations based on density-functional harmonic frequency computations reproduce the main features of the vibrationally resolved emission spectra, the absolute transition energies as determined by density functional theory are off by several thousand wavenumbers. This discrepancy is found to remain at higher computational levels. The relative energy of the Gd(III) and Lu(III) emission bands is only reproduced at the coupled-cluster singles and doubles level and beyond.

  19. Theory and operation of the real-time data acquisition system for the NASA-LaRC differential absorption lidar (DIAL)

    NASA Technical Reports Server (NTRS)

    Butler, C.

    1986-01-01

    The improvement of computer hardware and software of the NASA Multipurpose Differential Absorption Lidar (DIAL) system is documented. The NASA DIAL system is undergoing development and experimental deployment at NASA Langley Research Center for the remote measurement of atmospheric trace gas concentrations from ground and aircraft platforms. A viable DIAL system was developed capable of remotely measuring O3 and H2O concentrations from an aircraft platform. Test flights of the DIAL system were successfully performed onboard the NASA Goddard Flight Center Electra aircraft from 1980 to 1985. The DIAL Data Acquisition System has undergone a number of improvements over the past few years. These improvements have now been field tested. The theory behind a real time computer system as it applies to the needs of the DIAL system is discussed. This report is designed to be used as an operational manual for the DIAL DAS.

  20. Design of Control Plane Architecture Based on Cloud Platform and Experimental Network Demonstration for Multi-domain SDON

    NASA Astrophysics Data System (ADS)

    Li, Ming; Yin, Hongxi; Xing, Fangyuan; Wang, Jingchao; Wang, Honghuan

    2016-02-01

    With the features of network virtualization and resource programmability, the Software Defined Optical Network (SDON) is considered the future development trend of optical networks, providing more flexible, efficient, and open network functions and supporting both intra- and inter-data-center connections. Meanwhile, a cloud platform can provide powerful computing, storage, and management capabilities. In this paper, with the coordination of SDON and a cloud platform, a multi-domain SDON architecture based on a cloud control plane is proposed, composed of data centers with a database (DB), a path computation element (PCE), an SDON controller, and an orchestrator. In addition, the structures of the multi-domain SDON orchestrator and of an OpenFlow-enabled optical node are proposed to realize a combination of centralized and distributed management and control. Finally, functional verification and demonstration are performed through our optical experiment network.
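
    As a rough conceptual sketch of the control-plane composition described above (an orchestrator coordinating per-domain SDON controllers, each with its own path computation element), the following Python stub stitches an end-to-end path from per-domain segments. The class names follow the abstract, but the interfaces and the trivial path logic are assumptions for illustration only, not the authors' design.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class PathComputationElement:
    domain: str
    def compute(self, src: str, dst: str) -> List[str]:
        # Placeholder intra-domain path computation (assumed behavior).
        return [src, f"{self.domain}-core", dst]

@dataclass
class DomainController:
    domain: str
    pce: PathComputationElement = field(init=False)
    def __post_init__(self):
        self.pce = PathComputationElement(self.domain)

@dataclass
class Orchestrator:
    controllers: Dict[str, DomainController]
    def provision(self, hops: List[Tuple[str, str, str]]) -> List[str]:
        """Stitch an end-to-end path from per-domain segments, one per hop."""
        path: List[str] = []
        for domain, src, dst in hops:
            path += self.controllers[domain].pce.compute(src, dst)
        return path

# Example: a request crossing two domains, each handled by its own controller.
orch = Orchestrator({d: DomainController(d) for d in ("A", "B")})
print(orch.provision([("A", "dc1", "gwA"), ("B", "gwB", "dc2")]))
```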

  1. Launch processing system transition from development to operation

    NASA Technical Reports Server (NTRS)

    Paul, H. C.

    1977-01-01

    The Launch Processing System has been under development at Kennedy Space Center since 1973. A prototype system was developed and delivered to Marshall Space Flight Center for Solid Rocket Booster checkout in July 1976. The first production hardware arrived in late 1976. The System uses a distributed computer network for command and monitoring and is supported by a dual large scale computer system for 'off line' processing. A high level of automation is anticipated for Shuttle and Payload testing and launch operations to gain the advantages of short turnaround capability, repeatability of operations, and minimization of operations and maintenance (O&M) manpower. Learning how to efficiently apply the system is our current problem. We are searching for more effective ways to convey LPS system performance characteristics from the designer to a large number of users. Once we have done this, we can realize the advantages of LPS system design.

  2. Real-Time Detection and Measurement of Eye Features from Color Images

    PubMed Central

    Borza, Diana; Darabant, Adrian Sergiu; Danescu, Radu

    2016-01-01

    The accurate extraction and measurement of eye features is crucial to a variety of domains, including human-computer interaction, biometry, and medical research. This paper presents a fast and accurate method for extracting multiple features around the eyes: the center of the pupil, the iris radius, and the external shape of the eye. These features are extracted using a multistage algorithm. In the first stage, the pupil center is localized using a fast circular symmetry detector and the iris radius is computed using radial gradient projections; in the second stage, the external shape of the eye (of the eyelids) is determined through a Monte Carlo sampling framework based on both color and shape information. Extensive experiments performed on a different dataset demonstrate the effectiveness of our approach. In addition, this work provides eye annotation data for a publicly available database. PMID:27438838
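
    Not the authors' implementation, but a minimal sketch of the idea behind the radial-gradient step described above: given a grayscale image and an already-located pupil center, pick the iris radius as the circle along which the outward (radial) image gradient, averaged over angles, is strongest. The radius bounds and sampling density are illustrative assumptions.

```python
import numpy as np

def estimate_iris_radius(gray, cx, cy, r_min=10, r_max=80, n_angles=180):
    """Return the radius whose circle has the strongest mean radial gradient.

    gray: 2-D float array (grayscale image); (cx, cy): pupil center in pixels.
    This is a conceptual sketch of a radial-gradient-projection step, not the
    paper's algorithm; the radii and sampling density are illustrative defaults.
    """
    h, w = gray.shape
    gy, gx = np.gradient(gray)               # image gradients (rows = y, cols = x)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    best_r, best_score = r_min, -np.inf
    for r in range(r_min, r_max + 1):
        xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, w - 1)
        ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, h - 1)
        # Project the gradient onto the outward radial direction and average.
        radial = gx[ys, xs] * np.cos(angles) + gy[ys, xs] * np.sin(angles)
        score = radial.mean()                # iris/sclera boundary: dark-to-bright
        if score > best_score:
            best_r, best_score = r, score
    return best_r
```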

  3. System analysis for the Huntsville Operational Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Mauldin, J.

    1984-01-01

    The Huntsville Operations Support Center (HOSC) is a distributed computer system used to provide real time data acquisition, analysis, and display during NASA space missions and to perform simulation and study activities during non-mission times. The primary purpose is to provide a HOSC system simulation model that can be used to investigate the effects of various HOSC system configurations. Such a model would be valuable in planning the future growth of HOSC and in ascertaining the effects of data rate variations, update table broadcasting, and smart display terminal data requirements on the HOSC HYPERchannel network system. A simulation model was developed in PASCAL, and results of the simulation model for various system configurations were obtained. A tutorial of the model is presented and the results of simulation runs are presented. Some very high data rate situations were simulated to observe the effects of the HYPERchannel switch-over from contention to priority mode under high channel loading.

  4. HPCC and the National Information Infrastructure: an overview.

    PubMed Central

    Lindberg, D A

    1995-01-01

    The National Information Infrastructure (NII) or "information superhighway" is a high-priority federal initiative to combine communications networks, computers, databases, and consumer electronics to deliver information services to all U.S. citizens. The NII will be used to improve government and social services while cutting administrative costs. Operated by the private sector, the NII will rely on advanced technologies developed under the direction of the federal High Performance Computing and Communications (HPCC) Program. These include computing systems capable of performing trillions of operations (teraops) per second and networks capable of transmitting billions of bits (gigabits) per second. Among other activities, the HPCC Program supports the national supercomputer research centers, the federal portion of the Internet, and the development of interface software, such as Mosaic, that facilitates access to network information services. Health care has been identified as a critical demonstration area for HPCC technology and an important application area for the NII. As an HPCC participant, the National Library of Medicine (NLM) assists hospitals and medical centers to connect to the Internet through projects directed by the Regional Medical Libraries and through an Internet Connections Program cosponsored by the National Science Foundation. In addition to using the Internet to provide enhanced access to its own information services, NLM sponsors health-related applications of HPCC technology. Examples include the "Visible Human" project and recently awarded contracts for test-bed networks to share patient data and medical images, telemedicine projects to provide consultation and medical care to patients in rural areas, and advanced computer simulations of human anatomy for training in "virtual surgery." PMID:7703935

  5. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    NASA Astrophysics Data System (ADS)

    Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.

    2012-12-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of computing farms involved in its research field, currently we've got several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM&MG), and a Grid Computing Facility of BINP. A dedicated optical network with the initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  6. Performance Benefits for a Turboshaft Engine Using Nonlinear Engine Control Technology Investigated

    NASA Technical Reports Server (NTRS)

    Jones, Scott M.

    2004-01-01

    The potential benefits of nonlinear engine control technology applied to a General Electric T700 helicopter engine were investigated. This technology is being developed by the U.S. Navy SPAWAR Systems Center for a variety of applications. When used as a means of active stability control, nonlinear engine control technology uses sensors and small amounts of injected air to allow compressors to operate with reduced stall margin, which can improve engine pressure ratio. The focus of this study was to determine the best achievable reduction in fuel consumption for the T700 turboshaft engine. A customer deck (computer code) was provided by General Electric to calculate the T700 engine performance, and the NASA Glenn Research Center used this code to perform the analysis. The results showed a 2- to 5-percent reduction in brake specific fuel consumption (BSFC) at the three Sikorsky H-60 helicopter operating points of cruise, loiter, and hover.

  7. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  8. An opportunity cost model of subjective effort and task performance

    PubMed Central

    Kurzban, Robert; Duckworth, Angela; Kable, Joseph W.; Myers, Justus

    2013-01-01

    Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternate explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternate explanations both for the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across subdisciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternate models might be empirically distinguished. PMID:24304775

  9. FY 72 Computer Utilization at the Transportation Systems Center

    DOT National Transportation Integrated Search

    1972-08-01

    The Transportation Systems Center currently employs a medley of on-site and off-site computer systems to obtain the computational support it requires. Examination of the monthly User Accountability Reports for FY72 indicated that during the fiscal ye...

  10. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms and thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust, and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification, and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  11. Computer architectures for computational physics work done by Computational Research and Technology Branch and Advanced Computational Concepts Group

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Slides are reproduced that describe the importance of having high performance number crunching and graphics capability. They also indicate the types of research and development underway at Ames Research Center to ensure that, in the near term, Ames is a smart buyer and user, and in the long-term that Ames knows the best possible solutions for number crunching and graphics needs. The drivers for this research are real computational physics applications of interest to Ames and NASA. They are concerned with how to map the applications, and how to maximize the physics learned from the results of the calculations. The computer graphics activities are aimed at getting maximum information from the three-dimensional calculations by using the real time manipulation of three-dimensional data on the Silicon Graphics workstation. Work is underway on new algorithms that will permit the display of experimental results that are sparse and random, the same way that the dense and regular computed results are displayed.

  12. Institute for scientific computing research;fiscal year 1999 annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyes, D

    2000-03-28

    Large-scale scientific computation, and all of the disciplines that support it and help to validate it, have been placed at the focus of Lawrence Livermore National Laboratory by the Accelerated Strategic Computing Initiative (ASCI). The Laboratory operates the computer with the highest peak performance in the world and has undertaken some of the largest and most compute-intensive simulations ever performed. Computers at the architectural extremes, however, are notoriously difficult to use efficiently. Even such successes as the Laboratory's two Bell Prizes awarded in November 1999 only emphasize the need for much better ways of interacting with the results of large-scale simulations. Advances in scientific computing research have, therefore, never been more vital to the core missions of the Laboratory than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, the Laboratory must engage researchers at many academic centers of excellence. In FY 1999, the Institute for Scientific Computing Research (ISCR) expanded the Laboratory's bridge to the academic community in the form of collaborative subcontracts, visiting faculty, student internships, a workshop, and a very active seminar series. ISCR research participants are integrated almost seamlessly with the Laboratory's Center for Applied Scientific Computing (CASC), which, in turn, addresses computational challenges arising throughout the Laboratory. Administratively, the ISCR flourishes under the Laboratory's University Relations Program (URP). Together with the other four Institutes of the URP, it must navigate a course that allows the Laboratory to benefit from academic exchanges while preserving national security. Although FY 1999 brought more than its share of challenges to the operation of an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and well worth the continued effort. A change of administration for the ISCR occurred during FY 1999. Acting Director John Fitzgerald retired from LLNL in August after 35 years of service, including the last two at the helm of the ISCR. David Keyes, who has been a regular visitor in conjunction with ASCI scalable algorithms research since October 1997, overlapped with John for three months and serves half-time as the new Acting Director.

  13. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 2, Issue 2, 2011

    DTIC Science & Technology

    2011-01-01

    fixed (i.e., no flapping). The simulation was performed at sea level conditions with a pressure of 101 kPa and a density of 1.23 kg/m3. The air speed... Hardening Behavior in Au Nanopillar Microplasticity. IJMCE 5 (3&4) 287–294. (2007) 5. S. J. Plimpton. Fast Parallel Algorithms for Short-Range Molecular... such as crude oil underwater. Scattering is also used for sea floor mapping. For example, communications companies laying underwater fiber optic

  14. The Space Station air revitalization subsystem design concept

    NASA Technical Reports Server (NTRS)

    Ray, C. D.; Ogle, K. Y.; Tipps, R. W.; Carrasquillo, R. L.; Wieland, P.

    1987-01-01

    The current status of the Space Station (SS) Environmental Control and Life Support System (ECLSS) Air Revitalization Subsystem (ARS) design is outlined. ARS performance requirements are provided, along with subsystem options for each ARS function and selected evaluations of the relative merits of each subsystem. Detailed computer models that have been developed to analyze individual subsystem performance capabilities are also discussed. A summary of ARS subsystem level testing planned and completed by NASA Marshall Space Flight Center (MSFC) is given.

  15. Comments on "Use of conditional simulation in nuclear waste site performance assessment" by Carol Gotway

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Downing, D.J.

    1993-10-01

    This paper discusses Carol Gotway's paper, "The Use of Conditional Simulation in Nuclear Waste Site Performance Assessment." The paper centers on the use of conditional simulation and the use of geostatistical methods to simulate an entire field of values for subsequent use in a complex computer model. The issues of sampling designs for geostatistics, semivariogram estimation and anisotropy, the turning bands method for random field generation, and estimation of the cumulative distribution function are brought out.

  16. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
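
    The specific kernels used in the study are not reproduced here; as a hedged illustration of how a synthetic kernel can expose memory-subsystem contention, the sketch below times a STREAM-like triad in one, two, and four concurrent processes. If the per-copy bandwidth falls as copies are added, the cores are contending for the shared memory path. The array size and process counts are assumptions chosen only to defeat typical caches.

```python
import time
import numpy as np
from multiprocessing import Pool

N = 10_000_000           # array length; assumed large enough to defeat the caches

def triad_bandwidth(_):
    """Time a STREAM-like triad a = b + s*c and return achieved GB/s."""
    b = np.random.rand(N)
    c = np.random.rand(N)
    t0 = time.perf_counter()
    a = b + 3.0 * c                      # the triad itself
    dt = time.perf_counter() - t0
    bytes_moved = 3 * N * 8              # read b, read c, write a (8-byte doubles)
    return bytes_moved / dt / 1e9

if __name__ == "__main__":
    for nproc in (1, 2, 4):
        with Pool(nproc) as pool:
            per_proc = pool.map(triad_bandwidth, range(nproc))
        print(f"{nproc} concurrent copies: "
              f"{sum(per_proc):.1f} GB/s aggregate, "
              f"{sum(per_proc)/nproc:.1f} GB/s per copy")
```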

  17. NASA Environmentally Responsible Aviation Hybrid Wing Body Flow-Through Nacelle Wind Tunnel CFD

    NASA Technical Reports Server (NTRS)

    Schuh, Michael J.; Garcia, Joseph A.; Carter, Melissa B.; Deere, Karen A.; Stremel, Paul M.; Tompkins, Daniel M.

    2016-01-01

    Wind tunnel tests of a 5.75% scale model of the Boeing Hybrid Wing Body (HWB) configuration were conducted in the NASA Langley Research Center (LaRC) 14'x22' and NASA Ames Research Center (ARC) 40'x80' low speed wind tunnels as part of the NASA Environmentally Responsible Aviation (ERA) Project. Computational fluid dynamics (CFD) simulations of the flow-through nacelle (FTN) configuration of this model were performed before and after the testing. This paper presents a summary of the experimental and CFD results for the model in the cruise and landing configurations.

  18. NASA Environmentally Responsible Aviation Hybrid Wing Body Flow-Through Nacelle Wind Tunnel CFD

    NASA Technical Reports Server (NTRS)

    Schuh, Michael J.; Garcia, Joseph A.; Carter, Melissa B.; Deere, Karen A.; Tompkins, Daniel M.; Stremel, Paul M.

    2016-01-01

    Wind tunnel tests of a 5.75% scale model of the Boeing Hybrid Wing Body (HWB) configuration were conducted in the NASA Langley Research Center (LaRC) 14'x22' and NASA Ames Research Center (ARC) 40'x80' low speed wind tunnels as part of the NASA Environmentally Responsible Aviation (ERA) Project. Computational fluid dynamics (CFD) simulations of the flow-through nacelle (FTN) configuration of this model were performed before and after the testing. This paper presents a summary of the experimental and CFD results for the model in the cruise and landing configurations.

  19. Closed-Form 3-D Localization for Single Source in Uniform Circular Array with a Center Sensor

    NASA Astrophysics Data System (ADS)

    Bae, Eun-Hyon; Lee, Kyun-Kyung

    A novel closed-form algorithm is presented for estimating the 3-D location (azimuth angle, elevation angle, and range) of a single source received by a uniform circular array (UCA) with a center sensor. Based on the centrosymmetry of the UCA and the noncircularity of the source, the proposed algorithm decouples and estimates the 2-D direction of arrival (DOA), i.e., the azimuth and elevation angles, and then estimates the range of the source. Despite its low computational complexity, the proposed algorithm provides estimation performance close to that of the benchmark 3-D MUSIC estimator.

  20. Correction of downwash in wind tunnels of circular and elliptic sections

    NASA Technical Reports Server (NTRS)

    Lotz, Irmgard

    1936-01-01

    The downwash velocity distribution behind the wing was determined for the free jet and for the closed tunnel of both circular and elliptic cross sections. The wing was placed at the center of the tunnel. The theory makes it possible to determine the downwash at any point in the jet. The computations were performed for points in the plane determined by the jet axis and the center-of-pressure line of the wing. The downwash proved to be proportional to the wing lift and inversely proportional to the cross-sectional area of the tunnel.
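
    The proportionality stated above (downwash linear in wing lift, inverse in tunnel cross-sectional area) is the basis of the classical induced-angle wall correction, commonly written delta_alpha = delta * (S/C) * CL, where delta is a boundary-correction factor depending on the jet shape and on whether the section is open or closed. A small hedged sketch with assumed numbers follows; delta = 0.125 is the textbook value for a small wing centered in a closed circular section, not a figure taken from this report.

```python
import math

def induced_angle_correction(delta, wing_area, tunnel_area, CL):
    """delta_alpha = delta * (S / C) * CL, in radians.

    delta: boundary-correction factor (depends on open/closed jet and section
    shape); S: wing area; C: tunnel cross-sectional area; CL: lift coefficient.
    Illustrative form only, consistent with the proportionality in the abstract.
    """
    return delta * (wing_area / tunnel_area) * CL

# Assumed example values, not data from the report.
d_alpha = induced_angle_correction(delta=0.125, wing_area=0.4, tunnel_area=3.0, CL=0.8)
print(f"induced-angle correction ~ {math.degrees(d_alpha):.2f} deg")
```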

  1. Center for Building Science: Annual report, FY 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cairns, E.J.; Rosenfeld, A.H.

    1987-05-01

    The Center for Building Science consists of four programs in the Applied Science Division: energy analysis, buildings energy systems, windows and lighting, and indoor environment. It was established to provide an umbrella so that groups in different programs but with similar interests could combine to perform joint research, develop new research areas, share resources, and produce joint publications. As detailed below, the potential savings for U.S. society from energy efficient buildings are enormous. But these savings can only be realized through an expanding federal R&D program that develops expertise in this new area. The Center for Building Science develops efficient new building components, computer models, data and information systems, and trains needed building scientists. 135 refs., 72 figs., 18 tabs.

  2. CAROLINA CENTER FOR COMPUTATIONAL TOXICOLOGY

    EPA Science Inventory

    The Center will advance the field of computational toxicology through the development of new methods and tools, as well as through collaborative efforts. In each Project, new computer-based models will be developed and published that represent the state-of-the-art. The tools p...

  3. Computer classes and games in virtual reality environment to reduce loneliness among students of an elderly reference center

    PubMed Central

    Antunes, Thaiany Pedrozo Campos; de Oliveira, Acary Souza Bulle; Crocetta, Tania Brusque; Antão, Jennifer Yohanna Ferreira de Lima; Barbosa, Renata Thais de Almeida; Guarnieri, Regiani; Massetti, Thais; Monteiro, Carlos Bandeira de Mello; de Abreu, Luiz Carlos

    2017-01-01

    Introduction: Physical and mental changes associated with aging commonly lead to a decrease in communication capacity, reducing social interactions and increasing loneliness. Computer classes for older adults make significant contributions to social and cognitive aspects of aging. Games in a virtual reality (VR) environment stimulate the practice of communicative and cognitive skills and might also bring benefits to older adults. Furthermore, it might help to initiate their contact to the modern technology. The purpose of this study protocol is to evaluate the effects of practicing VR games during computer classes on the level of loneliness of students of an elderly reference center. Methods and Analysis: This study will be a prospective longitudinal study with a randomised cross-over design, with subjects aged 50 years and older, of both genders, spontaneously enrolled in computer classes for beginners. Data collection will be done in 3 moments: moment 0 (T0) – at baseline; moment 1 (T1) – after 8 typical computer classes; and moment 2 (T2) – after 8 computer classes which include 15 minutes for practicing games in VR environment. A characterization questionnaire, the short version of the Short Social and Emotional Loneliness Scale for Adults (SELSA-S) and 3 games with VR (Random, MoviLetrando, and Reaction Time) will be used. For the intervention phase 4 other games will be used: Coincident Timing, Motor Skill Analyser, Labyrinth, and Fitts. The statistical analysis will compare the evolution in loneliness perception, performance, and reaction time during the practice of the games between the 3 moments of data collection. Performance and reaction time during the practice of the games will also be correlated to the loneliness perception. Ethics and Dissemination: The protocol is approved by the host institution's ethics committee under the number 52305215.3.0000.0082. Results will be disseminated via peer-reviewed journal articles and conferences. This clinical trial is registered at ClinicalTrials.gov identifier: NCT02798081. PMID:28272198

  4. Computer classes and games in virtual reality environment to reduce loneliness among students of an elderly reference center: Study protocol for a randomised cross-over design.

    PubMed

    Antunes, Thaiany Pedrozo Campos; Oliveira, Acary Souza Bulle de; Crocetta, Tania Brusque; Antão, Jennifer Yohanna Ferreira de Lima; Barbosa, Renata Thais de Almeida; Guarnieri, Regiani; Massetti, Thais; Monteiro, Carlos Bandeira de Mello; Abreu, Luiz Carlos de

    2017-03-01

    Physical and mental changes associated with aging commonly lead to a decrease in communication capacity, reducing social interactions and increasing loneliness. Computer classes for older adults make significant contributions to social and cognitive aspects of aging. Games in a virtual reality (VR) environment stimulate the practice of communicative and cognitive skills and might also bring benefits to older adults. Furthermore, it might help to initiate their contact to the modern technology. The purpose of this study protocol is to evaluate the effects of practicing VR games during computer classes on the level of loneliness of students of an elderly reference center. This study will be a prospective longitudinal study with a randomised cross-over design, with subjects aged 50 years and older, of both genders, spontaneously enrolled in computer classes for beginners. Data collection will be done in 3 moments: moment 0 (T0) - at baseline; moment 1 (T1) - after 8 typical computer classes; and moment 2 (T2) - after 8 computer classes which include 15 minutes for practicing games in VR environment. A characterization questionnaire, the short version of the Short Social and Emotional Loneliness Scale for Adults (SELSA-S) and 3 games with VR (Random, MoviLetrando, and Reaction Time) will be used. For the intervention phase 4 other games will be used: Coincident Timing, Motor Skill Analyser, Labyrinth, and Fitts. The statistical analysis will compare the evolution in loneliness perception, performance, and reaction time during the practice of the games between the 3 moments of data collection. Performance and reaction time during the practice of the games will also be correlated to the loneliness perception. The protocol is approved by the host institution's ethics committee under the number 52305215.3.0000.0082. Results will be disseminated via peer-reviewed journal articles and conferences. This clinical trial is registered at ClinicalTrials.gov identifier: NCT02798081.

  5. An effective rectification method for lenselet-based plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Cao, Yiwei; Cai, Weijia; Zheng, Wanlu; Zhou, Ping

    2016-10-01

    The lenselet-based plenoptic camera has recently drawn a lot of attention in the field of computational photography. The additional information inherent in the light field allows a wide range of applications, but some preliminary processing of the raw image is necessary before further operations. In this paper, an effective method is presented for the rotation rectification of the raw image. The rotation is caused by the imperfect positioning of the micro-lens array relative to the sensor plane in commercially available Lytro plenoptic cameras. The key to our method is locating the center of each micro-lens image, which is projected by a micro-lens. Because of vignetting, the pixel values at the centers of the micro-lens images are higher than those at the peripheries. A mask is applied to probe the micro-lens image to locate the center area by finding the local maximum response. The error of the center coordinate estimate is corrected and the angle of rotation is computed via a subsequent line fitting. The algorithm is performed on two images captured by different Lytro cameras. The angles of rotation are -0.3600° and -0.0621°, respectively, and the rectified raw image is useful and reliable for further operations, such as extraction of the sub-aperture images. The experimental results demonstrate that our method is efficient and accurate.
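
    A conceptual sketch (not the authors' code) of the two steps named above: take local maxima of a lightly smoothed raw image as micro-lens image centers (exploiting the vignetting), then fit a line through one row of centers and report its slope angle as the rotation. The smoothing size and expected micro-lens pitch are assumed parameters.

```python
import numpy as np
from scipy import ndimage

def estimate_raw_image_rotation(raw, expected_pitch=10):
    """Estimate the small rotation angle (degrees) of a lenselet raw image.

    1) Exploit vignetting: local maxima of the smoothed raw image approximate
       micro-lens image centers.
    2) Keep the centers of (approximately) one horizontal row of micro-lenses.
    3) Fit a line through those centers; its slope angle is the rotation.
    'expected_pitch' (micro-lens spacing in pixels) is an assumed parameter.
    """
    smooth = ndimage.uniform_filter(raw.astype(float), size=3)
    footprint = np.ones((expected_pitch, expected_pitch), dtype=bool)
    local_max = smooth == ndimage.maximum_filter(smooth, footprint=footprint)
    ys, xs = np.nonzero(local_max)

    # Select centers belonging to one row of micro-lens images.
    row_y = np.median(ys)
    in_row = np.abs(ys - row_y) < expected_pitch / 2
    xs_row, ys_row = xs[in_row], ys[in_row]

    # Fit y = a*x + b through the row of centers; the rotation is atan(a).
    a, b = np.polyfit(xs_row, ys_row, deg=1)
    return np.degrees(np.arctan(a))
```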

  6. Scientific Grid activities and PKI deployment in the Cybermedia Center, Osaka University.

    PubMed

    Akiyama, Toyokazu; Teranishi, Yuuichi; Nozaki, Kazunori; Kato, Seiichi; Shimojo, Shinji; Peltier, Steven T; Lin, Abel; Molina, Tomas; Yang, George; Lee, David; Ellisman, Mark; Naito, Sei; Koike, Atsushi; Matsumoto, Shuichi; Yoshida, Kiyokazu; Mori, Hirotaro

    2005-10-01

    The Cybermedia Center (CMC), Osaka University, is a research institution that offers knowledge and technology resources obtained from advanced researches in the areas of large-scale computation, information and communication, multimedia content and education. Currently, CMC is involved in Japanese national Grid projects such as JGN II (Japan Gigabit Network), NAREGI and BioGrid. Not limited to Japan, CMC also actively takes part in international activities such as PRAGMA. In these projects and international collaborations, CMC has developed a Grid system that allows scientists to perform their analysis by remote-controlling the world's largest ultra-high voltage electron microscope located in Osaka University. In another undertaking, CMC has assumed a leadership role in BioGrid by sharing its experiences and knowledge on the system development for the area of biology. In this paper, we will give an overview of the BioGrid project and introduce the progress of the Telescience unit, which collaborates with the Telescience Project led by the National Center for Microscopy and Imaging Research (NCMIR). Furthermore, CMC collaborates with seven Computing Centers in Japan, NAREGI and National Institute of Informatics to deploy PKI base authentication infrastructure. The current status of this project and future collaboration with Grid Projects will be delineated in this paper.

  7. Modeling Materials: Design for Planetary Entry, Electric Aircraft, and Beyond

    NASA Technical Reports Server (NTRS)

    Thompson, Alexander; Lawson, John W.

    2014-01-01

    NASA missions push the limits of what is possible. The development of high-performance materials must keep pace with the agency's demanding, cutting-edge applications. Researchers at NASA's Ames Research Center are performing multiscale computational modeling to accelerate development times and further the design of next-generation aerospace materials. Multiscale modeling combines several computationally intensive techniques ranging from the atomic level to the macroscale, passing output from one level as input to the next level. These methods are applicable to a wide variety of materials systems. For example: (a) Ultra-high-temperature ceramics for hypersonic aircraft: we utilized the full range of multiscale modeling to characterize thermal protection materials for faster, safer air- and spacecraft. (b) Planetary entry heat shields for space vehicles: we computed thermal and mechanical properties of ablative composites by combining several methods, from atomistic simulations to macroscale computations. (c) Advanced batteries for electric aircraft: we performed large-scale molecular dynamics simulations of advanced electrolytes for ultra-high-energy-capacity batteries to enable long-distance electric aircraft service. (d) Shape-memory alloys for high-efficiency aircraft: we used high-fidelity electronic structure calculations to determine phase diagrams in shape-memory transformations. Advances in high-performance computing have been critical to the development of multiscale materials modeling. We used nearly one million processor hours on NASA's Pleiades supercomputer to characterize electrolytes with a fidelity that would be otherwise impossible. For this and other projects, Pleiades enables us to push the physics and accuracy of our calculations to new levels.

  8. Greenberger-Horne-Zeilinger states-based blind quantum computation with entanglement concentration.

    PubMed

    Zhang, Xiaoqian; Weng, Jian; Lu, Wei; Li, Xiaochun; Luo, Weiqi; Tan, Xiaoqing

    2017-09-11

    In a blind quantum computation (BQC) protocol, the servers' quantum computational capabilities are sophisticated and powerful, while the clients' are not. It is still a challenge for clients to delegate quantum computation to servers while keeping the clients' inputs, outputs, and algorithms private. Unfortunately, quantum channel noise is unavoidable in practical transmission. In this paper, a novel BQC protocol based on maximally entangled Greenberger-Horne-Zeilinger (GHZ) states is proposed that does not need a trusted center. The protocol includes a client and two servers, where the client only needs to share quantum channels with the two servers, who have full-advantage quantum computers. The two servers perform entanglement concentration to remove the noise, where the success probability can almost reach 100% in theory. They learn nothing in the process of concentration because of the no-signaling principle, so this BQC protocol is secure and feasible.

  9. Quantum Information Theory - an Invitation

    NASA Astrophysics Data System (ADS)

    Werner, Reinhard F.

    Quantum information and quantum computers have received a lot of public attention recently. Quantum computers have been advertised as a kind of warp drive for computing, and indeed the promise of the algorithms of Shor and Grover is to perform computations which are extremely hard or even provably impossible on any merely "classical" computer. In this article an account of the basic concepts of quantum information theory is given, staying as much as possible in the area of general agreement. The article is divided into two parts. The first (up to the end of Sect. 2.5) is mostly in plain English, centered around the exploration of what can or cannot be done with quantum systems as information carriers. The second part, Sect. 2.6, then gives a description of the mathematical structures and of some of the tools needed to develop the theory.

  10. Computer Programs (Turbomachinery)

    NASA Technical Reports Server (NTRS)

    1978-01-01

    NASA computer programs are extensively used in design of industrial equipment. Available from the Computer Software Management and Information Center (COSMIC) at the University of Georgia, these programs are employed as analysis tools in design, test and development processes, providing savings in time and money. For example, two NASA computer programs are used daily in the design of turbomachinery by Delaval Turbine Division, Trenton, New Jersey. The company uses the NASA spline interpolation routine for analysis of turbine blade vibration and the performance of compressors and condensers. A second program, the NASA print plot routine, analyzes turbine rotor response and produces graphs for project reports. The photos show examples of Delaval test operations in which the computer programs play a part. In the large photo below, a 24-inch turbine blade is undergoing test; in the smaller photo, a steam turbine rotor is being prepared for stress measurements under actual operating conditions; the "spaghetti" is wiring for test instrumentation.

  11. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diachin, L F; Garaizar, F X; Henson, V E

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  12. Management and Analysis of Biological and Clinical Data: How Computer Science May Support Biomedical and Clinical Research

    NASA Astrophysics Data System (ADS)

    Veltri, Pierangelo

    The use of computer-based solutions for data management in biology and clinical science has contributed to improving quality of life and to obtaining research results in shorter time. Indeed, new algorithms and high-performance computation have been used in proteomics and genomics studies for treating chronic diseases (e.g., drug design) as well as for supporting clinicians both in diagnosis (e.g., image-based diagnosis) and in patient care (e.g., computer-based analysis of information gathered from patients). In this paper we survey examples of computer-based techniques applied in both biological and clinical contexts. The reported applications draw on experience with real cases at the University Medical School of Catanzaro and on the national project Staywell SH 2.0, which involves many research centers and companies aiming to study and improve citizen wellness.

  13. Air Flow Modeling in the Wind Tunnel of the FHWA Aerodynamics Laboratory at Turner-Fairbank Highway Research Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitek, M. A.; Lottes, S. A.; Bojanowski, C.

    Computational fluid dynamics (CFD) modeling is widely used in industry for design and in the research community to support, complement, and extend the scope of experimental studies. Analysis of transportation infrastructure using high performance cluster computing with CFD and structural mechanics software is done at the Transportation Research and Analysis Computing Center (TRACC) at Argonne National Laboratory. These resources, available at TRACC, were used to perform advanced three-dimensional computational simulations of the wind tunnel laboratory at the Turner-Fairbank Highway Research Center (TFHRC). The goals were to verify the CFD model of the laboratory wind tunnel and then to use versions of the model to provide the capability to (1) perform larger parametric series of tests than can be easily done in the laboratory with available budget and time, (2) extend testing to wind speeds that cannot be achieved in the laboratory, and (3) run types of tests that are very difficult or impossible to run in the laboratory. Modern CFD software has many physics models and domain meshing options. Models, including the choice of turbulence and other physics models and settings, the computational mesh, and the solver settings, need to be validated against measurements to verify that the results are sufficiently accurate for use in engineering applications. The wind tunnel model was built and tested, by comparing to experimental measurements, to provide a valuable tool to perform these types of studies in the future as a complement and extension to TFHRC's experimental capabilities. Wind tunnel testing at TFHRC is conducted in a subsonic open-jet wind tunnel with a 1.83 m (6 foot) by 1.83 m (6 foot) cross section. A three-component dual force-balance system is used to measure forces acting on tested models, and a three-degree-of-freedom suspension system is used for dynamic response tests. Pictures of the room are shown in Figure 1-1 to Figure 1-4. A detailed CAD geometry and CFD model of the wind tunnel laboratory at TFHRC was built and tested. Results were compared against experimental wind velocity measurements at a large number of locations around the room. This testing included an assessment of the air flow uniformity provided by the tunnel to the test zone and assessment of room geometry effects, such as the influence of the proximity of the room walls, the non-symmetrical position of the tunnel in the room, and the influence of the room setup on the air flow in the room. This information is useful both for simplifying the computational model and in deciding whether or not moving, or removing, some of the furniture or other movable objects in the room will change the flow in the test zone.

  14. Data Transfer Study HPSS Archiving

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wynne, James; Parete-Koon, Suzanne T; Mitchell, Quinn

    2015-01-01

    The movement of the large amounts of data produced by codes run in a High Performance Computing (HPC) environment can be a bottleneck for project workflows. To balance filesystem capacity and performance requirements, HPC centers enforce data management policies that purge old files to make room for new computation and analysis results. Users at the Oak Ridge Leadership Computing Facility (OLCF) and many other HPC user facilities must archive data to avoid data loss during purges, so the time associated with data movement for archiving is something that all users must consider. This study observed the difference in transfer speed from the originating location on the Lustre filesystem to the more permanent High Performance Storage System (HPSS). The tests were done with a number of different transfer methods for files that spanned a variety of sizes and compositions reflecting OLCF user data. These data will be used to help users of Titan and other Cray supercomputers plan their workflows and data transfers so that they are most efficient for their projects. We also discuss best practices for maintaining data at shared user facilities.
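
    As a rough illustration of how such transfer timings might be collected, the sketch below times an archiving step for directories of different compositions; the htar invocation is one of the standard HPSS transfer tools, but the paths and the choice of method here are assumptions for illustration, not the study's actual test harness.

        import subprocess
        import time

        # Hypothetical (label, local directory) pairs spanning different data compositions.
        test_sets = [
            ("many_small_files", "/lustre/project/run01/small"),
            ("few_large_files", "/lustre/project/run01/large"),
        ]

        for label, local_dir in test_sets:
            archive = f"/proj/archive_{label}.tar"  # assumed HPSS destination path
            start = time.time()
            # htar bundles a directory into a tar archive stored directly in HPSS;
            # other methods (hsi, Globus, etc.) could be timed the same way.
            subprocess.run(["htar", "-cf", archive, local_dir], check=True)
            print(f"{label}: archived in {time.time() - start:.1f} s")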

  15. Big Data Processing for a Central Texas Groundwater Case Study

    NASA Astrophysics Data System (ADS)

    Cantu, A.; Rivera, O.; Martínez, A.; Lewis, D. H.; Gentle, J. N., Jr.; Fuentes, G.; Pierce, S. A.

    2016-12-01

    As computational methods improve, scientists are able to expand the level and scale of experimental simulation and testing that is completed for case studies. This study presents a comparative analysis of multiple models for the Barton Springs segment of the Edwards Aquifer. Several numerical simulations using state-mandated MODFLOW models were run on Stampede, a high-performance computing system housed at the Texas Advanced Computing Center, for multiple-scenario testing. One goal of this multidisciplinary project is to visualize and compare the output data of the groundwater model using the statistical programming language R to find revealing data patterns produced by different pumping scenarios. Presenting data in a friendly post-processing format is covered in this paper. Visualization of the data and creating workflows applicable to the management of the data are tasks performed after data extraction. The resulting analyses provide an example of how supercomputing can be used to accelerate evaluation of scientific uncertainty and geological knowledge in relation to policy and management decisions. Understanding the aquifer's behavior helps policy makers avoid negative impacts on endangered species and environmental services and aids in maximizing the aquifer yield.
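
    A minimal sketch of the kind of post-processing comparison described here is given below; it assumes the simulated head fields for two pumping scenarios have already been exported as NumPy arrays (the file names and grid are hypothetical), and it uses Python rather than the R workflow used in the project.

        import numpy as np

        # Assumed exports of simulated head (water level) grids for two pumping scenarios.
        baseline = np.load("heads_baseline.npy")            # shape (nrows, ncols), hypothetical
        increased = np.load("heads_increased_pumping.npy")  # same grid, higher pumping

        # Drawdown pattern induced by the increased-pumping scenario.
        drawdown = baseline - increased

        print("Mean drawdown (m):", float(drawdown.mean()))
        print("Maximum drawdown (m):", float(drawdown.max()))
        print("Cells with more than 1 m of drawdown:", int((drawdown > 1.0).sum()))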

  16. Unsteady Aero Computation of a 1 1/2 Stage Large Scale Rotating Turbine

    NASA Technical Reports Server (NTRS)

    To, Wai-Ming

    2012-01-01

    This report is the documentation of the work performed for the Subsonic Rotary Wing Project under NASA's Fundamental Aeronautics Program. It was funded through Task Number NNC10E420T under GESS-2 Contract NNC06BA07B in the period of 10/1/2010 to 8/31/2011. The objective of the task is to provide support for the development of variable speed power turbine technology through application of computational fluid dynamics analyses. This includes work elements in mesh generation, multistage URANS simulations, and post-processing of the simulation results for comparison with the experimental data. The unsteady CFD calculations were performed with the TURBO code running in multistage single passage (phase lag) mode. Meshes for the blade rows were generated with the NASA-developed TCGRID code. The CFD performance is assessed and improvements are recommended for future research in this area. For this effort, the United Technologies Research Center's 1 1/2 stage Large Scale Rotating Turbine was selected as the candidate engine configuration because of the completeness and availability of the data.

  17. An analysis for high speed propeller-nacelle aerodynamic performance prediction. Volume 1: Theory and application

    NASA Technical Reports Server (NTRS)

    Egolf, T. Alan; Anderson, Olof L.; Edwards, David E.; Landgrebe, Anton J.

    1988-01-01

    A computer program, the Propeller Nacelle Aerodynamic Performance Prediction Analysis (PANPER), was developed for the prediction and analysis of the performance and airflow of propeller-nacelle configurations operating over a forward speed range inclusive of high speed flight typical of recent propfan designs. A propeller lifting line, wake program was combined with a compressible, viscous center body interaction program, originally developed for diffusers, to compute the propeller-nacelle flow field, blade loading distribution, propeller performance, and the nacelle forebody pressure and viscous drag distributions. The computer analysis is applicable to single and coaxial counterrotating propellers. The blade geometries can include spanwise variations in sweep, droop, taper, thickness, and airfoil section type. In the coaxial mode of operation the analysis can treat both equal and unequal blade number and rotational speeds on the propeller disks. The nacelle portion of the analysis can treat both free air and tunnel wall configurations including wall bleed. The analysis was applied to many different sets of flight conditions using selected aerodynamic modeling options. The influence of different propeller nacelle-tunnel wall configurations was studied. Comparisons with available test data for both single and coaxial propeller configurations are presented along with a discussion of the results.

  18. Adaptation of a Control Center Development Environment for Industrial Process Control

    NASA Technical Reports Server (NTRS)

    Killough, Ronnie L.; Malik, James M.

    1994-01-01

    In the control center, raw telemetry data is received for storage, display, and analysis. This raw data must be combined and manipulated in various ways by mathematical computations to facilitate analysis, provide diversified fault detection mechanisms, and enhance display readability. A development tool called the Graphical Computation Builder (GCB) has been implemented which provides flight controllers with the capability to implement computations for use in the control center. The GCB provides a language that contains both general programming constructs and language elements specifically tailored for the control center environment. The GCB concept allows staff who are not skilled in computer programming to author and maintain computer programs. The GCB user is isolated from the details of external subsystem interfaces and has access to high-level functions such as matrix operators, trigonometric functions, and unit conversion macros. The GCB provides a high level of feedback during computation development that improves upon the often cryptic errors produced by computer language compilers. An equivalent need can be identified in the industrial data acquisition and process control domain: that of an integrated graphical development tool tailored to the application to hide the operating system, computer language, and data acquisition interface details. The GCB features a modular design which makes it suitable for technology transfer without significant rework. Control center-specific language elements can be replaced by elements specific to industrial process control.

  19. Vehicle Design Evaluation Program (VDEP). A computer program for weight sizing, economic, performance and mission analysis of fuel-conservative aircraft, multibodied aircraft and large cargo aircraft using both JP and alternative fuels

    NASA Technical Reports Server (NTRS)

    Oman, B. H.

    1977-01-01

    The NASA Langley Research Center vehicle design evaluation program (VDEP-2) was expanded by (1) incorporating into the program a capability to conduct preliminary design studies on subsonic commercial transport type aircraft using both JP and such alternate fuels as hydrogen and methane; (2) incorporating an aircraft detailed mission and performance analysis capability; and (3) developing and incorporating an external loads analysis capability. The resulting computer program (VDEP-3) provides a preliminary design tool that enables the user to perform integrated sizing, structural analysis, and cost studies on subsonic commercial transport aircraft. Both versions of the VDEP-3 program, designated Preliminary Analysis VDEP-3 and Detailed Analysis VDEP, utilize the same vehicle sizing subprogram, which includes a detailed mission analysis capability as well as a geometry and weight analysis for multibodied configurations.

  20. A Computational/Experimental Study of Two Optimized Supersonic Transport Designs and the Reference H Baseline

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.; Reuther, James J.

    1999-01-01

    Two supersonic transport configurations designed by use of non-linear aerodynamic optimization methods are compared with a linearly designed baseline configuration. One optimized configuration, designated Ames 7-04, was designed at NASA Ames Research Center using an Euler flow solver, and the other, designated Boeing W27, was designed at Boeing using a full-potential method. The two optimized configurations and the baseline were tested in the NASA Langley Unitary Plan Supersonic Wind Tunnel to evaluate the non-linear design optimization methodologies. In addition, the experimental results are compared with computational predictions for each of the three configurations from the Euler flow solver, AIRPLANE. The computational and experimental results both indicate moderate to substantial performance gains for the optimized configurations over the baseline configuration. The computed performance changes with and without diverters and nacelles were in excellent agreement with experiment for all three models. Comparisons of the computational and experimental cruise drag increments for the optimized configurations relative to the baseline show excellent agreement for the model designed by the Euler method, but poorer comparisons were found for the configuration designed by the full-potential code.

  1. Closed-Loop HIRF Experiments Performed on a Fault Tolerant Flight Control Computer

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.

    1997-01-01

    Closed-loop HIRF experiments were performed on a fault tolerant flight control computer (FCC) at the NASA Langley Research Center. The FCC used in the experiments was a quad-redundant flight control computer executing B737 Autoland control laws. The FCC was placed in one of the mode-stirred reverberation chambers in the HIRF Laboratory and interfaced to a computer simulation of the B737 flight dynamics, engines, sensors, actuators, and atmosphere in the Closed-Loop Systems Laboratory. Disturbances to the aircraft associated with wind gusts and turbulence were simulated during tests. Electrical isolation between the FCC under test and the simulation computer was achieved via a fiber optic interface for the analog and discrete signals. Closed-loop operation of the FCC enabled flight dynamics and atmospheric disturbances affecting the aircraft to be represented during tests. Upset was induced in the FCC as a result of exposure to HIRF, and the effect of upset on the simulated flight of the aircraft was observed and recorded. This paper presents a description of these closed-loop HIRF experiments, upset data obtained from the FCC during these experiments, and closed-loop effects on the simulated flight of the aircraft.

  2. Danger zone analysis using cone beam computed tomography after apical enlargement with K3 and K3XF in a manikin model

    PubMed Central

    Olivier, Juan-Gonzalo; García-Font, Marc; Gonzalez-Sanchez, Jose-Antonio; Roig-Cayon, Miguel

    2016-01-01

    Background: The objective of the study was to evaluate and compare how apical enlargement with K3 and K3XF nickel-titanium (NiTi) rotary instruments reduces the root thickness in the danger zone and affects canal transportation and centering ability in mandibular molar mesial canals in a manikin extracted tooth model. Material and Methods: Seventy-two mesial root canals of first mandibular molars were instrumented. Initial and post-instrumentation cone beam computed tomography scans were performed after root canal preparation up to size 25, 30, 35 and 40 files. Canal transportation, canal centering and remaining root dentin thickness toward the danger zone were calculated in sections 1, 2 and 3 mm under the furcation level. Data were analyzed using non-parametric Kruskal-Wallis analysis of variance at a significance level of P < 0.05. Results: K3 instruments removed more dentin toward the danger zone compared with K3XF instruments (P < 0.05), and significant differences in dentin thickness were found when canal enlargement was performed to a #35-40 with both systems (P < 0.05). No significant differences in canal transportation and centering ability were found between systems, except when canal enlargement was performed to a #40 (P = 0.0136). No differences were observed when comparing the number of uses of the two systems (P > 0.05). Conclusions: Under the conditions of this study, K3 removed a significantly greater amount of dentin at the furcation level compared with the R-phase K3XF rotary system in curved root canals. Enlargement to a 35-40/04 file removed significantly more dentin with both systems. Key words: K3, K3XF, R-phase, centering ability, canal transportation, dentin thickness, increased apical enlargement, danger zone. PMID:27703602
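
    For readers less familiar with the statistical test used, the snippet below runs a Kruskal-Wallis comparison of the kind reported here on clearly synthetic, made-up dentin-thickness values; the numbers are illustrative only and are not the study's data.

        from scipy.stats import kruskal

        # Synthetic, made-up remaining-dentin-thickness values (mm) for three apical sizes;
        # illustrative only, not the study's measurements.
        size_25 = [1.10, 1.05, 1.12, 1.08, 1.11]
        size_30 = [1.02, 0.98, 1.04, 1.00, 1.01]
        size_40 = [0.85, 0.88, 0.90, 0.84, 0.87]

        statistic, p_value = kruskal(size_25, size_30, size_40)
        print(f"H = {statistic:.2f}, P = {p_value:.4f}")  # called significant if P < 0.05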

  3. On Laminar to Turbulent Transition of Arc-Jet Flow in the NASA Ames Panel Test Facility

    NASA Technical Reports Server (NTRS)

    Gokcen, Tahir; Alunni, Antonella I.

    2012-01-01

    This paper provides experimental evidence and supporting computational analysis to characterize the laminar to turbulent flow transition in a high-enthalpy arc-jet facility at NASA Ames Research Center. The arc-jet test data obtained in the 20 MW Panel Test Facility include measurements of surface pressure and heat flux on a water-cooled calibration plate, and measurements of surface temperature on a reaction-cured-glass coated tile plate. Computational fluid dynamics simulations are performed to characterize the arc-jet test environment and estimate its parameters consistent with the facility and calibration measurements. The present analysis comprises simulations of the nonequilibrium flowfield in the facility nozzle, test box, and flowfield over test articles. Both laminar and turbulent simulations are performed, and the computed results are compared with the experimental measurements, including the Stanton number dependence on Reynolds number. Comparisons of computed and measured surface heat fluxes (and temperatures), along with the accompanying analysis, confirm that the boundary layer in the Panel Test Facility flow is transitional at certain arc-heater conditions.

  4. Overview of High-Fidelity Modeling Activities in the Numerical Propulsion System Simulations (NPSS) Project

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    2002-01-01

    A high-fidelity simulation of a commercial turbofan engine has been created as part of the Numerical Propulsion System Simulation Project. The high-fidelity computer simulation utilizes computer models that were developed at NASA Glenn Research Center in cooperation with turbofan engine manufacturers. The average-passage (APNASA) Navier-Stokes based viscous flow computer code is used to simulate the 3D flow in the compressors and turbines of the advanced commercial turbofan engine. The 3D National Combustion Code (NCC) is used to simulate the flow and chemistry in the advanced aircraft combustor. The APNASA turbomachinery code and the NCC combustor code exchange boundary conditions at the interface planes at the combustor inlet and exit. This computer simulation technique can evaluate engine performance at steady operating conditions. The 3D flow models provide detailed knowledge of the airflow within the fan and compressor, the high and low pressure turbines, and the flow and chemistry within the combustor. The models simulate the performance of the engine at operating conditions that include sea level takeoff and the altitude cruise condition.
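
    The coupling described here, with two solvers exchanging boundary conditions at the combustor inlet and exit planes, can be pictured with the hedged sketch below; the solver wrappers, state variables, and convergence test are hypothetical stand-ins, not the APNASA or NCC interfaces.

        # Sketch of alternating solves with boundary-condition exchange at the interface
        # planes, using hypothetical stand-ins for the turbomachinery and combustor codes.

        def solve_turbomachinery(combustor_exit):
            """Stand-in for the fan/compressor/turbine solve; returns combustor inlet state."""
            return {"total_pressure": 0.98 * combustor_exit["total_pressure"] + 2.0e5,
                    "total_temperature": 800.0}

        def solve_combustor(combustor_inlet):
            """Stand-in for the combustor solve; returns combustor exit state."""
            return {"total_pressure": 0.95 * combustor_inlet["total_pressure"],
                    "total_temperature": combustor_inlet["total_temperature"] + 1000.0}

        exit_state = {"total_pressure": 3.0e6, "total_temperature": 1800.0}  # initial guess
        for iteration in range(50):
            inlet_state = solve_turbomachinery(exit_state)
            new_exit_state = solve_combustor(inlet_state)
            # Converged when the exchanged interface state stops changing.
            if abs(new_exit_state["total_pressure"] - exit_state["total_pressure"]) < 1.0:
                exit_state = new_exit_state
                break
            exit_state = new_exit_state

        print("Iterations:", iteration + 1, "converged exit state:", exit_state)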

  5. Feasibility study for the implementation of NASTRAN on the ILLIAC 4 parallel processor

    NASA Technical Reports Server (NTRS)

    Field, E. I.

    1975-01-01

    The ILLIAC IV, a fourth-generation multiprocessor using parallel processing hardware concepts, is operational at Moffett Field, California. Its capability to excel at matrix manipulation makes the ILLIAC well suited for performing structural analyses using the finite element displacement method. The feasibility of modifying the NASTRAN (NASA structural analysis) computer program to make effective use of the ILLIAC IV was investigated. The characteristics of the ILLIAC and of the ARPANET, a telecommunications network which spans the continent and makes the ILLIAC accessible to nearly all major industrial centers in the United States, are summarized. Two distinct approaches are studied: retaining NASTRAN as it now operates on many of the host computers of the ARPANET to process the input and output while using the ILLIAC only for the major computational tasks, and installing NASTRAN to operate entirely in the ILLIAC environment. Though both alternatives offer similar and significant increases in computational speed over modern third-generation processors, the full installation of NASTRAN on the ILLIAC is recommended. Specifications are presented for performing that task, with manpower estimates and schedules to correspond.

  6. Computers and Media Centers--A Winning Combination.

    ERIC Educational Resources Information Center

    Graf, Nancy

    1984-01-01

    Profile of the computer program offered by the library/media center at Chief Joseph Junior High School in Richland, Washington, highlights program background, operator's licensing procedure, the trainer license, assistance from high school students, need for more computers, handling of software, and helpful hints. (EJS)

  7. For operation of the Computer Software Management and Information Center (COSMIC)

    NASA Technical Reports Server (NTRS)

    Carmon, J. L.

    1983-01-01

    During the month of June, the Survey Research Center (SRC) at the University of Georgia designed new benefits questionnaires for computer software management and information center (COSMIC). As a test of their utility, these questionnaires are now used in the benefits identification process.

  8. Performance, Accuracy, Data Delivery, and Feedback Methods in Order Selection: A Comparison of Voice, Handheld, and Paper Technologies

    ERIC Educational Resources Information Center

    Ludwig, Timothy D.; Goomas, David T.

    2007-01-01

    A field study was conducted in auto-parts after-market distribution centers where selectors used handheld computers to receive instructions and feedback about their product selection process. A wireless voice-interaction technology was then implemented in a multiple baseline fashion across three departments of a warehouse (N = 14) and was associated…

  9. Communications network design and costing model users manual

    NASA Technical Reports Server (NTRS)

    Logan, K. P.; Somes, S. S.; Clark, C. A.

    1983-01-01

    The information and procedures needed to exercise the communications network design and costing model for performing network analysis are presented. Specific procedures are included for executing the model on the NASA Lewis Research Center IBM 3033 computer. The concepts, functions, and data bases relating to the model are described. Model parameters and their format specifications for running the model are detailed.

  10. AHPCRC - Army High Performance Computing Research Center

    DTIC Science & Technology

    2008-01-01

    University) Birds and insects use complex flapping and twisting wing motions to maneuver, hover, avoid obstacles, and maintain or regain their...vehicles for use in sensing, surveillance, and wireless communications. HPC simulations examine plunging, pitching, and twisting motions of aeroelastic...wings, to optimize the amplitudes and frequencies of flapping and twisting motions for the maximum amount of thrust. Several methods of calculation

  11. High-Performance Computing Data Center Cooling System Energy Efficiency |

    Science.gov Websites

    approaches involve a cooling distribution unit (CDU) (2), which interfaces with the facility cooling loop and the energy recovery water (ERW) loop (5), which is a closed-loop system. There are three heat rejection options for this IT load: When possible, heat energy from the energy recovery loop is transferred

  12. Academic Growth Expectations for Students with Emotional and Behavior Disorders

    ERIC Educational Resources Information Center

    Ysseldyke, Jim; Scerra, Carmine; Stickney, Eric; Beckler, Amanda; Dituri, Joan; Ellis, Karen

    2017-01-01

    Computer adaptive assessments were used to monitor the academic status and growth of students with emotional behavior disorders (EBD) in reading (N = 321) and math (N = 322) in a regional service center serving 56 school districts. A cohort sequential model was used to compare that performance to the status and growth of a national user base of…

  13. Self-Paced Instruction in a Cognitively Oriented Skills Course: Supplyman, MOS 76Y10. Technical Report No. 75-20.

    ERIC Educational Resources Information Center

    Hungerland, Jacklyn E.; Taylor, John E.

    As part of the Army's adoption of performance-oriented instruction in Army training centers, a study was conducted to determine the feasibility of using self-paced instruction without programed texts in a clerical and computational skills course. Course organization, course management, and effective instructional techniques for self-paced training…

  14. Using Eye Tracking as a Tool to Teach Informatics Students the Importance of User Centered Design

    ERIC Educational Resources Information Center

    Gelderblom, Helene; Adebesin, Funmi; Brosens, Jacques; Kruger, Rendani

    2017-01-01

    In this article the authors describe how they incorporate eye tracking in a human-computer interaction (HCI) course that forms part of a postgraduate Informatics degree. The focus is on an eye tracking assignment that involves student groups performing usability evaluation studies for real world clients. Over the past three years the authors have…

  15. Web-Based Testing: Exploring the Relationship between Hardware Usability and Test Performance

    ERIC Educational Resources Information Center

    Huff, Kyle; Cline, Melinda; Guynes, Carl S.

    2012-01-01

    Web-based testing has recently become common in both academic and professional settings. A web-based test is administered through a web browser. Individuals may complete a web-based test at nearly any time and at any place. In addition, almost any computer lab can become a testing center. It is important to understand the environmental issues that…

  16. Pervasive brain monitoring and data sharing based on multi-tier distributed computing and linked data technology

    PubMed Central

    Zao, John K.; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping

    2014-01-01

    EEG-based brain-computer interfaces (BCI) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time prediction of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies offered promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-the-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implement a pilot system by employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March 2013 and then running a multi-player on-line EEG-BCI game in September 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system. PMID:24917804

  18. Program management model study

    NASA Technical Reports Server (NTRS)

    Connelly, J. J.; Russell, J. E.; Seline, J. R.; Sumner, N. R., Jr.

    1972-01-01

    Two models, a system performance model and a program assessment model, have been developed to assist NASA management in the evaluation of development alternatives for the Earth Observations Program. Two computer models were developed and demonstrated on the Goddard Space Flight Center Computer Facility. Procedures have been outlined to guide the user of the models through specific evaluation processes, and the preparation of inputs describing earth observation needs and earth observation technology. These models are intended to assist NASA in increasing the effectiveness of the overall Earth Observation Program by providing a broader view of system and program development alternatives.

  19. Smart Computer-Assisted Markets

    NASA Astrophysics Data System (ADS)

    McCabe, Kevin A.; Rassenti, Stephen J.; Smith, Vernon L.

    1991-10-01

    The deregulation movement has motivated the experimental study of auction markets designed for interdependent network industries such as natural gas pipelines or electric power systems. Decentralized agents submit bids to buy commodity and offers to sell transportation and commodity to a computerized dispatch center. Computer algorithms determine prices and allocations that maximize the gains from exchange in the system relative to the submitted bids and offers. The problem is important, because traditionally the scale and coordination economies in such industries were thought to require regulation. Laboratory experiments are used to study feasibility, limitations, incentives, and performance of proposed market designs for deregulation, providing motivation for new theory.
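
    A toy version of the allocation computation described here, clearing submitted bids and offers so as to maximize the gains from exchange, is sketched below; the bid and offer values are hypothetical, and the real dispatch algorithms also handle transportation capacity and network constraints that this sketch ignores.

        # Single-commodity clearing: match the highest bids with the lowest offers
        # for as long as each matched bid still exceeds its matched offer.
        bids = [38, 35, 31, 28, 22]     # buyers' willingness to pay (hypothetical)
        offers = [20, 24, 27, 33, 36]   # sellers' asking prices (hypothetical)

        bids.sort(reverse=True)
        offers.sort()

        traded, surplus = 0, 0
        for bid, offer in zip(bids, offers):
            if bid < offer:
                break
            traded += 1
            surplus += bid - offer

        # One common convention: clear at the midpoint of the marginal matched pair.
        price = (bids[traded - 1] + offers[traded - 1]) / 2 if traded else None
        print(f"Units traded: {traded}, gains from exchange: {surplus}, clearing price: {price}")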

  20. Wood Products Analysis

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Structural Reliability Consultants' computer program creates graphic plots showing the statistical parameters of glue laminated timbers, or 'glulam.' The company president, Dr. Joseph Murphy, read in NASA Tech Briefs about work related to analysis of Space Shuttle surface tile strength performed for Johnson Space Center by Rockwell International Corporation. Analysis led to a theory of 'consistent tolerance bounds' for statistical distributions, applicable in industrial testing where statistical analysis can influence product development and use. Dr. Murphy then obtained the Tech Support Package that covers the subject in greater detail. The TSP became the basis for Dr. Murphy's computer program PC-DATA, which he is marketing commercially.

  1. Reinventing patient-centered computing for the twenty-first century.

    PubMed

    Goldberg, H S; Morales, A; Gottlieb, L; Meador, L; Safran, C

    2001-01-01

    Despite evidence over the past decade that patients like and will use patient-centered computing systems in managing their health, patients have remained forgotten stakeholders in advances in clinical computing systems. We present a framework for patient empowerment and the technical realization of that framework in an architecture called CareLink. In an evaluation of the initial deployment of CareLink in the support of neonatal intensive care, we have demonstrated a reduction in the length of stay for very-low birthweight infants, and an improvement in family satisfaction with care delivery. With the ubiquitous adoption of the Internet into the general culture, patient-centered computing provides the opportunity to mend broken health care relationships and reconnect patients to the care delivery process. CareLink itself provides functionality to support both clinical care and research, and provides a living laboratory for the further study of patient-centered computing.

  2. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    NASA Astrophysics Data System (ADS)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long-distance distributed computing. Finally, the project is developing 2D and 3D visualization software as part of the international AIPS++ project. This research and development project is being carried out by a team of experts in radio astronomy, algorithm development for massively parallel architectures, high-speed networking, database management, and Thinking Machines Corporation personnel. The development of this complete software, distributed computing, and data archive and library solution to the radio astronomy computing problem will advance our expertise in high performance computing and communications technology and the application of these techniques to astronomical data processing.

  3. Phase quality map based on local multi-unwrapped results for two-dimensional phase unwrapping.

    PubMed

    Zhong, Heping; Tang, Jinsong; Zhang, Sen

    2015-02-01

    The efficiency of a phase unwrapping algorithm and the reliability of the corresponding unwrapped result are two key problems in reconstructing the digital elevation model of a scene from its interferometric synthetic aperture radar (InSAR) or interferometric synthetic aperture sonar (InSAS) data. In this paper, a new phase quality map is designed and implemented in a graphics processing unit (GPU) environment, which greatly accelerates the unwrapping process of the quality-guided algorithm and enhances the correctness of the unwrapped result. In a local wrapped phase window, the center point is selected as the reference point, and then two unwrapped results are computed by integrating in two different simple ways. After the two local unwrapped results are computed, the total difference of the two unwrapped results is regarded as the phase quality value of the center point. In order to accelerate the computation of the newly proposed quality map, we implemented it in a GPU environment. The wrapped phase data are first uploaded to the memory of a device, and then the kernel function is called in the device to compute the phase quality in parallel by blocks of threads. Unwrapping tests performed on simulated and real InSAS data confirm the accuracy and efficiency of the proposed method.
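
    A minimal NumPy sketch of the quality measure described here is given below; it assumes a 3x3 window and uses row-first versus column-first integration from the center pixel as the two "simple ways" of unwrapping, which is one plausible reading of the method rather than the authors' exact formulation, and it omits the GPU parallelization.

        import numpy as np

        def wrap(angle):
            """Wrap angles into the interval [-pi, pi)."""
            return (angle + np.pi) % (2.0 * np.pi) - np.pi

        def local_quality(psi, r, c):
            """Inconsistency of pixel (r, c): difference between two local unwrappings
            of its 3x3 window, each referenced to the center pixel (smaller is better)."""
            win = psi[r - 1:r + 2, c - 1:c + 2]
            u_row_first = np.zeros_like(win)
            u_col_first = np.zeros_like(win)
            for i in range(3):
                for j in range(3):
                    # Path 1: step along the center row, then up/down the column.
                    u_row_first[i, j] = (wrap(win[1, j] - win[1, 1])
                                         + wrap(win[i, j] - win[1, j]))
                    # Path 2: step along the center column, then across the row.
                    u_col_first[i, j] = (wrap(win[i, 1] - win[1, 1])
                                         + wrap(win[i, j] - win[i, 1]))
            return float(np.sum(np.abs(u_row_first - u_col_first)))

        # On a smooth synthetic interferogram the two paths agree, so the value is ~0.
        x, y = np.meshgrid(np.linspace(0, 6 * np.pi, 64), np.linspace(0, 4 * np.pi, 64))
        psi = wrap(x + 0.5 * y)
        print(local_quality(psi, 32, 32))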

  4. Survey on gynecological cancer treatment by Piedmont, Liguria, and Valle d'Aosta group of AIRO (Italian Association of Radiation Oncology).

    PubMed

    Cattari, Gabriella; Delmastro, Elena; Bresciani, Sara; Gribaudo, Sergio; Melano, Antonella; Giannelli, Flavio; Tessa, Maria; Chiarlone, Renato; Scolaro, Tindaro; Krengli, Marco; Urgesi, Alessandro; Gabriele, Pietro

    2016-04-01

    We focused attention on radiation therapy practices for gynecological malignancies in Piedmont, Liguria, and Valle d'Aosta in order to document current treatment practice and to improve the quality of care. We proposed a cognitive survey to evaluate the standard practice patterns for gynecological cancer management adopted from 2012 to 2014 by radiotherapy (RT) centers with a large volume of gynecological cancer cases. There were three topics: 1. Taking care and multidisciplinary approach, 2. Radiotherapy treatment and brachytherapy, 3. Follow-up. Nineteen centers treated gynecological malignancies and 12 of these had a dedicated multidisciplinary team. The radiotherapy option was used in all clinical settings: definitive, adjuvant, and palliative. In total, 1978 patients were treated. There were 834 brachytherapy (BRT) treatments. The fusion between diagnostic imaging (magnetic resonance imaging - MRI, positron emission tomography - PET) and computed tomography (CT) simulation was used for contouring in all centers. Conformal RT and intensity modulated radiation therapy (IMRT) were the most frequent techniques. Image guided radiation therapy (IGRT) was used in 10/19 centers. There were 8 active BRT centers. Brachytherapy was performed both with radical intent and as a boost, mostly by HDR (6/8 centers). The doses for exclusive BRT were between 20 and 30 Gy. The doses for BRT boost were between 10 and 20 Gy. Four centers used CT-MRI compatible applicators but only one used MRI for planning. The BRT plans on the vaginal cuff were still performed on traditional radiographs in 2 centers. The plan sum was evaluated in only 1 center. Only 1 center performed in vivo dosimetry. In the last three years, the multidisciplinary approach, contouring, treatment techniques, doses, and control systems were similar across Liguria, Piedmont, and Valle d'Aosta. However, the technology implementation has not yet translated into real treatment innovation.

  6. Integration of Russian Tier-1 Grid Center with High Performance Computers at NRC-KI for LHC experiments and beyond HENP

    NASA Astrophysics Data System (ADS)

    Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.

    2015-12-01

    The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run2). The need for simulation, data processing and analysis would overwhelm the expected capacity of the grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at the Kurchatov Institute (NRC-KI) in Moscow is a part of WLCG and will process, simulate and store up to 10% of the total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, the Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. The delegation of even a fraction of supercomputing resources to LHC computing will notably increase the total capacity. In 2014 the development of a portal combining a Tier-1 and a supercomputer at the Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences such as biology, with genome sequencing analysis, and astrophysics, with cosmic ray analysis and antimatter and dark matter searches.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    20th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; & BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 20th edition of the TOP500 list of the world's fastest supercomputers was released today (November 15, 2002). The Earth Simulator supercomputer installed earlier this year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (trillions of calculations per second), retains the number one position. The No. 2 and No. 3 positions are held by two new, identical ASCI Q systems at Los Alamos National Laboratory (7.73 Tflop/s each). These systems were built by Hewlett-Packard and are based on the AlphaServer SC computer system.

  8. Flowfield visualization for SSME hot gas manifold

    NASA Technical Reports Server (NTRS)

    Roger, Robert P.

    1988-01-01

    The objective of this research, as defined by NASA-Marshall Space Flight Center, was two-fold: (1) to numerically simulate viscous subsonic flow in a proposed elliptical two-duct version of the fuel-side Hot Gas Manifold (HGM) for the Space Shuttle Main Engine (SSME), and (2) to provide analytical support for SSME-related numerical computational experiments being performed by the Computational Fluid Dynamics staff in the Aerophysics Division of the Structures and Dynamics Laboratory at NASA-MSFC. The numerical results of the HGM calculations were intended to complement both water flow and air flow visualization experiments in two-duct geometries performed at NASA-MSFC and Rocketdyne. In addition, code modification and improvement efforts were undertaken to strengthen the CFD capabilities of NASA-MSFC for producing reliable predictions of flow environments within the SSME.

  9. Opportunities and choice in a new vector era

    NASA Astrophysics Data System (ADS)

    Nowak, A.

    2014-06-01

    This work discusses the significant changes in the computing landscape related to the progression of Moore's Law, and their implications for scientific computing. Particular attention is devoted to the High Energy Physics (HEP) domain, which has always made good use of threading, but in which levels of parallelism closer to the hardware were often left underutilized. Findings of the CERN openlab Platform Competence Center are reported in the context of expanding "performance dimensions", and especially the resurgence of vectors. These suggest that data-oriented designs are feasible in HEP and have considerable potential for performance improvements on multiple levels, but will rarely trump algorithmic enhancements. Finally, an analysis of upcoming hardware and software technologies identifies heterogeneity as a major challenge for software, which will require more emphasis on scalable, efficient design.
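
    As a small illustration of the "performance dimension" in question, the snippet below contrasts an element-by-element Python loop with a data-oriented, whole-array formulation that the underlying library can map onto vector hardware; the invariant-mass calculation is a generic HEP-flavored example, not CERN openlab code.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        px, py, pz = (rng.normal(0.0, 10.0, n) for _ in range(3))
        true_mass = rng.uniform(0.1, 5.0, n)
        energy = np.sqrt(px**2 + py**2 + pz**2 + true_mass**2)

        # Scalar, element-by-element loop: easy to read, but the hardware's vector
        # units sit idle while the interpreter steps through one element at a time.
        def masses_loop(px, py, pz, e):
            out = np.empty(len(e))
            for i in range(len(e)):
                out[i] = (e[i]**2 - px[i]**2 - py[i]**2 - pz[i]**2) ** 0.5
            return out

        # Data-oriented formulation: one whole-array expression that the library can
        # execute with SIMD instructions, typically far faster than the loop above.
        def masses_vectorized(px, py, pz, e):
            return np.sqrt(e**2 - px**2 - py**2 - pz**2)

        assert np.allclose(masses_loop(px, py, pz, energy),
                           masses_vectorized(px, py, pz, energy))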

  10. Status and Trends in Networking at LHC Tier1 Facilities

    NASA Astrophysics Data System (ADS)

    Bobyshev, A.; DeMar, P.; Grigaliunas, V.; Bigrow, J.; Hoeft, B.; Reymund, A.

    2012-12-01

    The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both the WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, as well as extrapolating where we anticipate networking technology is heading. Our analysis will include examination into the following areas: • Evolution of Tier1 centers to their current state • Evolving data center networking models and how they apply to Tier1 centers • Impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers • Trends in WAN data movement and emergence of software-defined WAN network capabilities • Network virtualization

  11. Image Processor Electronics (IPE): The High-Performance Computing System for NASA SWIFT Mission

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang H.; Settles, Beverly A.

    2003-01-01

    Gamma Ray Bursts (GRBs) are believed to be the most powerful explosions that have occurred in the Universe since the Big Bang and are a mystery to the scientific community. Swift, a NASA mission that includes international participation, was designed and built in preparation for a 2003 launch to help determine the origin of Gamma Ray Bursts. Locating the position in the sky where a burst originates requires intensive computing, because the duration of a GRB can range between a few milliseconds and approximately a minute. The instrument data system must constantly accept multiple images representing large regions of the sky that are generated by sixteen gamma ray detectors operating in parallel. It then must process the received images very quickly in order to determine the existence of possible gamma ray bursts and their locations. The high-performance instrument data computing system that accomplishes this is called the Image Processor Electronics (IPE). The IPE was designed, built, and tested by NASA Goddard Space Flight Center (GSFC) in order to meet these challenging requirements. The IPE is a small-size, low-power, high-performance computing system for space applications. This paper addresses the system implementation and the system hardware architecture of the IPE. The paper concludes with the IPE system performance that was measured during end-to-end system testing.

  12. Mobile high-performance computing (HPC) for synthetic aperture radar signal processing

    NASA Astrophysics Data System (ADS)

    Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen

    2018-04-01

    The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. At present, it is very unlikely that the required processing power can be achieved by aggregating emerging heterogeneous many-core processing platforms consisting of CPU, Field Programmable Gate Array, and Graphics Processor cores constrained by power and performance. To address these challenges, we performed a Synthetic Aperture Radar case study for Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). However, these DNN models are typically trained using GPUs with gigabytes of external memory and make heavy use of 32-bit floating point operations. As a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we propose a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. This compression framework utilizes promising DNN compression techniques, including pruning and weight quantization, while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
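
    The two compression steps named here, magnitude pruning and weight quantization, can be illustrated with the hedged NumPy sketch below; it operates on a random stand-in weight matrix rather than an actual ATR network and uses a simple symmetric 8-bit scheme, which is a common choice rather than the specific method of the paper.

        import numpy as np

        rng = np.random.default_rng(42)
        weights = rng.normal(0.0, 0.1, size=(256, 128)).astype(np.float32)  # stand-in layer

        # Magnitude pruning: zero out the smallest-magnitude fraction of weights.
        prune_fraction = 0.7
        threshold = np.quantile(np.abs(weights), prune_fraction)
        pruned = np.where(np.abs(weights) >= threshold, weights, 0.0).astype(np.float32)

        # Symmetric 8-bit quantization of the surviving weights.
        scale = np.abs(pruned).max() / 127.0
        q_int8 = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)  # stored form
        dequantized = q_int8.astype(np.float32) * scale                        # used at inference

        sparsity = float((pruned == 0).mean())
        max_error = float(np.abs(dequantized - pruned).max())
        print(f"sparsity: {sparsity:.2f}, int8 storage, max dequantization error: {max_error:.5f}")
        # Storage drops from 32-bit floats to 8-bit integers plus one scale per layer,
        # and the zeroed weights can additionally be kept in a sparse format.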

  13. The development and use of a computer-interactive data acquisition and display system in a flight environment

    NASA Technical Reports Server (NTRS)

    Bever, G. A.

    1981-01-01

    The flight test data requirements at the NASA Dryden Flight Research Center increased in complexity, and more advanced instrumentation became necessary to accomplish mission goals. This paper describes the way in which an airborne computer was used to perform real-time calculations on critical flight test parameters during a flight test on a winglet-equipped KC-135A aircraft. With the computer, an airborne flight test engineer can select any sensor for airborne display in several formats, including engineering units. The computer is able to not only calculate values derived from the sensor outputs but also to interact with the data acquisition system. It can change the data cycle format and data rate, and even insert the derived values into the pulse code modulation (PCM) bit stream for recording.

  14. Automatic Generation of Overlays and Offset Values Based on Visiting Vehicle Telemetry and RWS Visuals

    NASA Technical Reports Server (NTRS)

    Dunne, Matthew J.

    2011-01-01

    The development of computer software as a tool to generate visual displays has led to an overall expansion of automated, computer-generated imagery in the aerospace industry. These visual overlays are generated by combining raw data with pre-existing data on the object or objects being analyzed on the screen. The National Aeronautics and Space Administration (NASA) uses this kind of software to generate on-screen overlays when a Visiting Vehicle (VV) is berthing with the International Space Station (ISS). For Mission Control Center personnel to contribute to the VV berthing process, software similar to that on the ISS must be readily available on the ground for analysis. In addition, this software must perform engineering calculations and save data for further analysis.
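
    A minimal sketch of the offset computation implied by this record's title, comparing the VV position reported by telemetry with a visually derived reference in order to place an on-screen overlay, is shown below. The function name, camera scale, and all values are hypothetical illustrations, not NASA's actual ground software.

        import numpy as np

        def overlay_offset_pixels(telemetry_pos_m, visual_pos_m, meters_per_pixel=0.01):
            """Return the lateral on-screen offset (pixels) between the two position estimates."""
            delta_m = np.asarray(telemetry_pos_m, dtype=float) - np.asarray(visual_pos_m, dtype=float)
            return delta_m[:2] / meters_per_pixel            # x/y offset projected to pixels

        # Example: telemetry-reported vs. visually derived position, both in meters.
        print(overlay_offset_pixels([0.12, -0.05, 9.8], [0.10, -0.02, 9.8]))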

  15. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC year 1 quarter 4 progress report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lottes, S.A.; Kulak, R.F.; Bojanowski, C.

    2011-12-09

    The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project in August 2010 to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high-performance-computing-based analysis capabilities. The project was established under a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics here encompasses Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied to Fluid-Structure Interaction (FSI) problems. The major areas of focus are wind and water effects on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risk of structural failure that these loads pose. For flood events at bridges, another major focus is assessment of the risk to bridges caused by scour of stream and riverbed material away from a bridge's foundations. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of salt spray transport onto bridge girders to address the suitability of weathering steel in bridges, CFD analysis of the operation of the wind tunnel in the TFHRC wind engineering laboratory, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. This quarterly report documents technical progress on the project tasks for the period of July through September 2011.

  16. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how best to gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered on the idea of in situ processing, where visualization and analysis are performed while the data is still resident in memory. Our paper examines several key design and performance issues related to in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.
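
    The in situ pattern the paper studies can be illustrated generically: analysis runs inside the simulation loop while the data is still in memory, and only small summaries are written out instead of full snapshots. The solver and analysis routine below are placeholders for illustration, not a specific in situ library's API.

        import numpy as np

        def advance(state, dt=0.1):
            """Placeholder time step: diffuse the field slightly."""
            return state + dt * (np.roll(state, 1) + np.roll(state, -1) - 2 * state)

        def in_situ_analysis(step, state):
            """In situ hook: reduce the full field to a few scalars per step."""
            return {"step": step, "min": float(state.min()), "max": float(state.max())}

        state = np.random.default_rng(2).random(1_000)
        summaries = []
        for step in range(100):
            state = advance(state)
            if step % 10 == 0:                     # analyze in memory; no full-field I/O per step
                summaries.append(in_situ_analysis(step, state))
        print(summaries[-1])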

  17. Navier-Stokes Simulation of the Air-Conditioning Facility of a Large Modern Computer Room

    NASA Technical Reports Server (NTRS)

    2005-01-01

    NASA recently assembled one of the world's fastest operational supercomputers to meet the agency's new high-performance computing needs. This large-scale system, named Columbia, consists of 20 interconnected SGI Altix 512-processor systems, for a total of 10,240 Intel Itanium-2 processors. High-fidelity CFD simulations were performed for the NASA Advanced Supercomputing (NAS) computer room at Ames Research Center. The purpose of the simulations was to assess the adequacy of the existing air handling and conditioning system and to recommend design changes if needed. The simulations were performed with NASA's OVERFLOW-2 CFD code, which utilizes overset structured grids. A new set of boundary conditions was developed and added to the flow solver for modeling the room's air conditioning and proper cooling of the equipment. Boundary condition parameters for the flow solver are based on cooler CFM (flow rate) ratings and reasonable assumptions of flow and heat transfer data for the floor and central processing units (CPUs). The geometry modeling from blueprints and the grid generation were handled by the NASA Ames software package Chimera Grid Tools (CGT). The geometric model was developed as a CGT-scripted template, which can easily be modified to accommodate changes in the shape and size of the room and in the locations and dimensions of the CPU racks, disk racks, coolers, power distribution units, and mass-storage system. The compute nodes are grouped in pairs of racks with an aisle in the middle, and high-speed connection cables link the racks through overhead cable trays. Cool air from the cooling units is pumped into the computer room from a sub-floor through perforated floor tiles that run along the outside length of each rack; the CPU cooling fans draw this cool air in and eject warm air into the center aisle between the racks. The warm air is eventually drawn into the cooling units located near the walls of the room. One major concern is that hot air ejected into the middle aisle might recirculate back to the cool side of the racks and cause thermal short-cycling. The simulations analyzed and addressed the following elements of the computer room: 1) high-temperature build-up in certain regions of the room; 2) areas of low air circulation; 3) potential short-cycling of the computer rack cooling system; 4) effectiveness of the perforated cooling floor tiles; and 5) effects of changes in various aspects of the cooling units. Detailed flow visualization shows temperature distribution, air-flow streamlines, and velocities in the computer room.
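
    One of the boundary-condition inputs mentioned above, a cooler's CFM rating, can be converted into an inlet-velocity boundary condition for the perforated floor tiles with a short calculation. The tile size, open-area fraction, and CFM value below are illustrative assumptions, not the actual room's specifications.

        CFM_TO_M3_PER_S = 0.000471947          # 1 cubic foot per minute in m^3/s

        def tile_inlet_velocity(cfm_rating, n_tiles, tile_area_m2=0.36, open_fraction=0.25):
            """Average velocity (m/s) through the open area of each perforated tile."""
            volume_flow = cfm_rating * CFM_TO_M3_PER_S          # total cooler flow, m^3/s
            per_tile_flow = volume_flow / n_tiles
            return per_tile_flow / (tile_area_m2 * open_fraction)

        # Example: a 12,000 CFM cooler feeding 40 perforated tiles.
        print(f"{tile_inlet_velocity(cfm_rating=12000, n_tiles=40):.2f} m/s")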

  18. The SCEC Community Modeling Environment (SCEC/CME) - An Overview of its Architecture and Current Capabilities

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Minster, B.; Moore, R.; Kesselman, C.; SCEC ITR Collaboration

    2004-12-01

    The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, the Incorporated Research Institutions for Seismology, and the U.S. Geological Survey, is developing the Southern California Earthquake Center Community Modeling Environment (CME) under a five-year grant from the National Science Foundation's Information Technology Research (ITR) Program, jointly funded by the Geosciences and Computer and Information Science & Engineering Directorates. The CME system is an integrated geophysical simulation modeling framework that automates the process of selecting, configuring, and executing models of earthquake systems. During the Project's first three years, we have performed fundamental geophysical and information technology research and have also developed substantial system capabilities, software tools, and data collections that can help scientists perform systems-level earthquake science. The CME system provides collaborative tools to facilitate distributed research and development. These are primarily communication tools, providing researchers with access to information in ways that are convenient and useful. The CME system also gives collaborators access to significant computing and storage resources, including in-house servers, Project allocations on the USC High Performance Computing Linux Cluster, and allocations on NPACI supercomputers and the TeraGrid. The CME system provides access to SCEC community geophysical models such as the Community Velocity Model, Community Fault Model, Community Crustal Motion Model, and Community Block Model. The organizations that develop these models often provide direct access to them, so the CME system is not required in order to use them; in some cases, however, the CME system supplements the SCEC community models with utility codes that make them easier to use or access, and in other cases it provides alternatives to the community models. The CME system hosts a collection of community geophysical software codes, including seismic hazard analysis (SHA) programs developed by the SCEC/USGS OpenSHA group and anelastic wave propagation codes such as Kim Olsen's Finite Difference code and Carnegie Mellon's Hercules Finite Element tool chain. The CME system can execute a workflow, that is, a series of geophysical computations in which the output of one processing step serves as the input to a subsequent step. Our workflow capability utilizes grid-based computing software that can submit calculations to a pool of computing resources, as well as data management tools that help us maintain an association between data files and metadata descriptions of those files. The CME system maintains, and provides access to, a collection of valuable geophysical data sets. The current CME Digital Library holdings include a collection of 60 ground motion simulation results calculated by a SCEC/PEER working group and a collection of Green's functions calculated for 33 TriNet broadband receiver sites in the Los Angeles area.
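
    The workflow capability described above, chaining the output of one processing step into the next while recording metadata for each data product, can be sketched generically as follows. The step names, processing functions, and metadata fields are hypothetical placeholders rather than the CME system's actual tools.

        import hashlib, json, pathlib

        def run_step(name, func, input_path, output_path, catalog):
            """Run one workflow step and record a metadata entry for its output file."""
            result = func(pathlib.Path(input_path).read_text())
            pathlib.Path(output_path).write_text(result)
            catalog.append({
                "step": name,
                "input": str(input_path),
                "output": str(output_path),
                "sha256": hashlib.sha256(result.encode()).hexdigest(),
            })
            return output_path

        catalog = []
        pathlib.Path("source_model.txt").write_text("rupture scenario parameters")
        step1 = run_step("wave_propagation", str.upper, "source_model.txt", "wavefield.txt", catalog)
        step2 = run_step("hazard_curves", lambda s: s[::-1], step1, "hazard.txt", catalog)
        print(json.dumps(catalog, indent=2))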

  19. Advanced Biomedical Computing Center (ABCC) | DSITP

    Cancer.gov

    The Advanced Biomedical Computing Center (ABCC), located in Frederick, Maryland (MD), provides HPC resources for both NIH/NCI intramural scientists and the extramural biomedical research community. Its mission is to provide HPC support, to collaborate on research, and to conduct in-house research in various areas of computational biology and biomedical research.

  20. New developments in delivering public access to data from the National Center for Computational Toxicology at the EPA

    EPA Science Inventory

    Researchers at EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The goal of this researc...
