Sample records for high computational effort

  1. U.S. EPA computational toxicology programs: Central role of chemical-annotation efforts and molecular databases

    EPA Science Inventory

    EPA’s National Center for Computational Toxicology is engaged in high-profile research efforts to improve the ability to more efficiently and effectively prioritize and screen thousands of environmental chemicals for potential toxicity. A central component of these efforts invol...

  2. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C. J.; Edwards, M. J.; McCoy, M. G.

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high-performance computing platforms on which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure that numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  3. Earth Science Technology Office's Computational Technologies Project

    NASA Technical Reports Server (NTRS)

    Fischer, James (Technical Monitor); Merkey, Phillip

    2005-01-01

    This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies (CT) Project and to engage the Beowulf Cluster Computing community as well as the High Performance Computing research community, so that we could predict the applicability of these technologies to the scientific community represented by the CT Project and formulate long-term strategies to provide the computational resources necessary to attain its anticipated scientific objectives. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations and algorithmic requirements, and the capability of high-performance computers to satisfy this anticipated need.

  4. Staff | Computational Science | NREL

    Science.gov Websites

    Staff directory excerpt: develops and leads laboratory-wide efforts in high-performance computing and energy-efficient data centers; Professional IV, High Perf Computing, Jim.Albin@nrel.gov, 303-275-4069; Ananthan, Shreyas, Senior Scientist, High-Performance Algorithms and Modeling, Shreyas.Ananthan@nrel.gov, 303-275-4807; Bendl, Kurt, IT Professional IV, High

  5. Earth and Space Sciences Project Services for NASA HPCC

    NASA Technical Reports Server (NTRS)

    Merkey, Phillip

    2002-01-01

    This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies (CT) Project and to engage the Beowulf Cluster Computing community as well as the High Performance Computing research community, so that we could predict the applicability of these technologies to the scientific community represented by the CT Project and formulate long-term strategies to provide the computational resources necessary to attain its anticipated scientific objectives. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations and algorithmic requirements, and the capability of high-performance computers to satisfy this anticipated need.

  6. Office workers' computer use patterns are associated with workplace stressors.

    PubMed

    Eijckelhof, Belinda H W; Huysmans, Maaike A; Blatter, Birgitte M; Leider, Priscilla C; Johnson, Peter W; van Dieën, Jaap H; Dennerlein, Jack T; van der Beek, Allard J

    2014-11-01

    This field study examined associations between workplace stressors and office workers' computer use patterns. We collected keyboard and mouse activities of 93 office workers (68F, 25M) for approximately two work weeks. Linear regression analyses examined the associations between self-reported effort, reward, overcommitment, and perceived stress and software-recorded computer use duration, number of short and long computer breaks, and pace of input device usage. Daily duration of computer use was, on average, 30 min longer for workers with high compared to low levels of overcommitment and perceived stress. The number of short computer breaks (30 s-5 min long) was approximately 20% lower for those with high compared to low effort and for those with low compared to high reward. These outcomes support the hypothesis that office workers' computer use patterns vary across individuals with different levels of workplace stressors. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  7. Computer simulation and performance assessment of the packet-data service of the Aeronautical Mobile Satellite Service (AMSS)

    NASA Technical Reports Server (NTRS)

    Ferzali, Wassim; Zacharakis, Vassilis; Upadhyay, Triveni; Weed, Dennis; Burke, Gregory

    1995-01-01

    The ICAO Aeronautical Mobile Communications Panel (AMCP) completed the drafting of the Aeronautical Mobile Satellite Service (AMSS) Standards and Recommended Practices (SARP's) and the associated Guidance Material and submitted these documents to the ICAO Air Navigation Commission (ANC) for ratification in May 1994. This effort encompassed an extensive, multi-national SARP's validation. As part of this activity, the US Federal Aviation Administration (FAA) sponsored an effort to validate the SARP's via computer simulation. This paper provides a description of this effort. Specifically, it describes: (1) the approach selected for the creation of a high-fidelity AMSS computer model; (2) the test traffic generation scenarios; and (3) the resultant AMSS performance assessment. More recently, the AMSS computer model was also used to provide AMSS performance statistics in support of the RTCA standardization activities. This paper describes this effort as well.

  8. High-Performance Computing: High-Speed Computer Networks in the United States, Europe, and Japan. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request from the Senate Committee on Commerce, Science, and Transportation, and from the House Committee on Science, Space, and Technology, for information on efforts to develop high-speed computer networks in the United States, Europe (limited to France, Germany, Italy, the Netherlands, and the United…

  9. Equal Time for Women.

    ERIC Educational Resources Information Center

    Kolata, Gina

    1984-01-01

    Examines social influences which discourage women from pursuing studies in computer science, including monopoly of computer time by boys at the high school level, sexual harassment in college, movies, and computer games. Describes some initial efforts to encourage females of all ages to study computer science. (JM)

  10. Experimental Evaluation and Workload Characterization for High-Performance Computer Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.

    1995-01-01

    This research was conducted in the context of the Joint NSF/NASA Initiative on Evaluation (JNNIE). JNNIE is an inter-agency research program that goes beyond typical benchmarking to provide in-depth evaluations and an understanding of the factors that limit the scalability of high-performance computing systems. Many NSF and NASA centers have participated in the effort. Our research effort was an integral part of implementing JNNIE in the NASA ESS grand challenge applications context. Our work under this program was composed of three distinct but related activities: the evaluation of NASA ESS high-performance computing testbeds using the wavelet decomposition application; the evaluation of NASA ESS testbeds using astrophysical simulation applications; and the development of an experimental model for workload characterization for understanding workload requirements. In this report, we provide a summary of findings that covers all three parts, a list of the publications that resulted from this effort, and three appendices with the details of each of the studies, using a key publication developed under the respective work.

  11. Computational Methods for Stability and Control (COMSAC): The Time Has Come

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Biedron, Robert T.; Ball, Douglas N.; Bogue, David R.; Chung, James; Green, Bradford E.; Grismer, Matthew J.; Brooks, Gregory P.; Chambers, Joseph R.

    2005-01-01

    Powerful computational fluid dynamics (CFD) tools have emerged that appear to offer significant benefits as an adjunct to the experimental methods used by the stability and control community to predict aerodynamic parameters. The decreasing cost and increasing availability of computing hours are making these applications increasingly viable. This paper summarizes the efforts of four organizations to utilize high-end CFD tools to address the challenges of the stability and control arena. The general motivation and backdrop for these efforts are summarized, as well as examples of current applications.

  12. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C C

    The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National programs.

  13. Computational Lipidomics and Lipid Bioinformatics: Filling In the Blanks.

    PubMed

    Pauling, Josch; Klipp, Edda

    2016-12-22

    Lipids are highly diverse metabolites of pronounced importance in health and disease. While metabolomics is a broad field under the omics umbrella that may also relate to lipids, lipidomics is an emerging field which specializes in the identification, quantification and functional interpretation of complex lipidomes. Today, it is possible to identify and distinguish lipids in a high-resolution, high-throughput manner and simultaneously with considerable structural detail. However, doing so may produce thousands of mass spectra in a single experiment, which has created a high demand for specialized computational support to analyze these spectral libraries. The computational biology and bioinformatics community has so far established methodology in genomics, transcriptomics and proteomics, but there are many (combinatorial) challenges when it comes to the structural diversity of lipids and their identification, quantification and interpretation. This review gives an overview of and outlook on lipidomics research and illustrates ongoing computational and bioinformatics efforts. These efforts are important and necessary steps to advance the lipidomics field alongside the analytical, biochemistry, biomedical and biology communities and to close the gap in available computational methodology between lipidomics and other omics sub-branches.

  14. Turbulence modeling of free shear layers for high performance aircraft

    NASA Technical Reports Server (NTRS)

    Sondak, Douglas

    1993-01-01

    In many flowfield computations, accuracy of the turbulence model employed is frequently a limiting factor in the overall accuracy of the computation. This is particularly true for complex flowfields such as those around full aircraft configurations. Free shear layers such as wakes, impinging jets (in V/STOL applications), and mixing layers over cavities are often part of these flowfields. Although flowfields have been computed for full aircraft, the memory and CPU requirements for these computations are often excessive. Additional computer power is required for multidisciplinary computations such as coupled fluid dynamics and conduction heat transfer analysis. Massively parallel computers show promise in alleviating this situation, and the purpose of this effort was to adapt and optimize CFD codes to these new machines. The objective of this research effort was to compute the flowfield and heat transfer for a two-dimensional jet impinging normally on a cool plate. The results of this research effort were summarized in an AIAA paper titled 'Parallel Implementation of the k-epsilon Turbulence Model'. Appendix A contains the full paper.

  15. Support Expressed in Congress for U.S. High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2004-06-01

    Advocates for a stronger U.S. position in high-performance computing, which could help with a number of grand challenges in the Earth sciences and other disciplines, hope that legislation recently introduced in the House of Representatives will help to revitalize U.S. efforts. The High-Performance Computing Revitalization Act of 2004 would amend the earlier High-Performance Computing Act of 1991 (Public Law 102-194), which is partially credited with helping to strengthen U.S. capabilities in this area. The bill has the support of the Bush administration.

  16. Emerging Neuromorphic Computing Architectures & Enabling Hardware for Cognitive Information Processing Applications

    DTIC Science & Technology

    2010-06-01

    Report documentation excerpt (dates covered: April 2009 to January 2010). The highly cross-disciplinary emerging field of neuromorphic computing architectures for cognitive information processing applications...belief systems, software, computer engineering, etc. In our effort to develop cognitive systems atop a neuromorphic computing architecture, we explored

  17. Note: The full function test explosive generator.

    PubMed

    Reisman, D B; Javedani, J B; Griffith, L V; Ellsworth, G F; Kuklo, R M; Goerz, D A; White, A D; Tallerico, L J; Gidding, D A; Murphy, M J; Chase, J B

    2010-03-01

    We have conducted three tests of a new pulsed power device called the full function test. These tests represented the culmination of an effort to establish a high energy pulsed power capability based on high explosive pulsed power (HEPP) technology. This involved an extensive computational modeling, engineering, fabrication, and fielding effort. The experiments were highly successful and a new U.S. record for magnetic energy was obtained.

  18. Distributed Accounting on the Grid

    NASA Technical Reports Server (NTRS)

    Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.

    2001-01-01

    By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high-performance computational and data storage resources into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.

  19. Computing Cluster for Large Scale Turbulence Simulations and Applications in Computational Aeroacoustics

    NASA Astrophysics Data System (ADS)

    Lele, Sanjiva K.

    2002-08-01

    Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48 processor high performance computing cluster. This report details the hardware which was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section after this summary describes the computer cluster, its setup, and some cluster performance benchmark results. The second section explains ongoing research efforts which have benefited from the cluster hardware, and presents highlights of those efforts since installation of the cluster.

  20. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C. C.

    The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities for developing computer-based design and analysis tools. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components; photonics and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.

  1. Tug-of-war lacunarity—A novel approach for estimating lacunarity

    NASA Astrophysics Data System (ADS)

    Reiss, Martin A.; Lemmerer, Birgit; Hanslmeier, Arnold; Ahammer, Helmut

    2016-11-01

    Modern instrumentation provides us with massive repositories of digital images that will likely only increase in the future. Therefore, it has become increasingly important to automate the analysis of digital images, e.g., with methods from pattern recognition. These methods aim to characterize the visual appearance of captured textures with quantitative measures. As such, lacunarity is a useful multi-scale measure of a texture's heterogeneity, but it demands high computational effort. Here we investigate a novel approach based on the tug-of-war algorithm, which estimates lacunarity in a single pass over the image. We computed lacunarity for theoretical and real-world sample images and found that the investigated approach is able to estimate lacunarity with low uncertainty. We conclude that the proposed method combines low computational effort with high accuracy, and that its application may have utility in the analysis of high-resolution images.
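
    The record above does not reproduce the estimator itself; purely as an illustration (not the authors' code), a minimal sketch of the underlying idea, assuming the standard gliding-box definition of lacunarity and classic tug-of-war (AMS) sign counters for the second moment, could look like this:

    ```python
    import numpy as np

    def tug_of_war_lacunarity(box_masses, num_counters=64, seed=0):
        """Single-pass lacunarity estimate for a stream of gliding-box masses.

        At one box size, lacunarity is Lambda = N * sum(m_i^2) / (sum(m_i))^2.
        The second moment sum(m_i^2) is approximated with tug-of-war counters
        c_j = sum_i s_j(i) * m_i, where s_j(i) are random signs in {-1, +1},
        so the mean of c_j^2 is an unbiased estimate of sum(m_i^2).
        (A production version would use hash-based, pairwise-independent signs.)
        """
        rng = np.random.default_rng(seed)
        counters = np.zeros(num_counters)
        total, n = 0.0, 0
        for m in box_masses:                       # single pass over the data
            signs = rng.choice((-1.0, 1.0), size=num_counters)
            counters += signs * m                  # tug-of-war update
            total += m
            n += 1
        second_moment = np.mean(counters ** 2)     # estimate of sum(m_i^2)
        return n * second_moment / total ** 2

    # Toy usage: gliding-box masses of a synthetic binary image at box size 8.
    rng = np.random.default_rng(1)
    image = (rng.random((128, 128)) < 0.2).astype(float)
    size = 8
    masses = [image[i:i + size, j:j + size].sum()
              for i in range(0, 128 - size + 1)
              for j in range(0, 128 - size + 1)]
    print("estimated lacunarity:", tug_of_war_lacunarity(masses))
    ```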

  2. High-Performance Computing for the Electromagnetic Modeling and Simulation of Interconnects

    NASA Technical Reports Server (NTRS)

    Schutt-Aine, Jose E.

    1996-01-01

    The electromagnetic modeling of packages and interconnects plays a very important role in the design of high-speed digital circuits, and is most efficiently performed by using computer-aided design algorithms. In recent years, packaging has become a critical area in the design of high-speed communication systems and fast computers, and the importance of the software support for their development has increased accordingly. Throughout this project, our efforts have focused on the development of modeling and simulation techniques and algorithms that permit the fast computation of the electrical parameters of interconnects and the efficient simulation of their electrical performance.

  3. Computational nuclear quantum many-body problem: The UNEDF project

    NASA Astrophysics Data System (ADS)

    Bogner, S.; Bulgac, A.; Carlson, J.; Engel, J.; Fann, G.; Furnstahl, R. J.; Gandolfi, S.; Hagen, G.; Horoi, M.; Johnson, C.; Kortelainen, M.; Lusk, E.; Maris, P.; Nam, H.; Navratil, P.; Nazarewicz, W.; Ng, E.; Nobre, G. P. A.; Ormand, E.; Papenbrock, T.; Pei, J.; Pieper, S. C.; Quaglioni, S.; Roche, K. J.; Sarich, J.; Schunck, N.; Sosonkina, M.; Terasaki, J.; Thompson, I.; Vary, J. P.; Wild, S. M.

    2013-10-01

    The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.

  4. NASA HPCC Technology for Aerospace Analysis and Design

    NASA Technical Reports Server (NTRS)

    Schulbach, Catherine H.

    1999-01-01

    The Computational Aerosciences (CAS) Project is part of NASA's High Performance Computing and Communications Program. Its primary goal is to accelerate the availability of high-performance computing technology to the US aerospace community, thus providing that community with key tools necessary to reduce design cycle times and increase fidelity in order to improve the safety, efficiency, and capability of future aerospace vehicles. A complementary goal is to hasten the emergence of a viable commercial market within the aerospace community, to the benefit of the domestic computer hardware and software industry. The CAS Project selects representative aerospace problems (especially design) and uses them to focus efforts on advancing aerospace algorithms and applications, systems software, and computing machinery to demonstrate vast improvements in system performance and capability over the life of the program. Recent demonstrations have served to assess the benefits of possible performance improvements while reducing the risk of adopting high-performance computing technology. This talk will discuss past accomplishments in providing technology to the aerospace community, present efforts, and future goals. For example, the times to do full combustor and compressor simulations (of aircraft engines) have been reduced by factors of 320:1 and 400:1, respectively. While this has enabled new capabilities in engine simulation, the goal of an overnight, dynamic, multi-disciplinary, 3-dimensional simulation of an aircraft engine is still years away and will require new generations of high-end technology.

  5. Applied Information Systems Research Program (AISRP). Workshop 2: Meeting Proceedings

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The Earth and space science participants were able to see where the current research can be applied in their disciplines and computer science participants could see potential areas for future application of computer and information systems research. The Earth and Space Science research proposals for the High Performance Computing and Communications (HPCC) program were under evaluation. Therefore, this effort was not discussed at the AISRP Workshop. OSSA's other high priority area in computer science is scientific visualization, with the entire second day of the workshop devoted to it.

  6. Reducing the Time and Cost of Testing Engines

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Producing a new aircraft engine currently costs approximately $1 billion, with 3 years of development time for a commercial engine and 10 years for a military engine. The high development time and cost make it extremely difficult to transition advanced technologies for cleaner, quieter, and more efficient new engines. To reduce this time and cost, NASA created a vision for the future where designers would use high-fidelity computer simulations early in the design process in order to resolve critical design issues before building the expensive engine hardware. To accomplish this vision, NASA's Glenn Research Center initiated a collaborative effort with the aerospace industry and academia to develop its Numerical Propulsion System Simulation (NPSS), an advanced engineering environment for the analysis and design of aerospace propulsion systems and components. Partners estimate that using NPSS has the potential to dramatically reduce the time, effort, and expense necessary to design and test jet engines by generating sophisticated computer simulations of an aerospace object or system. These simulations will permit an engineer to test various design options without having to conduct costly and time-consuming real-life tests. By accelerating and streamlining the engine system design analysis and test phases, NPSS facilitates bringing the final product to market faster. NASA's NPSS Version (V)1.X effort was a task within the Agency's Computational Aerospace Sciences project of the High Performance Computing and Communication program, which had a mission to accelerate the availability of high-performance computing hardware and software to the U.S. aerospace community for its use in design processes. The technology brings value back to NASA by improving methods of analyzing and testing space transportation components.

  7. Micro-video display with ocular tracking and interactive voice control

    NASA Technical Reports Server (NTRS)

    Miller, James E.

    1993-01-01

    In certain space-restricted environments, many of the benefits resulting from computer technology have been foregone because of the size, weight, inconvenience, and lack of mobility associated with existing computer interface devices. Accordingly, an effort to develop a highly miniaturized and 'wearable' computer display and control interface device, referred to as the Sensory Integrated Data Interface (SIDI), is underway. The system incorporates a micro-video display that provides data display and ocular tracking on a lightweight headset. Software commands are implemented by conjunctive eye movement and voice commands of the operator. In this initial prototyping effort, various 'off-the-shelf' components have been integrated with a desktop computer and a customized menu-tree software application to demonstrate feasibility and conceptual capabilities. When fully developed as a customized system, the interface device will allow mobile, 'hands-free' operation of portable computer equipment. It will thus allow integration of information technology applications into those restrictive environments, both military and industrial, that have not yet taken advantage of the computer revolution. This effort is Phase 1 of Small Business Innovative Research (SBIR) Topic number N90-331 sponsored by the Naval Undersea Warfare Center Division, Newport. The prime contractor is Foster-Miller, Inc. of Waltham, MA.

  8. Langley's Computational Efforts in Sonic-Boom Softening of the Boeing HSCT

    NASA Technical Reports Server (NTRS)

    Fouladi, Kamran

    1999-01-01

    NASA Langley's computational efforts in the sonic-boom softening of the Boeing high-speed civil transport are discussed in this paper. In these efforts, an optimization process using a higher order Euler method for analysis was employed to reduce the sonic boom of a baseline configuration through fuselage camber and wing dihedral modifications. Fuselage modifications did not provide any improvements, but the dihedral modifications were shown to be an important tool for the softening process. The study also included aerodynamic and sonic-boom analyses of the baseline and some of the proposed "softened" configurations. Comparisons of two Euler methodologies and two propagation programs for sonic-boom predictions are also discussed in the present paper.

  9. A Case Study of the Introduction of RISC-based Computing and a Telecommunications Link to a Suburban High School.

    ERIC Educational Resources Information Center

    Hakerem, Gita; And Others

    This study reports the efforts of the Water and Molecular Networks Project (WAMNet), a program in which high school chemistry students use computer simulations developed at Boston University (Massachusetts) to model the three-dimensional structure of molecules and the hydrogen bond network that holds water molecules together. This case study…

  10. Research in the design of high-performance reconfigurable systems

    NASA Technical Reports Server (NTRS)

    Mcewan, S. D.; Spry, A. J.

    1985-01-01

    Computer aided design and computer aided manufacturing have the potential for greatly reducing the cost and lead time in the development of VLSI components. This potential paves the way for the design and fabrication of a wide variety of economically feasible high level functional units. It was observed that current computer systems have only a limited capacity to absorb new VLSI component types other than memory, microprocessors, and a relatively small number of other parts. The first purpose is to explore a system design which is capable of effectively incorporating a considerable number of VLSI part types and will both increase the speed of computation and reduce the attendant programming effort. A second purpose is to explore design techniques for VLSI parts which when incorporated by such a system will result in speeds and costs which are optimal. The proposed work may lay the groundwork for future efforts in the extensive simulation and measurements of the system's cost effectiveness and lead to prototype development.

  11. Pseudo-point transport technique: a new method for solving the Boltzmann transport equation in media with highly fluctuating cross sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakhai, B.

    A new method for solving radiation transport problems is presented. The heart of the technique is a new cross section processing procedure for the calculation of group-to-point and point-to-group cross section sets. The method is ideally suited for problems which involve media with highly fluctuating cross sections, where the results of the traditional multigroup calculations are beclouded by the group averaging procedures employed. Extensive computational efforts, which would be required to evaluate double integrals in the multigroup treatment numerically, prohibit iteration to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational efforts of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or the point-to-group transfer matrices, and can be coupled with the existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from the conventional energy-group treatments. Due to the speed of this code, several iterations could be performed (in affordable computing efforts) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux vs energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.

  12. A Case Study on Collective Cognition and Operation in Team-Based Computer Game Design by Middle-School Children

    ERIC Educational Resources Information Center

    Ke, Fengfeng; Im, Tami

    2014-01-01

    This case study examined team-based computer-game design efforts by children with diverse abilities to explore the nature of their collective design actions and cognitive processes. Ten teams of middle-school children, with a high percentage of minority students, participated in a 6-week, computer-assisted math-game-design program. Essential…

  13. Using Computer Games to Train Information Warfare Teams

    DTIC Science & Technology

    2004-01-01

    Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC) 2004, Paper No. 1729. ...responses they will experience on real missions is crucial. 3D computer games have proved themselves to be highly effective in engaging players motivationally and emotionally. This effort, therefore, uses gaming technology to provide realistic simulations. These games are augmented with

  14. Writing. A Research-Based Writing Program for Students with High Access to Computers. ACOT Report #2.

    ERIC Educational Resources Information Center

    Hiebert, Elfrieda H.; And Others

    This report summarizes the curriculum development and research effort that took place at the Cupertino Apple Classrooms of Tomorrow (ACOT) site from January through June 1987. Based on the premise that computers make revising and editing much easier, the four major objectives emphasized by the computer-intensive writing program are fluency,…

  15. An Integrated In Vitro and Computational Approach to Define the Exposure-Dose-Toxicity Relationships In High-Throughput Screens

    EPA Science Inventory

    Research efforts by the US Environmental Protection Agency have set out to develop alternative testing programs to prioritize limited testing resources toward chemicals that likely represent the greatest hazard to human health and the environment. Efforts such as EPA’s ToxCast r...

  16. The Budget of the United States Government. Department of Defense Extract for Fiscal Year 1985

    DTIC Science & Technology

    1984-02-01

    with real GNP rising over 6% and industrial production by 16%. Unemployment, though still unacceptably high, has declined by a record 2½...the administration will focus its legislative effort on three of those proposals, in modified form: cost-of-living adjustment (COLA) reform, a high-5...computer matching, adjusted payment schedules, contractor and grantee performance incentives, and a streamlined field structure. All of these efforts

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McInerney, J.D.; Micikas, L.B.

    Efforts are described to prepare educational materials, including computer-based as well as conventional teaching materials, for training interested high school and elementary students in aspects of the Human Genome Project.

  18. Verification and Validation of Monte Carlo N-Particle 6 for Computing Gamma Protection Factors

    DTIC Science & Technology

    2015-03-26

    methods for evaluating RPFs, which it used for the subsequent 30 years. These approaches included computational modeling, radioisotopes, and a high... (Table-of-contents excerpt: Past Methods of Experimental Evaluation; Modeling Efforts; Other Considerations; Monte Carlo Methods.)

  19. Computational AeroAcoustics for Fan Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Ed; Hixon, Ray; Dyson, Rodger; Huff, Dennis (Technical Monitor)

    2002-01-01

    An overview of the current state-of-the-art in computational aeroacoustics as applied to fan noise prediction at NASA Glenn is presented. Results from recent modeling efforts using three-dimensional inviscid formulations in both frequency and time domains are summarized. In particular, the application of a frequency domain method, called LINFLUX, to the computation of rotor-stator interaction tone noise is reviewed and the influence of the background inviscid flow on the acoustic results is analyzed. It has been shown that the noise levels are very sensitive to the gradients of the mean flow near the surface and that the correct computation of these gradients for highly loaded airfoils is especially problematic using an inviscid formulation. The ongoing development of a finite difference time marching code that is based on a sixth order compact scheme is also reviewed. Preliminary results from the nonlinear computation of a gust-airfoil interaction model problem demonstrate the fidelity and accuracy of this approach. Spatial and temporal features of the code as well as its multi-block nature are discussed. Finally, the latest results from an ongoing effort in the area of arbitrarily high order methods are reviewed, technical challenges associated with implementing correct high order boundary conditions are discussed, and possible strategies for addressing these challenges are outlined.
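
    As context for the "sixth order compact scheme" mentioned above, the sketch below illustrates the classic tridiagonal compact first-derivative stencil on a periodic grid with the standard interior coefficients alpha = 1/3, a = 14/9, b = 1/9; it is an illustration only, not the NASA code described in the record:

    ```python
    import numpy as np

    def compact_first_derivative(f, h):
        """Sixth-order compact first derivative on a uniform periodic grid.

        Solves  alpha*f'_{i-1} + f'_i + alpha*f'_{i+1}
                = a*(f_{i+1} - f_{i-1})/(2h) + b*(f_{i+2} - f_{i-2})/(4h)
        with alpha = 1/3, a = 14/9, b = 1/9 (interior scheme only).
        Periodicity keeps the sketch short; a real code needs boundary closures.
        """
        n = len(f)
        alpha, a, b = 1.0 / 3.0, 14.0 / 9.0, 1.0 / 9.0

        # Right-hand side built from periodic shifts of f.
        rhs = (a * (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * h)
               + b * (np.roll(f, -2) - np.roll(f, 2)) / (4.0 * h))

        # Cyclic tridiagonal system; a dense solve is fine for a small demo.
        A = np.eye(n) + alpha * (np.eye(n, k=1) + np.eye(n, k=-1))
        A[0, -1] = alpha   # periodic wrap-around entries
        A[-1, 0] = alpha
        return np.linalg.solve(A, rhs)

    # Usage: the derivative of sin(x) on a periodic grid should be close to cos(x).
    n = 64
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    h = x[1] - x[0]
    df = compact_first_derivative(np.sin(x), h)
    print("max error vs cos(x):", np.abs(df - np.cos(x)).max())
    ```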

  20. 1999 NCCS Highlights

    NASA Technical Reports Server (NTRS)

    Bennett, Jerome (Technical Monitor)

    2002-01-01

    The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.

  1. High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME

    NASA Astrophysics Data System (ADS)

    Otis, Richard A.; Liu, Zi-Kui

    2017-05-01

    One foundational component of integrated computational materials engineering (ICME) and the Materials Genome Initiative is computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, we present our recent efforts to develop new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput first-principles calculations and the CALPHAD method, along with their potential propagation to downstream ICME modeling and simulation.
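
    As a toy illustration of the uncertainty-propagation idea in a CALPHAD-like setting (not the authors' tools), one can sample an uncertain regular-solution interaction parameter and push the samples through a derived quantity; the parameter values below are hypothetical:

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol*K)

    def critical_temperature(L):
        """Miscibility-gap critical temperature of a regular solution.

        For G_mix = R*T*(x*ln x + (1-x)*ln(1-x)) + L*x*(1-x), the top of the
        miscibility gap sits at x = 0.5 with T_c = L / (2R).
        """
        return L / (2.0 * R)

    # Hypothetical uncertainty for the interaction parameter L (J/mol); the mean
    # and standard deviation stand in for a real assessment's posterior.
    rng = np.random.default_rng(42)
    L_samples = rng.normal(loc=20_000.0, scale=1_500.0, size=10_000)

    Tc_samples = critical_temperature(L_samples)
    print(f"T_c = {Tc_samples.mean():.0f} K +/- {Tc_samples.std():.0f} K")
    ```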

  2. Investigation of Grid Adaptation to Reduce Computational Efforts for a 2-D Hydrogen-Fueled Dual-Mode Scramjet

    NASA Astrophysics Data System (ADS)

    Foo, Kam Keong

    A two-dimensional dual-mode scramjet flowpath is developed and evaluated using the ANSYS Fluent density-based flow solver with various computational grids. Results are obtained for fuel-off, fuel-on non-reacting, and fuel-on reacting cases at different equivalence ratios. A one-step global chemical kinetics hydrogen-air model is used in conjunction with the eddy-dissipation model. Coarse, medium and fine computational grids are used to evaluate grid sensitivity and to investigate a lack of grid independence. Different grid adaptation strategies are performed on the coarse grid in an attempt to emulate the solutions obtained from the finer grids. The goal of this study is to investigate the feasibility of using various mesh adaptation criteria to significantly decrease computational efforts for high-speed reacting flows.

  3. Brain transcriptome atlases: a computational perspective.

    PubMed

    Mahfouz, Ahmed; Huisman, Sjoerd M H; Lelieveldt, Boudewijn P F; Reinders, Marcel J T

    2017-05-01

    The immense complexity of the mammalian brain is largely reflected in the underlying molecular signatures of its billions of cells. Brain transcriptome atlases provide valuable insights into gene expression patterns across different brain areas throughout the course of development. Such atlases allow researchers to probe the molecular mechanisms which define neuronal identities, neuroanatomy, and patterns of connectivity. Despite the immense effort put into generating such atlases, to answer fundamental questions in neuroscience, an even greater effort is needed to develop methods to probe the resulting high-dimensional multivariate data. We provide a comprehensive overview of the various computational methods used to analyze brain transcriptome atlases.

  4. 5 CFR 630.310 - Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...

  5. 5 CFR 630.310 - Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...

  6. 5 CFR 630.310 - Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...

  7. 5 CFR 630.310 - Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...

  8. 5 CFR 630.310 - Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...

  9. Robotic tape library system level testing at NSA: Present and planned

    NASA Technical Reports Server (NTRS)

    Shields, Michael F.

    1994-01-01

    In the presence of declining Defense budgets, increased pressure has been placed on the DOD to utilize Commercial Off the Shelf (COTS) solutions to incrementally solve a wide variety of our computer processing requirements. With the rapid growth in processing power, the significant expansion of high-performance networking, and the increased complexity of application data sets, the requirement for high-performance, large-capacity, reliable, secure, and most of all affordable robotic tape storage libraries has greatly increased. Additionally, the migration to a heterogeneous, distributed computing environment has further complicated the problem. With today's open-system compute servers approaching yesterday's supercomputer capabilities, the need for affordable, reliable, and secure Mass Storage Systems (MSS) has taken on ever-increasing importance to our processing center's ability to satisfy operational mission requirements. To that end, NSA has established an in-house capability to acquire, test, and evaluate COTS products. Its goal is to qualify a set of COTS MSS libraries, thereby achieving a modicum of standardization for robotic tape libraries which can satisfy our low-, medium-, and high-performance file and volume serving requirements. In addition, NSA has established relations with other Government Agencies to complement this in-house effort and to maximize our research, testing, and evaluation work. While the preponderance of the effort is focused at the high end of the storage ladder, considerable effort will be extended this year and next to server-class and mid-range storage systems.

  10. Automated Boundary Conditions for Wind Tunnel Simulations

    NASA Technical Reports Server (NTRS)

    Carlson, Jan-Renee

    2018-01-01

    Computational fluid dynamics (CFD) simulations of models tested in wind tunnels require a high level of fidelity and accuracy, particularly for the purposes of CFD validation efforts. Considerable effort is required to ensure both the proper characterization of the physical geometry of the wind tunnel and the recreation of the correct flow conditions inside it. The typical trial-and-error effort used to determine the boundary condition values for a particular tunnel configuration is time- and computer-resource intensive. This paper describes a method for calculating and updating the back pressure boundary condition in wind tunnel simulations by using a proportional-integral-derivative (PID) controller. The controller methodology and equations are discussed, and simulations using the controller to set a tunnel Mach number in the NASA Langley 14- by 22-Foot Subsonic Tunnel are demonstrated.
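
    The paper's own controller equations are not reproduced in the record; a generic discrete PID update of the kind described, driving a back-pressure value from the error between a computed and a target Mach number, could be sketched as follows (the gains and the simple Mach response are hypothetical stand-ins for the CFD solver):

    ```python
    class PIDController:
        """Classic discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error):
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative


    def tunnel_mach(back_pressure_pa):
        """Hypothetical stand-in for the flow solver: raising the back pressure
        lowers the test-section Mach number (the real response comes from CFD)."""
        return 50662.5 / back_pressure_pa


    target_mach = 0.20
    base_pressure = 101325.0                     # Pa, initial back-pressure guess
    pid = PIDController(kp=1.0e5, ki=2.0e5, kd=0.0, dt=1.0)

    back_pressure = base_pressure
    for _ in range(60):                          # one update per solver interval
        error = tunnel_mach(back_pressure) - target_mach   # Mach too high -> raise pressure
        back_pressure = base_pressure + pid.update(error)

    print(f"Mach {tunnel_mach(back_pressure):.4f} at back pressure {back_pressure:,.0f} Pa")
    ```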

  11. Computer architecture evaluation for structural dynamics computations: Project summary

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.

  12. Advances in computational design and analysis of airbreathing propulsion systems

    NASA Technical Reports Server (NTRS)

    Klineberg, John M.

    1989-01-01

    The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview is provided of several NASA Lewis research efforts that are contributing toward the long-range goal of a numerical test cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.

  13. ProjectQ Software Framework

    NASA Astrophysics Data System (ADS)

    Steiger, Damian S.; Haener, Thomas; Troyer, Matthias

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.
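
    A minimal usage example in the style of the framework's basic tutorial (illustrative only; consult the ProjectQ documentation for the authoritative API) shows how a short Python-embedded program is compiled and executed on the default simulator backend:

    ```python
    from projectq import MainEngine
    from projectq.ops import H, CNOT, Measure, All

    # A default MainEngine compiles the high-level program down to the gate set
    # of the chosen backend (here, the built-in simulator).
    eng = MainEngine()
    qubits = eng.allocate_qureg(2)

    H | qubits[0]                  # put the first qubit into superposition
    CNOT | (qubits[0], qubits[1])  # entangle the pair into a Bell state
    All(Measure) | qubits          # measure both qubits

    eng.flush()                    # send the circuit through the compiler/backend
    print("measured:", [int(q) for q in qubits])
    ```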

  14. Parallel aeroelastic computations for wing and wing-body configurations

    NASA Technical Reports Server (NTRS)

    Byun, Chansup

    1994-01-01

    The objective of this research is to develop computationally efficient methods for solving fluid-structural interaction problems by directly coupling finite difference Euler/Navier-Stokes equations for fluids and finite element dynamics equations for structures on parallel computers. This capability will significantly impact many aerospace projects of national importance such as Advanced Subsonic Civil Transport (ASCT), where the structural stability margin becomes very critical at the transonic region. This research effort will have direct impact on the High Performance Computing and Communication (HPCC) Program of NASA in the area of parallel computing.

  15. Using a cloud to replenish parched groundwater modeling efforts.

    PubMed

    Hunt, Randall J; Luchette, Joseph; Schreuder, Willem A; Rumbaugh, James O; Doherty, John; Tonkin, Matthew J; Rumbaugh, Douglas B

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate "virtual" computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  16. Using a cloud to replenish parched groundwater modeling efforts

    USGS Publications Warehouse

    Hunt, Randall J.; Luchette, Joseph; Schreuder, Willem A.; Rumbaugh, James O.; Doherty, John; Tonkin, Matthew J.; Rumbaugh, Douglas B.

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate “virtual” computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  17. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing

    PubMed Central

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

    The comprehensive effort toward low-cost sequencing in the past few years has led to the growth of complete genome databases. In parallel with this effort, and in response to a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is much needed. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce are used in this pipeline as parallel and distributed computing tools on commodity hardware. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of detected unique and common DNA signatures of the target database offers opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678

  18. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing.

    PubMed

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

    The comprehensive effort toward low-cost sequencing in the past few years has led to the growth of complete genome databases. In parallel with this effort, and in response to a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is much needed. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce are used in this pipeline as parallel and distributed computing tools on commodity hardware. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of detected unique and common DNA signatures of the target database offers opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis.
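
    HTSFinder's own implementation is not shown in these records; the sketch below illustrates only the generic map/reduce pattern behind k-mer counting (emit every k-mer of each sequence, then sum the counts per k-mer), with a toy in-memory "database" standing in for a Hadoop job:

    ```python
    from collections import Counter
    from itertools import chain

    def map_kmers(sequence, k=12):
        """Mapper: emit (k-mer, 1) pairs for one genome sequence."""
        seq = sequence.upper()
        return [(seq[i:i + k], 1) for i in range(len(seq) - k + 1)]

    def reduce_counts(pairs):
        """Reducer: sum the counts for each k-mer key."""
        counts = Counter()
        for kmer, n in pairs:
            counts[kmer] += n
        return counts

    # Toy "target database" of sequences; a real run would stream whole genomes
    # through Hadoop/MapReduce instead of a single in-memory list.
    target_db = ["ACGTACGTGGTTAACC", "TTGGACGTACGTGGTT"]
    counts = reduce_counts(chain.from_iterable(map_kmers(s, k=8) for s in target_db))

    # Candidate signatures: here, k-mers seen only once stand in for k-mers that
    # would be unique to the target and absent from a non-target database.
    signatures = [kmer for kmer, n in counts.items() if n == 1]
    print(sorted(signatures)[:5])
    ```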

  19. Defining Computational Thinking for Mathematics and Science Classrooms

    NASA Astrophysics Data System (ADS)

    Weintrop, David; Beheshti, Elham; Horn, Michael; Orton, Kai; Jona, Kemi; Trouille, Laura; Wilensky, Uri

    2016-02-01

    Science and mathematics are becoming computational endeavors. This fact is reflected in the recently released Next Generation Science Standards and the decision to include "computational thinking" as a core scientific practice. With this addition, and the increased presence of computation in mathematics and scientific contexts, a new urgency has come to the challenge of defining computational thinking and providing a theoretical grounding for what form it should take in school science and mathematics classrooms. This paper presents a response to this challenge by proposing a definition of computational thinking for mathematics and science in the form of a taxonomy consisting of four main categories: data practices, modeling and simulation practices, computational problem solving practices, and systems thinking practices. In formulating this taxonomy, we draw on the existing computational thinking literature, interviews with mathematicians and scientists, and exemplary computational thinking instructional materials. This work was undertaken as part of a larger effort to infuse computational thinking into high school science and mathematics curricular materials. In this paper, we argue for the approach of embedding computational thinking in mathematics and science contexts, present the taxonomy, and discuss how we envision the taxonomy being used to bring current educational efforts in line with the increasingly computational nature of modern science and mathematics.

  20. Neurocomputational mechanisms underlying subjective valuation of effort costs

    PubMed Central

    Giehl, Kathrin; Sillence, Annie

    2017-01-01

    In everyday life, we have to decide whether it is worth exerting effort to obtain rewards. Effort can be experienced in different domains, with some tasks requiring significant cognitive demand and others being more physically effortful. The motivation to exert effort for reward is highly subjective and varies considerably across the different domains of behaviour. However, very little is known about the computational or neural basis of how different effort costs are subjectively weighed against rewards. Is there a common, domain-general system of brain areas that evaluates all costs and benefits? Here, we used computational modelling and functional magnetic resonance imaging (fMRI) to examine the mechanisms underlying value processing in both the cognitive and physical domains. Participants were trained on two novel tasks that parametrically varied either cognitive or physical effort. During fMRI, participants indicated their preferences between a fixed low-effort/low-reward option and a variable higher-effort/higher-reward offer for each effort domain. Critically, reward devaluation by both cognitive and physical effort was subserved by a common network of areas, including the dorsomedial and dorsolateral prefrontal cortex, the intraparietal sulcus, and the anterior insula. Activity within these domain-general areas also covaried negatively with reward and positively with effort, suggesting an integration of these parameters within these areas. Additionally, the amygdala appeared to play a unique, domain-specific role in processing the value of rewards associated with cognitive effort. These results are the first to reveal the neurocomputational mechanisms underlying subjective cost–benefit valuation across different domains of effort and provide insight into the multidimensional nature of motivation. PMID:28234892
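
    A common way to formalize the cost-benefit valuation studied above is to let each offer's subjective value be the reward discounted by an effort cost, and to pass the value difference between the offer and the baseline through a softmax choice rule. The sketch below illustrates this with a parabolic effort-cost term; the specific discount form, parameter values, and function names are assumptions for illustration, not the exact model fitted in the study.

        import numpy as np

        def subjective_value(reward, effort, k):
            """Reward discounted by a parabolic effort cost (one common modelling choice)."""
            return reward - k * effort ** 2

        def p_choose_offer(offer_r, offer_e, base_r, base_e, k, beta):
            """Softmax probability of taking the higher-effort/higher-reward offer."""
            dv = subjective_value(offer_r, offer_e, k) - subjective_value(base_r, base_e, k)
            return 1.0 / (1.0 + np.exp(-beta * dv))

        # Example: a 10-credit offer at 80% of maximum effort versus a small, easy baseline.
        print(p_choose_offer(offer_r=10.0, offer_e=0.8,
                             base_r=1.0, base_e=0.2, k=6.0, beta=1.5))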

  1. The Helicopter Antenna Radiation Prediction Code (HARP)

    NASA Technical Reports Server (NTRS)

    Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.

    1990-01-01

    The first nine months' effort in the development of a user-oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user-friendly interface, employing modern computer graphics, to aid the user in describing the helicopter geometry, selecting the method of computation, constructing the desired high- or low-frequency model, and displaying the results.

  2. High resolution frequency analysis techniques with application to the redshift experiment

    NASA Technical Reports Server (NTRS)

    Decher, R.; Teuber, D.

    1975-01-01

    High resolution frequency analysis methods, with application to the gravitational probe redshift experiment, are discussed. For this experiment a resolution of .00001 Hz is required to measure a slowly varying, low frequency signal of approximately 1 Hz. Major building blocks include fast Fourier transform, discrete Fourier transform, Lagrange interpolation, golden section search, and adaptive matched filter technique. Accuracy, resolution, and computer effort of these methods are investigated, including test runs on an IBM 360/65 computer.
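
    The combination of techniques listed above amounts to locating a spectral peak far more finely than the raw FFT bin spacing allows. A minimal sketch of that idea, assuming a coarse FFT peak refined by a golden-section search over the single-frequency transform, is shown below; the sample rate, record length, and tolerance are illustrative values, not those of the redshift experiment.

        import numpy as np

        def dtft_power(x, fs, f):
            """Signal power at a single trial frequency f (discrete-time Fourier transform)."""
            n = np.arange(len(x))
            return np.abs(np.sum(x * np.exp(-2j * np.pi * f * n / fs))) ** 2

        def refine_peak(x, fs, f_lo, f_hi, tol=1e-5):
            """Golden-section search for the frequency that maximizes the spectral power."""
            g = (np.sqrt(5.0) - 1.0) / 2.0
            a, b = f_lo, f_hi
            while b - a > tol:
                c, d = b - g * (b - a), a + g * (b - a)
                if dtft_power(x, fs, c) < dtft_power(x, fs, d):
                    a = c
                else:
                    b = d
            return 0.5 * (a + b)

        fs, T, f_true = 10.0, 2000.0, 1.000037   # sample rate (Hz), record length (s), true frequency (Hz)
        t = np.arange(0.0, T, 1.0 / fs)
        x = np.sin(2.0 * np.pi * f_true * t)
        coarse = np.fft.rfftfreq(len(x), 1.0 / fs)[np.argmax(np.abs(np.fft.rfft(x)))]
        print(refine_peak(x, fs, coarse - 1.0 / T, coarse + 1.0 / T))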

  3. SUPREM-DSMC: A New Scalable, Parallel, Reacting, Multidimensional Direct Simulation Monte Carlo Flow Code

    NASA Technical Reports Server (NTRS)

    Campbell, David; Wysong, Ingrid; Kaplan, Carolyn; Mott, David; Wadsworth, Dean; VanGilder, Douglas

    2000-01-01

    An AFRL/NRL team has recently been selected to develop a scalable, parallel, reacting, multidimensional (SUPREM) Direct Simulation Monte Carlo (DSMC) code for the DoD user community under the High Performance Computing Modernization Office (HPCMO) Common High Performance Computing Software Support Initiative (CHSSI). This paper will introduce the JANNAF Exhaust Plume community to this three-year development effort and present the overall goals, schedule, and current status of this new code.

  4. Numerical Simulation of High-Speed Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Jaberi, F. A.; Colucci, P. J.; James, S.; Givi, P.

    1996-01-01

    The purpose of this research is to continue our efforts in advancing the state of knowledge in large eddy simulation (LES) methods for computational analysis of high-speed reacting turbulent flows. We have just completed the first year of Phase 3 of this research.

  5. HPC Access Using KVM over IP

    DTIC Science & Technology

    2007-06-08

    Lightwave VDE/200 KVM-over-Fiber (Keyboard, Video and Mouse) devices installed throughout the TARDEC campus. Implementation of this system required...development effort through the pursuit of an Army-funded Phase-II Small Business Innovative Research (SBIR) effort with IP Video Systems (formerly known as...visualization capabilities of a DoD High-Performance Computing facility, many advanced features are necessary. TARDEC-HPC’s SBIR with IP Video Systems

  6. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrate a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model has been developed on a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.

  7. Computational analysis of high resolution unsteady airloads for rotor aeroacoustics

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Lam, C.-M. Gordon; Wachspress, Daniel A.; Bliss, Donald B.

    1994-01-01

    The study of helicopter aerodynamic loading for acoustics applications requires the application of efficient yet accurate simulations of the velocity field induced by the rotor's vortex wake. This report summarizes work to date on the development of such an analysis, which builds on the Constant Vorticity Contour (CVC) free wake model, previously implemented for the study of vibratory loading in the RotorCRAFT computer code. The present effort has focused on implementation of an airload reconstruction approach that computes high resolution airload solutions of rotor/rotor-wake interactions required for acoustics computations. Supplementary efforts on the development of improved vortex core modeling, unsteady aerodynamic effects, higher spatial resolution of rotor loading, and fast vortex wake implementations have substantially enhanced the capabilities of the resulting software, denoted RotorCRAFT/AA (AeroAcoustics). Results of validation calculations using recently acquired model rotor data show that by employing airload reconstruction it is possible to apply the CVC wake analysis with temporal and spatial resolution suitable for acoustics applications while reducing the computation time required by one to two orders of magnitude relative to that required by direct calculations. Promising correlation with this body of airload and noise data has been obtained for a variety of rotor configurations and operating conditions.

  8. Observed differences in upper extremity forces, muscle efforts, postures, velocities and accelerations across computer activities in a field study of office workers.

    PubMed

    Bruno Garza, J L; Eijckelhof, B H W; Johnson, P W; Raina, S M; Rynell, P W; Huysmans, M A; van Dieën, J H; van der Beek, A J; Blatter, B M; Dennerlein, J T

    2012-01-01

    This study, a part of the PRedicting Occupational biomechanics in OFfice workers (PROOF) study, investigated whether there are differences in field-measured forces, muscle efforts, postures, velocities and accelerations across computer activities. These parameters were measured continuously for 120 office workers performing their own work for two hours each. There were differences in nearly all forces, muscle efforts, postures, velocities and accelerations across keyboard, mouse and idle activities. Keyboard activities showed a 50% increase in the median right trapezius muscle effort when compared to mouse activities. Median shoulder rotation changed from 25 degrees internal rotation during keyboard use to 15 degrees external rotation during mouse use. Only keyboard use was associated with median ulnar deviations greater than 5 degrees. Idle activities led to the greatest variability observed in all muscle efforts and postures measured. In future studies, measurements of computer activities could be used to provide information on the physical exposures experienced during computer use. Practitioner Summary: Computer users may develop musculoskeletal disorders due to their force, muscle effort, posture and wrist velocity and acceleration exposures during computer use. We report that many physical exposures are different across computer activities. This information may be used to estimate physical exposures based on patterns of computer activities over time.

  9. The High-Performance Computing and Communications program, the national information infrastructure and health care.

    PubMed Central

    Lindberg, D A; Humphreys, B L

    1995-01-01

    The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems. PMID:7614116

  10. Cost Allocation

    DOT National Transportation Integrated Search

    2000-03-09

    The Texas Department of Transportation's (TxDOT) "smart highway" project, called TransGuide, scheduled to go on line in 1995 in San Antonio, utilizes high-speed computer technology to help drivers anticipate traffic conditions -- in an effort to inc...

  11. Geoinformatics 2007: data to knowledge

    USGS Publications Warehouse

    Brady, Shailaja R.; Sinha, A. Krishna; Gundersen, Linda C.

    2007-01-01

    Geoinformatics is the term used to describe a variety of efforts to promote collaboration between the computer sciences and the geosciences to solve complex scientific questions. It refers to the distributed, integrated digital information system and working environment that provides innovative means for the study of the Earth systems, as well as other planets, through use of advanced information technologies. Geoinformatics activities range from major research and development efforts creating new technologies to provide high-quality, sustained production-level services for data discovery, integration and analysis, to small, discipline-specific efforts that develop earth science data collections and data analysis tools serving the needs of individual communities. The ultimate vision of Geoinformatics is a highly interconnected data system populated with high quality, freely available data, as well as, a robust set of software for analysis, visualization, and modeling.

  12. Bridges to Swaziland: Using Task-Based Learning and Computer-Mediated Instruction to Improve English Language Teaching and Learning

    ERIC Educational Resources Information Center

    Pierson, Susan Jacques

    2015-01-01

    One way to provide high quality instruction for underserved English Language Learners around the world is to combine Task-Based English Language Learning with Computer- Assisted Instruction. As part of an ongoing project, "Bridges to Swaziland," these approaches have been implemented in a determined effort to improve the ESL program for…

  13. The CSM testbed software system: A development environment for structural analysis methods on the NAS CRAY-2

    NASA Technical Reports Server (NTRS)

    Gillian, Ronnie E.; Lotts, Christine G.

    1988-01-01

    The Computational Structural Mechanics (CSM) Activity at Langley Research Center is developing methods for structural analysis on modern computers. To facilitate that research effort, an applications development environment has been constructed to insulate the researcher from the many computer operating systems of a widely distributed computer network. The CSM Testbed development system was ported to the Numerical Aerodynamic Simulator (NAS) Cray-2, at the Ames Research Center, to provide a high end computational capability. This paper describes the implementation experiences, the resulting capability, and the future directions for the Testbed on supercomputers.

  14. Framework for architecture-independent run-time reconfigurable applications

    NASA Astrophysics Data System (ADS)

    Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.

    2000-10-01

    Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designer's effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.

  15. Louisiana: a model for advancing regional e-Research through cyberinfrastructure.

    PubMed

    Katz, Daniel S; Allen, Gabrielle; Cortez, Ricardo; Cruz-Neira, Carolina; Gottumukkala, Raju; Greenwood, Zeno D; Guice, Les; Jha, Shantenu; Kolluru, Ramesh; Kosar, Tevfik; Leger, Lonnie; Liu, Honggao; McMahon, Charlie; Nabrzyski, Jarek; Rodriguez-Milla, Bety; Seidel, Ed; Speyrer, Greg; Stubblefield, Michael; Voss, Brian; Whittenburg, Scott

    2009-06-28

    Louisiana researchers and universities are leading a concentrated, collaborative effort to advance statewide e-Research through a new cyberinfrastructure: computing systems, data storage systems, advanced instruments and data repositories, visualization environments and people, all linked together by software programs and high-performance networks. This effort has led to a set of interlinked projects that have started making a significant difference in the state, and has created an environment that encourages increased collaboration, leading to new e-Research. This paper describes the overall effort, the new projects and environment and the results to date.

  16. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  17. Computational Benefits Using an Advanced Concatenation Scheme Based on Reduced Order Models for RF Structures

    NASA Astrophysics Data System (ADS)

    Heller, Johann; Flisgen, Thomas; van Rienen, Ursula

    The computation of electromagnetic fields, and of parameters derived from them, for lossless radio frequency (RF) structures filled with isotropic media is an important task for the design and operation of particle accelerators. Unfortunately, these computations are often highly demanding with regard to computational effort. The overall computational demand of the problem can be reduced using decomposition schemes, so that the field problems can be solved on standard workstations. This paper presents one of the first detailed comparisons between the recently proposed state-space concatenation approach (SSC) and a direct computation for an accelerator cavity with coupler elements that break the rotational symmetry.
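
    In the concatenation scheme referred to above, each segment of the decomposed structure is represented by a reduced-order model of the generic state-space form below, and the segment models are then coupled through their interface quantities; the notation is generic and assumed here for illustration, and the coupling details are omitted.

        \dot{\mathbf{x}}_k = \mathbf{A}_k \mathbf{x}_k + \mathbf{B}_k \mathbf{u}_k,
        \qquad
        \mathbf{y}_k = \mathbf{C}_k \mathbf{x}_k + \mathbf{D}_k \mathbf{u}_k,

    where x_k is the reduced state vector of segment k and u_k, y_k are the quantities exchanged at the segment interfaces during concatenation.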

  18. National research and education network

    NASA Technical Reports Server (NTRS)

    Villasenor, Tony

    1991-01-01

    Some goals of this network are as follows: Extend U.S. technological leadership in high performance computing and computer communications; Provide wide dissemination and application of the technologies both to the speed and the pace of innovation and to serve the national economy, national security, education, and the global environment; and Spur gains in the U.S. productivity and industrial competitiveness by making high performance computing and networking technologies an integral part of the design and production process. Strategies for achieving these goals are as follows: Support solutions to important scientific and technical challenges through a vigorous R and D effort; Reduce the uncertainties to industry for R and D and use of this technology through increased cooperation between government, industry, and universities and by the continued use of government and government funded facilities as a prototype user for early commercial HPCC products; and Support underlying research, network, and computational infrastructures on which U.S. high performance computing technology is based.

  19. High performance transcription factor-DNA docking with GPU computing

    PubMed Central

    2012-01-01

    Background Protein-DNA docking is a very challenging problem in structural bioinformatics and has important implications in a number of applications, such as structure-based prediction of transcription factor binding sites and rational drug design. Protein-DNA docking is very computationally demanding due to the high cost of energy calculation and the statistical nature of conformational sampling algorithms. More importantly, experiments show that the docking quality depends on the coverage of the conformational sampling space. It is therefore desirable to accelerate the computation of the docking algorithm, not only to reduce computing time, but also to improve docking quality. Methods In an attempt to accelerate the sampling process and to improve the docking performance, we developed a graphics processing unit (GPU)-based protein-DNA docking algorithm. The algorithm employs a potential-based energy function to describe the binding affinity of a protein-DNA pair, and integrates Monte-Carlo simulation and a simulated annealing method to search through the conformational space. Algorithmic techniques were developed to improve the computation efficiency and scalability on GPU-based high performance computing systems. Results The effectiveness of our approach is tested on a non-redundant set of 75 TF-DNA complexes and a newly developed TF-DNA docking benchmark. We demonstrated that the GPU-based docking algorithm can significantly accelerate the simulation process, thereby improving the chance of finding near-native TF-DNA complex structures. This study also suggests that further improvement in protein-DNA docking research would require efforts from two integral aspects: improvement in computation efficiency and energy function design. Conclusions We present a high performance computing approach for improving the prediction accuracy of protein-DNA docking. The GPU-based docking algorithm accelerates the search of the conformational space and thus increases the chance of finding more near-native structures. To the best of our knowledge, this is the first ad hoc effort of applying GPU or GPU clusters to the protein-DNA docking problem. PMID:22759575
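
    At the heart of the sampling procedure described above is a Metropolis-style Monte-Carlo search combined with simulated annealing; the GPU contribution of the paper lies in evaluating many such moves and energy terms in parallel. The serial core of that search is sketched below with a toy one-dimensional "energy", so the energy function, proposal move, and cooling schedule are all illustrative assumptions rather than the paper's actual docking potential.

        import math, random

        def metropolis_anneal(energy, propose, x0, t_start=10.0, t_end=0.01, steps=5000):
            """Serial core of a Monte-Carlo / simulated-annealing conformational search.

            `energy` scores a conformation (lower is better), `propose` perturbs it.
            A GPU implementation would evaluate many chains/energy terms in parallel.
            """
            x, e = x0, energy(x0)
            best_x, best_e = x, e
            for step in range(steps):
                t = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling
                x_new = propose(x)
                e_new = energy(x_new)
                if e_new < e or random.random() < math.exp(-(e_new - e) / t):
                    x, e = x_new, e_new            # Metropolis acceptance
                    if e < best_e:
                        best_x, best_e = x, e
            return best_x, best_e

        # Toy 1-D "energy landscape" standing in for a protein-DNA pose score.
        best = metropolis_anneal(energy=lambda x: (x - 3.0) ** 2 + math.sin(5 * x),
                                 propose=lambda x: x + random.gauss(0, 0.5), x0=0.0)
        print(best)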

  20. Internal fluid mechanics research on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Miller, Brent A.; Anderson, Bernhard H.; Szuch, John R.

    1988-01-01

    The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion systems. The strategies used to achieve this goal are to: (1) pursue an understanding of flow physics, surface heat transfer, and combustion via analysis and fundamental experiments, (2) incorporate improved understanding of these phenomena into verified 3-D CFD codes, and (3) utilize state-of-the-art computational technology to enhance experimental and CFD research. Presented is an overview of the ICFM program in high-speed propulsion, including work in inlets, turbomachinery, and chemical reacting flows. Ongoing efforts to integrate new computer technologies, such as parallel computing and artificial intelligence, into high-speed aeropropulsion research are described.

  1. Large Scale Computing and Storage Requirements for High Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.

  2. High level cognitive information processing in neural networks

    NASA Technical Reports Server (NTRS)

    Barnden, John A.; Fields, Christopher A.

    1992-01-01

    Two related research efforts were addressed: (1) high-level connectionist cognitive modeling; and (2) local neural circuit modeling. The goals of the first effort were to develop connectionist models of high-level cognitive processes such as problem solving or natural language understanding, and to understand the computational requirements of such models. The goals of the second effort were to develop biologically-realistic models of local neural circuits, and to understand the computational behavior of such models. In keeping with the nature of NASA's Innovative Research Program, all the work conducted under the grant was highly innovative. For instance, the following ideas, all summarized, are contributions to the study of connectionist/neural networks: (1) the temporal-winner-take-all, relative-position encoding, and pattern-similarity association techniques; (2) the importation of logical combinators into connectionism; (3) the use of analogy-based reasoning as a bridge across the gap between the traditional symbolic paradigm and the connectionist paradigm; and (4) the application of connectionism to the domain of belief representation/reasoning. The work on local neural circuit modeling also departs significantly from the work of related researchers. In particular, its concentration on low-level neural phenomena that could support high-level cognitive processing is unusual within the area of biological local circuit modeling, and also serves to expand the horizons of the artificial neural net field.

  3. Advertising down, but marketing spending climbs; survey says focus is on targeting, not overviews.

    PubMed

    Burns, J

    1992-02-03

    Hard economic times forced hospitals nationwide to cut their advertising budgets in 1991. But marketing expenditures hit an all-time high, as hospitals shifted the emphasis to high-technology marketing efforts that employ computer data bases and market research to seek and find potential business.

  4. Final Report Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Patrick

    The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in the substantial likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing as possible on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.

  5. Exemplar for simulation challenges: Large-deformation micromechanics of Sylgard 184/glass microballoon syntactic foams.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Judith Alice; Long, Kevin Nicholas

    2018-05-01

    Sylgard® 184/Glass Microballoon (GMB) potting material is currently used in many NW systems. Analysts need a macroscale constitutive model that can predict material behavior under complex loading and damage evolution. To address this need, ongoing modeling and experimental efforts have focused on study of damage evolution in these materials. Micromechanical finite element simulations that resolve individual GMB and matrix components promote discovery and better understanding of the material behavior. With these simulations, we can study the role of the GMB volume fraction, time-dependent damage, behavior under confined vs. unconfined compression, and the effects of partial damage. These simulations are challenging and push the boundaries of capability even with the high performance computing tools available at Sandia. We summarize the major challenges and the current state of this modeling effort, as an exemplar of micromechanical modeling needs that can motivate advances in future computing efforts.

  6. High-resolution PET [Positron Emission Tomography] for Medical Science Studies

    DOE R&D Accomplishments Database

    Budinger, T. F.; Derenzo, S. E.; Huesman, R. H.; Jagust, W. J.; Valk, P. E.

    1989-09-01

    One of the unexpected fruits of basic physics research and the computer revolution is the noninvasive imaging power available to today's physician. Technologies that were strictly the province of research scientists only a decade or two ago now serve as the foundations for such standard diagnostic tools as x-ray computer tomography (CT), magnetic resonance imaging (MRI), magnetic resonance spectroscopy (MRS), ultrasound, single photon emission computed tomography (SPECT), and positron emission tomography (PET). Furthermore, prompted by the needs of both the practicing physician and the clinical researcher, efforts to improve these technologies continue. This booklet endeavors to describe the advantages of achieving high resolution in PET imaging.

  7. Computation of Large Turbulence Structures and Noise of Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Tam, Christopher

    1996-01-01

    Our research effort concentrated on obtaining an understanding of the generation mechanisms and the prediction of the three components of supersonic jet noise. In addition, we also developed a computational method for calculating the mean flow of turbulent high-speed jets. Below is a short description of the highlights of our contributions in each of these areas: (a) Broadband shock associated noise, (b) Turbulent mixing noise, (c) Screech tones and impingement tones, (d) Computation of the mean flow of turbulent jets.

  8. Accelerated Application Development: The ORNL Titan Experience

    DOE PAGES

    Joubert, Wayne; Archibald, Richard K.; Berrill, Mark A.; ...

    2015-05-09

    The use of computational accelerators such as NVIDIA GPUs and Intel Xeon Phi processors is now widespread in the high performance computing community, with many applications delivering impressive performance gains. However, programming these systems for high performance, performance portability and software maintainability has been a challenge. In this paper we discuss experiences porting applications to the Titan system. Titan, which began planning in 2009 and was deployed for general use in 2013, was the first multi-petaflop system based on accelerator hardware. To ready applications for accelerated computing, a preparedness effort was undertaken prior to delivery of Titan. In this paper we report experiences and lessons learned from this process and describe how users are currently making use of computational accelerators on Titan.

  9. Accelerated application development: The ORNL Titan experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joubert, Wayne; Archibald, Rick; Berrill, Mark

    2015-08-01

    The use of computational accelerators such as NVIDIA GPUs and Intel Xeon Phi processors is now widespread in the high performance computing community, with many applications delivering impressive performance gains. However, programming these systems for high performance, performance portability and software maintainability has been a challenge. In this paper we discuss experiences porting applications to the Titan system. Titan, which began planning in 2009 and was deployed for general use in 2013, was the first multi-petaflop system based on accelerator hardware. To ready applications for accelerated computing, a preparedness effort was undertaken prior to delivery of Titan. In this paper we report experiences and lessons learned from this process and describe how users are currently making use of computational accelerators on Titan.

  10. Wild Fire Computer Model Helps Firefighters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canfield, Jesse

    2012-09-04

    A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.

  11. Wild Fire Computer Model Helps Firefighters

    ScienceCinema

    Canfield, Jesse

    2018-02-14

    A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.

  12. RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chokchai "Box" Leangsuksun

    2011-05-31

    Our project is a multi-institutional research effort that adopts the interplay of RELIABILITY, AVAILABILITY, and SERVICEABILITY (RAS) aspects for solving resilience issues in high-end scientific computing in the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC systems; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.

  13. Portable parallel stochastic optimization for the design of aeropropulsion components

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Rhodes, G. S.

    1994-01-01

    This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initialize the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as a review of portable, parallel programming environments. The second effort was to implement the MSO methodology for a problem using the portable parallel programming environment, Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology can be well-applied towards large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications to which MSO can be applied, including NASA's High-Speed Civil Transport and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.
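
    The parallel-efficiency figures quoted above follow directly from the usual definition of efficiency as speedup per processor; making the reported numbers concrete:

        E(p) = \frac{S(p)}{p}, \qquad
        E(20) \approx \frac{19}{20} = 0.95, \qquad
        S(31) \approx 0.75 \times 31 \approx 23, \qquad
        S(50) \approx 0.60 \times 50 = 30.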

  14. Non-conforming finite-element formulation for cardiac electrophysiology: an effective approach to reduce the computation time of heart simulations without compromising accuracy

    NASA Astrophysics Data System (ADS)

    Hurtado, Daniel E.; Rojas, Guillermo

    2018-04-01

    Computer simulations constitute a powerful tool for studying the electrical activity of the human heart, but computational effort remains prohibitively high. In order to recover accurate conduction velocities and wavefront shapes, the mesh size in linear element (Q1) formulations cannot exceed 0.1 mm. Here we propose a novel non-conforming finite-element formulation for the non-linear cardiac electrophysiology problem that results in accurate wavefront shapes and lower mesh-dependence in the conduction velocity, while retaining the same number of global degrees of freedom as Q1 formulations. As a result, coarser discretizations of cardiac domains can be employed in simulations without significant loss of accuracy, thus reducing the overall computational effort. We demonstrate the applicability of our formulation in biventricular simulations using a coarse mesh size of ~1 mm, and show that the activation wave pattern closely follows that obtained in fine-mesh simulations at a fraction of the computation time, thus improving the accuracy-efficiency trade-off of cardiac simulations.
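
    For context, the non-linear cardiac electrophysiology problem referred to above is commonly posed as a monodomain reaction-diffusion model; a representative form (assumed here, since the abstract does not state the governing equations) is

        \chi \left( C_m \frac{\partial V}{\partial t} + I_{\mathrm{ion}}(V, \mathbf{r}) \right)
        = \nabla \cdot \left( \mathbf{D}\, \nabla V \right),
        \qquad
        \frac{\partial \mathbf{r}}{\partial t} = \mathbf{f}(V, \mathbf{r}),

    where V is the transmembrane potential, D the conductivity tensor, and r the gating/state variables; the stringent mesh-size requirement arises from resolving the steep depolarization wavefront in V.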

  15. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, C. K.; Steinberger, C. J.; Tsai, A.

    1991-01-01

    This research involves the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program was initiated to extend the present capabilities of this method for the treatment of chemically reacting flows, whereas in the DNS efforts, focus was on detailed investigations of the effects of compressibility, heat release, and nonequilibrium kinetics modeling in high-speed reacting flows. The efforts to date were primarily focussed on simulations of simple flows, namely, homogeneous compressible flows and temporally developing high-speed mixing layers. A summary of the accomplishments is provided.

  16. Application of CFD to the analysis and design of high-speed inlets

    NASA Technical Reports Server (NTRS)

    Rose, William C.

    1995-01-01

    Over the past seven years, efforts under the present Grant have been aimed at being able to apply modern Computational Fluid Dynamics to the design of high-speed engine inlets. In this report, a review of previous design capabilities (prior to the advent of functioning CFD) was presented and the example of the NASA 'Mach 5 inlet' design was given as the premier example of the historical approach to inlet design. The philosophy used in the Mach 5 inlet design was carried forward in the present study, in which CFD was used to design a new Mach 10 inlet. An example of an inlet redesign was also shown. These latter efforts were carried out using today's state-of-the-art, full computational fluid dynamics codes applied in an iterative man-in-the-loop technique. The potential usefulness of an automated machine design capability using an optimizer code was also discussed.

  17. A Large-Scale, High-Resolution Hydrological Model Parameter Data Set for Climate Change Impact Assessment for the Conterminous US

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oubeidillah, Abdoul A; Kao, Shih-Chieh; Ashfaq, Moetasim

    2014-01-01

    To extend geographical coverage, refine spatial resolution, and improve modeling efficiency, a computation- and data-intensive effort was conducted to organize a comprehensive hydrologic dataset with post-calibrated model parameters for hydro-climate impact assessment. Several key inputs for hydrologic simulation, including meteorologic forcings, soil, land class, vegetation, and elevation, were collected from multiple best-available data sources and organized for 2107 hydrologic subbasins (8-digit hydrologic units, HUC8s) in the conterminous United States at a refined 1/24 degree (~4 km) spatial resolution. Using high-performance computing for intensive model calibration, a high-resolution parameter dataset was prepared for the macro-scale Variable Infiltration Capacity (VIC) hydrologic model. The VIC simulation was driven by DAYMET daily meteorological forcing and was calibrated against USGS WaterWatch monthly runoff observations for each HUC8. The results showed that this new parameter dataset may help reasonably simulate runoff at most US HUC8 subbasins. Based on this exhaustive calibration effort, it is now possible to accurately estimate the resources required for further model improvement across the entire conterminous United States. We anticipate that through this hydrologic parameter dataset, the repeated effort of fundamental data processing can be lessened, so that research efforts can emphasize the more challenging task of assessing climate change impacts. The pre-organized model parameter dataset will be provided to interested parties to support further hydro-climate impact assessment.
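
    The calibration step described above compares VIC-simulated monthly runoff against the USGS WaterWatch observations for each HUC8. A standard goodness-of-fit measure for such a comparison is the Nash-Sutcliffe efficiency, sketched below; the use of NSE here, along with the toy numbers, is an illustrative assumption rather than a statement of the exact objective function used for this dataset.

        import numpy as np

        def nash_sutcliffe(simulated, observed):
            """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the observed mean."""
            simulated, observed = np.asarray(simulated), np.asarray(observed)
            return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

        # Toy monthly runoff series (mm) for one HUC8 subbasin.
        obs = [12.0, 30.5, 55.2, 40.1, 22.3, 10.8]
        sim = [14.1, 28.0, 50.9, 43.6, 20.2, 12.5]
        print(nash_sutcliffe(sim, obs))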

  18. Data Movement Dominates: Advanced Memory Technology to Address the Real Exascale Power Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergman, Keren

    Energy is the fundamental barrier to Exascale supercomputing and is dominated by the cost of moving data from one point to another, not computation. Similarly, performance is dominated by data movement, not computation. The solution to this problem requires three critical technologies: 3D integration, optical chip-to-chip communication, and a new communication model. The central goal of the Sandia-led "Data Movement Dominates" project was to develop memory systems and new architectures based on these technologies that have the potential to lower the cost of local memory accesses by orders of magnitude and provide substantially more bandwidth. Only through these transformational advances can future systems reach the goals of Exascale computing with manageable power budgets. The Sandia-led team included co-PIs from Columbia University, Lawrence Berkeley Lab, and the University of Maryland. The Columbia effort of Data Movement Dominates focused on developing a physically accurate simulation environment and experimental verification for optically-connected memory (OCM) systems that can enable continued performance scaling through high-bandwidth capacity, energy-efficient bit-rate transparency, and time-of-flight latency. With OCM, memory device parallelism and total capacity can scale to match future high-performance computing requirements without sacrificing data-movement efficiency. When we consider systems with integrated photonics, links to memory can be seamlessly integrated with the interconnection network; in a sense, memory becomes a primary aspect of the interconnection network. At the core of the Columbia effort toward expanding our understanding of OCM-enabled computing, we have created an integrated modeling and simulation environment that uniquely integrates the physical behavior of the optical layer. The PhoenixSim suite of design and software tools developed under this effort has enabled the co-design and performance evaluation of photonics-enabled OCM architectures on Exascale computing systems.

  19. LXtoo: an integrated live Linux distribution for the bioinformatics community

    PubMed Central

    2012-01-01

    Background Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of effort in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356

  20. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    PubMed

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of effort in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  1. Methods for Prediction of High-Speed Reacting Flows in Aerospace Propulsion

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip

    2014-01-01

    Research to develop high-speed airbreathing aerospace propulsion systems was underway in the late 1950s. A major part of the effort involved the supersonic combustion ramjet, or scramjet, engine. Work had also begun to develop computational techniques for solving the equations governing the flow through a scramjet engine. However, scramjet technology and the computational methods to assist in its evolution would remain apart for another decade. The principal barrier was that the computational methods needed for engine evolution lacked the computer technology required for solving the discrete equations resulting from the numerical methods. Even today, computer resources remain a major pacing item in overcoming this barrier. Significant advances have been made over the past 35 years, however, in modeling the supersonic chemically reacting flow in a scramjet combustor. To see how scramjet development and the required computational tools finally merged, we briefly trace the evolution of the technology in both areas.

  2. Louisiana: a model for advancing regional e-Research through cyberinfrastructure

    PubMed Central

    Katz, Daniel S.; Allen, Gabrielle; Cortez, Ricardo; Cruz-Neira, Carolina; Gottumukkala, Raju; Greenwood, Zeno D.; Guice, Les; Jha, Shantenu; Kolluru, Ramesh; Kosar, Tevfik; Leger, Lonnie; Liu, Honggao; McMahon, Charlie; Nabrzyski, Jarek; Rodriguez-Milla, Bety; Seidel, Ed; Speyrer, Greg; Stubblefield, Michael; Voss, Brian; Whittenburg, Scott

    2009-01-01

    Louisiana researchers and universities are leading a concentrated, collaborative effort to advance statewide e-Research through a new cyberinfrastructure: computing systems, data storage systems, advanced instruments and data repositories, visualization environments and people, all linked together by software programs and high-performance networks. This effort has led to a set of interlinked projects that have started making a significant difference in the state, and has created an environment that encourages increased collaboration, leading to new e-Research. This paper describes the overall effort, the new projects and environment and the results to date. PMID:19451102

  3. LES, DNS and RANS for the analysis of high-speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Adumitroaie, V.; Colucci, P. J.; Taulbee, D. B.; Givi, P.

    1995-01-01

    The purpose of this research is to continue our efforts in advancing the state of knowledge in large eddy simulation (LES), direct numerical simulation (DNS), and Reynolds averaged Navier Stokes (RANS) methods for the computational analysis of high-speed reacting turbulent flows. In the second phase of this work, covering the period 1 Aug. 1994 - 31 Jul. 1995, we have focused our efforts on two programs: (1) development of explicit algebraic moment closures for statistical descriptions of compressible reacting flows and (2) development of Monte Carlo numerical methods for LES of chemically reacting flows.

  4. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM[1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  5. Identifying Differences between Depressed Adolescent Suicide Ideators and Attempters

    PubMed Central

    Auerbach, Randy P.; Millner, Alexander J.; Stewart, Jeremy G.; Esposito, Erika

    2015-01-01

    Background Adolescent depression and suicide are pressing public health concerns, and identifying key differences between suicide ideators and attempters is critical. The goal of the current study is to test whether depressed adolescent suicide attempters report greater anhedonia severity and exhibit aberrant effort-cost computations in the face of uncertainty. Methods Depressed adolescents (n = 101) ages 13–19 years were administered structured clinical interviews to assess current mental health disorders and a history of suicidality (suicide ideators = 55, suicide attempters = 46). Then, participants completed self-report instruments assessing symptoms of suicidal ideation, depression, anhedonia, and anxiety as well as a computerized effort-cost computation task. Results Compared with depressed adolescent suicide ideators, attempters report greater anhedonia severity, even after concurrently controlling for symptoms of suicidal ideation, depression, and anxiety. Additionally, when completing the effort-cost computation task, suicide attempters are less likely to pursue the difficult, high-value option when outcomes are uncertain. Follow-up, trial-level analyses of effort-cost computations suggest that receipt of reward does not influence future decision-making among suicide attempters; however, suicide ideators exhibit a win-stay approach when receiving rewards on previous trials. Limitations Findings should be considered in light of limitations including a modest sample size, which limits generalizability, and the cross-sectional design. Conclusions Depressed adolescent suicide attempters are characterized by greater anhedonia severity, which may impair the ability to integrate previous rewarding experiences to inform future decisions. Taken together, this may generate a feeling of powerlessness that contributes to increased suicidality and a needless loss of life. PMID:26233323

  6. Space mapping method for the design of passive shields

    NASA Astrophysics Data System (ADS)

    Sergeant, Peter; Dupré, Luc; Melkebeek, Jan

    2006-04-01

    The aim of the paper is to find the optimal geometry of a passive shield for the reduction of the magnetic stray field of an axisymmetric induction heater. For the optimization, a space mapping algorithm is used that requires two models. The first is an accurate model with a high computational effort as it contains finite element models. The second is less accurate, but it has a low computational effort as it uses an analytical model: the shield is replaced by a number of mutually coupled coils. The currents in the shield are found by solving an electrical circuit. Space mapping combines both models to obtain the optimal passive shield quickly and accurately. The presented optimization technique is compared with gradient, simplex, and genetic algorithms.
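
    The space-mapping loop described above can be sketched in one dimension: the cheap model is optimized freely against a design target, and occasional evaluations of the expensive model correct the design. The two exponential functions below are invented stand-ins for the finite element and coupled-coil models, so this is a toy illustration of the idea rather than the paper's implementation:

    ```python
    # Toy space-mapping loop: one expensive ("fine") model evaluation per iteration,
    # with a cheap ("coarse") model doing all of the optimization work.
    import math
    from scipy.optimize import brentq

    def fine(x):     # stand-in for the expensive finite-element computation
        return math.exp(x - 2.0)

    def coarse(x):   # stand-in for the fast analytical (coupled-coil) model
        return math.exp(x - 1.5)

    target = 1.0
    # Optimize the cheap model against the target (many cheap evaluations allowed).
    z_star = brentq(lambda z: coarse(z) - target, -10.0, 10.0)

    x = z_star
    for _ in range(5):
        f_val = fine(x)                                       # one expensive evaluation
        z = brentq(lambda z: coarse(z) - f_val, -10.0, 10.0)  # parameter extraction
        x += z_star - z                                       # space-mapping update
        if abs(f_val - target) < 1e-9:
            break

    print(round(x, 3))  # converges to the fine-model design that meets the target (2.0)
    ```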

  7. Improve SSME power balance model

    NASA Technical Reports Server (NTRS)

    Karr, Gerald R.

    1992-01-01

    Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.

  8. Crop-phenology and LANDSAT-based irrigated lands inventory in the high plains. [Texas, New Mexico, Oklahoma, Kansas, Colorado, Nebraska, Wyoming, and South Dakota]

    NASA Technical Reports Server (NTRS)

    Martinko, E. A. (Principal Investigator); Poracsky, J.; Kipp, E. R.; Krieger, H.

    1981-01-01

    Crop calendars for 1979 and 1980 were investigated in support of an effort to develop techniques for mapping the High Plains aquifer region. Optimal LANDSAT image dates for 1980 were preliminarily identified based on ESS weekly crop weather reports, and 1979 ESS agricultural statistics were entered into the computer. A questionnaire was compiled and sent to ASCS county agents with the approval of the Extension Directors in each state involved. Data from returned questionnaires were tabulated, and development started on a set of computer programs to allow the preparation of computer-assisted graphic displays of much of the collected data.

  9. Computational methods in metabolic engineering for strain design.

    PubMed

    Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L

    2015-08-01

    Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native enzymes, heterologous enzymes, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as suggest strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve runtime performance have also been developed, which allow for more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Changing computing paradigms towards power efficiency

    PubMed Central

    Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro

    2014-01-01

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
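
    A minimal sketch of the low-/high-precision combination described above, in the form of classical mixed-precision iterative refinement: the expensive solve runs in single precision, and cheap double-precision residual corrections restore full accuracy. This is our illustration of the general idea on a synthetic, well-conditioned test matrix, not the authors' code:

    ```python
    # Mixed-precision iterative refinement: factor/solve in float32, refine in float64.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    A = rng.standard_normal((n, n)) + n * np.eye(n)    # well-conditioned test matrix
    b = rng.standard_normal(n)

    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)   # low-precision "workhorse" solve

    for _ in range(5):
        r = b - A @ x                                  # residual in high precision
        dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += dx                                        # correction step

    print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # small relative residual
    ```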

  11. Daily Magnetograms for 1978 from the AFGL (Air Force Geophysics Laboratory) Network.

    DTIC Science & Technology

    1985-01-08

    school student, carried out much of the computer work that produced the edited data shown in these plots and who lost his life in October 1982 while...dedicated efforts of a group of college and high-school students. Armand Paboojian supervised and personally undertook much of the production of the...F. Devane, S.J., directed the contractual effort of Boston College under which most of the students worked. R. Laperriere carried out a large

  12. High Performance, Dependable Multiprocessor

    NASA Technical Reports Server (NTRS)

    Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric; hide

    2006-01-01

    With the ever increasing demand for higher bandwidth and processing capacity of today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power efficient, high performance, highly dependable, fault tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort Honeywell has teamed up with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.

  13. Reward Merit with Praise.

    ERIC Educational Resources Information Center

    Andrews, Hans A.

    1987-01-01

    Describes the efforts of two educational institutions to reward teaching excellence using positive feedback rather than merit pay incentives. An Arizona district, drawing on Herzberg's motivation theories, offers highly individualized rewards ranging from computers to conference money, while an Illinois community college bestows engraved plaques…

  14. LES, DNS and RANS for the analysis of high-speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Taulbee, Dale B.; Adumitroaie, Virgil; Sabini, George J.; Shieh, Geoffrey S.

    1994-01-01

    The purpose of this research is to continue our efforts in advancing the state of knowledge in large eddy simulation (LES), direct numerical simulation (DNS), and Reynolds averaged Navier Stokes (RANS) methods for the computational analysis of high-speed reacting turbulent flows. In the second phase of this work, covering the period 1 Sep. 1993 - 1 Sep. 1994, we have focused our efforts on two research problems: (1) development of 'algebraic' moment closures for statistical descriptions of nonpremixed reacting systems, and (2) assessment of the Dirichlet frequency in presumed scalar probability density function (PDF) methods in the stochastic description of turbulent reacting flows. This report provides a complete description of our efforts during this past year as supported by the NASA Langley Research Center under Grant NAG1-1122.

  15. Special issue of Computers and Fluids in honor of Cecil E. (Chuck) Leith

    DOE PAGES

    Zhou, Ye; Herring, Jackson

    2017-05-12

    Here, this special issue of Computers and Fluids is dedicated to Cecil E. (Chuck) Leith in honor of his research contributions, leadership in the areas of statistical fluid mechanics, computational fluid dynamics, and climate theory. Leith's contribution to these fields emerged from his interest in solving complex fluid flow problems--even those at high Mach numbers--in an era well before large scale supercomputing became the dominant mode of inquiry into these fields. Yet the issues raised and solved by his research effort are still of vital interest today.

  16. Special issue of Computers and Fluids in honor of Cecil E. (Chuck) Leith

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ye; Herring, Jackson

    Here, this special issue of Computers and Fluids is dedicated to Cecil E. (Chuck) Leith in honor of his research contributions, leadership in the areas of statistical fluid mechanics, computational fluid dynamics, and climate theory. Leith's contribution to these fields emerged from his interest in solving complex fluid flow problems--even those at high Mach numbers--in an era well before large scale supercomputing became the dominant mode of inquiry into these fields. Yet the issues raised and solved by his research effort are still of vital interest today.

  17. Using Computational Toxicology to Enable Risk-Based ...

    EPA Pesticide Factsheets

    Slide presentation at Drug Safety Gordon Research Conference 2016 on research efforts in NCCT to enable Computational Toxicology to support risk assessment.

  18. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, Craig J.

    1990-01-01

    This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program to extend the present capabilities of this method was initiated for the treatment of chemically reacting flows. In the DNS efforts, the focus is on detailed investigations of the effects of compressibility, heat release, and non-equilibrium kinetics modeling in high speed reacting flows. Emphasis was on the simulation of simple flows, namely homogeneous compressible flows and temporally developing high speed mixing layers.

  19. High Performance Computing for Modeling Wind Farms and Their Impact

    NASA Astrophysics Data System (ADS)

    Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.

    2016-12-01

    As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental-scale flows down to the flow over a wind turbine blade, including the blade boundary layer, spanning fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the smaller-scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development as well as improvements in wind plant performance and enhancements to the transmission infrastructure will also be discussed.

  20. A Systematic Approach to Improving E-Learning Implementations in High Schools

    ERIC Educational Resources Information Center

    Pardamean, Bens; Suparyanto, Teddy

    2014-01-01

    This study was based on the current growing trend of implementing e-learning in high schools. Most endeavors have been inefficient, motivating the objective of determining the initial steps that could be taken to improve these efforts by assessing a student population's computer skill levels and performance in an IT course. Demographic factors were…

  1. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; Russell, Samuel S.

    2012-01-01

    Objective Develop a software application utilizing high performance computing techniques, including general purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general purpose fashion is allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU which yield significant increases in performance. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.
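
    The data-parallel pattern described above can be illustrated with a per-pixel composite over a stack of frames: each output pixel depends only on the same pixel across frames, which is why such computations map well onto GPU hardware. NumPy is used as a CPU stand-in and the "thermographic" frames are synthetic; this is not the application's code:

    ```python
    # Element-wise (data-parallel) compositing over a synthetic stack of frames.
    import numpy as np

    frames = np.random.rand(64, 512, 512)        # synthetic stack of frames

    # Each output pixel is computed from the same pixel across frames only,
    # never from its neighbours, so every pixel can be processed independently.
    peak      = frames.max(axis=0)               # peak signal per pixel
    mean      = frames.mean(axis=0)
    composite = (peak - mean) / (peak + 1e-12)   # simple contrast-style composite

    print(composite.shape)                       # (512, 512)
    ```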

  2. Cyber Technology for Materials and Structures in Aeronautics and Aerospace

    NASA Technical Reports Server (NTRS)

    Pipes, R. Byron

    1999-01-01

    This report summarizes efforts undertaken during the 1998-99 program year and includes a survey of the field of computational mechanics, a discussion of biomimetics and intelligent simulation, a survey of the field of biomimetics, and an illustration of biomimetics and computational mechanics through the example of the high performance composite tensile structure. In addition, the preliminary results of a state-of-the-art survey of composite materials technology are presented.

  3. AstroGrid: Taverna in the Virtual Observatory .

    NASA Astrophysics Data System (ADS)

    Benson, K. M.; Walton, N. A.

    This paper reports on the implementation of the Taverna workbench by AstroGrid, a tool for designing and executing workflows of tasks in the Virtual Observatory. The workflow approach helps astronomers perform complex task sequences with little technical effort. The visual approach to workflow construction streamlines highly complex analyses over public and private data and requires computational resources as modest as a desktop computer. Some integration issues and future work are discussed in this article.

  4. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while keeping the computational effort low enough to calculate time histories quickly enough for on-board applications.
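
    A schematic sketch of the refinement decision described above: each element carries its own polynomial order, and a local error indicator decides whether to raise the order (p-refinement) or bisect the element (h-refinement). The mesh, error indicator, and tolerance below are hypothetical; this is not the FORTRAN code developed in this effort:

    ```python
    # hp-refinement sketch: refine only where the local error indicator is large.
    from dataclasses import dataclass

    @dataclass
    class Element:
        left: float
        right: float
        p: int          # polynomial order of the shape functions in this element

    def refine(elements, error, tol, p_max=8):
        out = []
        for e in elements:
            if error(e) <= tol:
                out.append(e)                                     # leave coarse
            elif e.p < p_max:
                out.append(Element(e.left, e.right, e.p + 1))     # p-refinement
            else:
                mid = 0.5 * (e.left + e.right)                    # h-refinement
                out.append(Element(e.left, mid, e.p))
                out.append(Element(mid, e.right, e.p))
        return out

    # Hypothetical indicator: error is largest near t = 0 and shrinks with order p.
    mesh = [Element(i / 4, (i + 1) / 4, p=2) for i in range(4)]
    mesh = refine(mesh, lambda e: 1.0 / ((1.0 + 10 * e.left) * e.p), tol=0.1)
    print([(e.left, e.right, e.p) for e in mesh])
    ```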

  5. Biologically Relevant Exposure Science for 21st Century Toxicity Testing

    EPA Science Inventory

    High visibility efforts in toxicity testing and computational toxicology including the recent NRC report, Toxicity Testing in the 21st Century: a Vision and Strategy (NRC, 2007), raise important research questions and opportunities for the field of exposure science. The authors ...

  6. High-Tech Conservation: Information-Age Tools Have Revolutionized the Work of Ecologists.

    ERIC Educational Resources Information Center

    Chiles, James R.

    1992-01-01

    Describes a new direction for conservation efforts influenced by the advance of the information age and the introduction of many technologically sophisticated information collecting devices. Devices include microscopic computer chips, miniature electronic components, and Earth-observation satellites. (MCO)

  7. Current state and future direction of computer systems at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased there has been an equally dramatic reduction in cost. This constant cost performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  8. Six Years of Parallel Computing at NAS (1987 - 1993): What Have we Learned?

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In the fall of 1987 the age of parallelism at NAS began with the installation of a 32K processor CM-2 from Thinking Machines. In 1987 this was described as an "experiment" in parallel processing. In the six years since, NAS acquired a series of parallel machines, and conducted an active research and development effort focused on the use of highly parallel machines for applications in the computational aerosciences. In this time period parallel processing for scientific applications evolved from a fringe research topic into one of the main activities at NAS. In this presentation I will review the history of parallel computing at NAS in the context of the major progress that has been made in the field in general. I will attempt to summarize the lessons we have learned so far, and the contributions NAS has made to the state of the art. Based on these insights I will comment on the current state of parallel computing (including the HPCC effort) and try to predict some trends for the next six years.

  9. Wind tunnel wall interference

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Mineck, Raymond E.; Barnwell, Richard W.; Kemp, William B., Jr.

    1986-01-01

    About a decade ago, interest in alleviating wind tunnel wall interference was renewed by advances in computational aerodynamics, concepts of adaptive test section walls, and plans for high Reynolds number transonic test facilities. Selection of NASA Langley cryogenic concept for the National Transonic Facility (NTF) tended to focus the renewed wall interference efforts. A brief overview and current status of some Langley sponsored transonic wind tunnel wall interference research are presented. Included are continuing efforts in basic wall flow studies, wall interference assessment/correction procedures, and adaptive wall technology.

  10. Issues in undergraduate education in computational science and high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchioro, T.L. II; Martin, D.

    1994-12-31

    The ever increasing need for mathematical and computational literacy within their society and among members of the work force has generated enormous pressure to revise and improve the teaching of related subjects throughout the curriculum, particularly at the undergraduate level. The Calculus Reform movement is perhaps the best known example of an organized initiative in this regard. The UCES (Undergraduate Computational Engineering and Science) project, an effort funded by the Department of Energy and administered through the Ames Laboratory, is sponsoring an informal and open discussion of the salient issues confronting efforts to improve and expand the teaching of computational science as a problem oriented, interdisciplinary approach to scientific investigation. Although the format is open, the authors hope to consider pertinent questions such as: (1) How can faculty and research scientists obtain the recognition necessary to further excellence in teaching the mathematical and computational sciences? (2) What sort of educational resources--both hardware and software--are needed to teach computational science at the undergraduate level? Are traditional procedural languages sufficient? Are PCs enough? Are massively parallel platforms needed? (3) How can electronic educational materials be distributed in an efficient way? Can they be made interactive in nature? How should such materials be tied to the World Wide Web and the growing "Information Superhighway"?

  11. The SGI/CRAY T3E: Experiences and Insights

    NASA Technical Reports Server (NTRS)

    Bernard, Lisa Hamet

    1999-01-01

    The focus of the HPCC Earth and Space Sciences (ESS) Project is capability computing - pushing highly scalable computing testbeds to their performance limits. The drivers of this focus are the Grand Challenge problems in Earth and space science: those that could not be addressed in a capacity computing environment where large jobs must continually compete for resources. These Grand Challenge codes require a high degree of communication, large memory, and very large I/O (throughout the duration of the processing, not just in loading initial conditions and saving final results). This set of parameters led to the selection of an SGI/Cray T3E as the current ESS Computing Testbed. The T3E at the Goddard Space Flight Center is a unique computational resource within NASA. As such, it must be managed to effectively support the diverse research efforts across the NASA research community yet still enable the ESS Grand Challenge Investigator teams to achieve their performance milestones, for which the system was intended. To date, all Grand Challenge Investigator teams have achieved the 10 GFLOPS milestone, eight of nine have achieved the 50 GFLOPS milestone, and three have achieved the 100 GFLOPS milestone. In addition, many technical papers have been published highlighting results achieved on the NASA T3E, including some at this Workshop. The successes enabled by the NASA T3E computing environment are best illustrated by the 512 PE upgrade funded by the NASA Earth Science Enterprise earlier this year. Never before has an HPCC computing testbed been so well received by the general NASA science community that it was deemed critical to the success of a core NASA science effort. NASA looks forward to many more success stories before the conclusion of the NASA-SGI/Cray cooperative agreement in June 1999.

  12. High-throughput search for caloric materials: the CaloriCool approach

    NASA Astrophysics Data System (ADS)

    Zarkevich, N. A.; Johnson, D. D.; Pecharsky, V. K.

    2018-01-01

    The high-throughput search paradigm adopted by the newly established caloric materials consortium—CaloriCool®—with the goal to substantially accelerate discovery and design of novel caloric materials is briefly discussed. We begin with describing material selection criteria based on known properties, which are then followed by fast heuristic estimates and ab initio calculations, all of which have been implemented in a set of automated computational tools and measurements. We also demonstrate how theoretical and computational methods serve as a guide for experimental efforts by considering a representative example from the field of magnetocaloric materials.

  13. High-throughput search for caloric materials: the CaloriCool approach

    DOE PAGES

    Zarkevich, Nikolai A.; Johnson, Duane D.; Pecharsky, V. K.

    2017-12-13

    The high-throughput search paradigm adopted by the newly established caloric materials consortium—CaloriCool®—with the goal to substantially accelerate discovery and design of novel caloric materials is briefly discussed. Here, we begin with describing material selection criteria based on known properties, which are then followed by fast heuristic estimates and ab initio calculations, all of which have been implemented in a set of automated computational tools and measurements. We also demonstrate how theoretical and computational methods serve as a guide for experimental efforts by considering a representative example from the field of magnetocaloric materials.

  14. Integrating CFD, CAA, and Experiments Towards Benchmark Datasets for Airframe Noise Problems

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Yamamoto, Kazuomi

    2012-01-01

    Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. The combination of geometric complexity, high Reynolds number turbulence, multiple regions of separation, and a strong coupling with adjacent physical components makes the problem of airframe noise highly challenging. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling systematic progress in the understanding and high-fidelity predictions of airframe noise via collaborative investigations that integrate state-of-the-art computational fluid dynamics, computational aeroacoustics, and in-depth, holistic, and multi-facility measurements targeting a selected set of canonical yet realistic configurations. This paper provides a brief summary of the BANC effort, including its technical objectives, strategy, and selective outcomes thus far.

  15. Massive parallelization of serial inference algorithms for a complex generalized linear model

    PubMed Central

    Suchard, Marc A.; Simpson, Shawn E.; Zorych, Ivan; Ryan, Patrick; Madigan, David

    2014-01-01

    Following a series of high-profile drug safety disasters in recent years, many countries are redoubling their efforts to ensure the safety of licensed medical products. Large-scale observational databases such as claims databases or electronic health record systems are attracting particular attention in this regard, but present significant methodological and computational concerns. In this paper we show how high-performance statistical computation, including graphics processing units (GPUs), relatively inexpensive and highly parallel computing devices, can enable complex methods in large databases. We focus on optimization and massive parallelization of cyclic coordinate descent approaches to fit a conditioned generalized linear model involving tens of millions of observations and thousands of predictors in a Bayesian context. We find orders-of-magnitude improvement in overall run-time. Coordinate descent approaches are ubiquitous in high-dimensional statistics and the algorithms we propose open up exciting new methodological possibilities with the potential to significantly improve drug safety. PMID:25328363
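
    As a small illustration of the serial algorithm being parallelized, the sketch below runs cyclic coordinate descent for an L2-regularized logistic regression, updating one coefficient at a time with a one-dimensional Newton step on synthetic data. It conveys the structure of the method only and is not the authors' GPU implementation:

    ```python
    # Cyclic coordinate descent for L2-regularized logistic regression.
    import numpy as np

    def ccd_logistic(X, y, lam=1.0, sweeps=50):
        n, d = X.shape
        beta = np.zeros(d)
        eta = X @ beta                        # running linear predictor
        for _ in range(sweeps):
            for j in range(d):                # cycle through the coordinates
                p = 1.0 / (1.0 + np.exp(-eta))
                grad = X[:, j] @ (p - y) + lam * beta[j]
                hess = (X[:, j] ** 2) @ (p * (1 - p)) + lam
                step = grad / hess            # one-dimensional Newton step
                beta[j] -= step
                eta -= step * X[:, j]         # keep the predictor consistent
        return beta

    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 10))
    true_beta = np.array([2.0, -1.0] + [0.0] * 8)
    y = (rng.random(500) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
    print(np.round(ccd_logistic(X, y), 2))    # roughly recovers the generating coefficients
    ```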

  16. Changing computing paradigms towards power efficiency.

    PubMed

    Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro

    2014-06-28

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  17. Aircraft integrated design and analysis: A classroom experience

    NASA Technical Reports Server (NTRS)

    1988-01-01

    AAE 451 is the capstone course required of all senior undergraduates in the School of Aeronautics and Astronautics at Purdue University. During the past year the first steps of a long evolutionary process were taken to change the content and expectations of this course. These changes are the result of the availability of advanced computational capabilities and sophisticated electronic media at Purdue. This presentation will describe both the long range objectives and this year's experience using the High Speed Commercial Transport (HSCT) design, the AIAA Long Duration Aircraft design and a Remotely Piloted Vehicle (RPV) design proposal as project objectives. The central goal of these efforts was to provide a user-friendly, computer-software-based environment to supplement traditional design course methodology. The Purdue University Computer Center (PUCC), the Engineering Computer Network (ECN), and stand-alone PC's were used for this development. This year's accomplishments centered primarily on aerodynamics software obtained from the NASA Langley Research Center and its integration into the classroom. Word processor capability for oral and written work and computer graphics were also blended into the course. A total of 10 HSCT designs were generated, ranging from twin-fuselage and forward-swept wing aircraft, to the more traditional delta and double-delta wing aircraft. Four Long Duration Aircraft designs were submitted, together with one RPV design tailored for photographic surveillance. Supporting these activities were three video satellite lectures beamed from NASA/Langley to Purdue. These lectures covered diverse areas such as an overview of HSCT design, supersonic-aircraft stability and control, and optimization of aircraft performance. Plans for next year's effort will be reviewed, including dedicated computer workstation utilization, remote satellite lectures, and university/industrial cooperative efforts.

  18. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2017-10-01

    Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser plasma interaction. Being computationally intensive, these codes require large scale HPC systems, and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts on deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems, and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed performance evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.

  19. Use of Massive Parallel Computing Libraries in the Context of Global Gravity Field Determination from Satellite Data

    NASA Astrophysics Data System (ADS)

    Brockmann, J. M.; Schuh, W.-D.

    2011-07-01

    The estimation of the global Earth's gravity field parametrized as a finite spherical harmonic series is computationally demanding. The computational effort depends on the one hand on the maximal resolution of the spherical harmonic expansion (i.e. the number of parameters to be estimated) and on the other hand on the number of observations (which run into the millions for, e.g., observations from the GOCE satellite missions). To circumvent these restrictions, a massively parallel software package based on high-performance computing (HPC) libraries such as ScaLAPACK, PBLAS and BLACS was designed in the context of GOCE HPF WP6000 and the GOCO consortium. A prerequisite for the use of these libraries is that all matrices are block-cyclically distributed on a processor grid composed of a large number of (distributed memory) computers. Using this set of standard HPC libraries has the benefit that once the matrices are distributed across the computer cluster, a huge set of efficient and highly scalable linear algebra operations can be used.
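
    The block-cyclic layout that ScaLAPACK and PBLAS assume can be written down for the one-dimensional case: global index I with block size NB on P processes is owned by process (I // NB) mod P. The helper below is a generic sketch of that standard mapping, not code from the GOCE processing software:

    ```python
    # 1-D block-cyclic mapping: global index -> (owning process, local index).
    def block_cyclic_owner(I, NB, P):
        block = I // NB
        return block % P, (block // P) * NB + I % NB

    # Example: 10 global rows, block size 2, 3 processes.
    NB, P = 2, 3
    for I in range(10):
        proc, local = block_cyclic_owner(I, NB, P)
        print(f"global {I} -> process {proc}, local {local}")
    ```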

  20. Shor's factoring algorithm and modern cryptography. An illustration of the capabilities inherent in quantum computers

    NASA Astrophysics Data System (ADS)

    Gerjuoy, Edward

    2005-06-01

    The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
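
    The classical reduction behind the algorithm can be shown with small numbers: once the order r of a random base a modulo N is known, factors of N follow from gcd(a^(r/2) ± 1, N) whenever r is even and a^(r/2) is not congruent to -1 mod N. In the sketch below the order is found by brute force, which is precisely the step the quantum part of Shor's algorithm replaces; the retry loop mirrors the point made above that several runs are generally needed:

    ```python
    # Classical post-processing of Shor's algorithm, with a brute-force order finder
    # standing in for the quantum subroutine (practical for tiny N only).
    import math
    import random

    def order(a, N):
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def shor_factor(N, tries=20):
        for _ in range(tries):
            a = random.randrange(2, N)
            g = math.gcd(a, N)
            if g > 1:
                return g                      # lucky guess already shares a factor
            r = order(a, N)                   # a quantum computer would supply this
            if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
                f = math.gcd(pow(a, r // 2, N) - 1, N)
                if 1 < f < N:
                    return f
        return None

    print(shor_factor(15))   # 3 or 5
    ```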

  1. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572

  2. Data Storage Hierarchy Systems for Data Base Computers

    DTIC Science & Technology

    1979-08-01

    ...with very large capacity and small access time. As part of the INFOPLEX research effort, this thesis is focused on the study of high performance, highly

  3. Cooperative high-performance storage in the accelerated strategic computing initiative

    NASA Technical Reports Server (NTRS)

    Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark

    1996-01-01

    The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.

  4. Proposed Directions for Research in Computer-Based Education.

    ERIC Educational Resources Information Center

    Waugh, Michael L.

    Several directions for potential research efforts in the field of computer-based education (CBE) are discussed. (For the purposes of this paper, CBE is defined as any use of computers to promote learning with no intended inference as to the specific nature or organization of the educational application under discussion.) Efforts should be directed…

  5. Reprocessing Multiyear GPS Data from Continuously Operating Reference Stations on Cloud Computing Platform

    NASA Astrophysics Data System (ADS)

    Yoon, S.

    2016-12-01

    To define the geodetic reference frame using GPS data collected by the Continuously Operating Reference Stations (CORS) network, historical GPS data needs to be reprocessed regularly. Reprocessing GPS data collected by up to 2000 CORS sites for the last two decades requires a lot of computational resources. At the National Geodetic Survey (NGS), there has been one completed reprocessing in 2011, and currently, the second reprocessing is underway. For the first reprocessing effort, in-house computing resources were utilized. In the current second reprocessing effort, an outsourced cloud computing platform is being utilized. In this presentation, the outline of the data processing strategy at NGS is described, as well as the effort to parallelize the data processing procedure in order to maximize the benefit of cloud computing. The time and cost savings realized by utilizing the cloud computing approach will also be discussed.

  6. Costa - Introduction to 2015 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, James E.

    In parallel with Sandia National Laboratories having two major locations (NM and CA), along with a number of smaller facilities across the nation, so too is the distribution of scientific, engineering and computing resources. As a part of Sandia’s Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure, as machines grow larger, more complex and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating development and integration of high performance computing into national security missions. Sandia continues to both promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative, and work to ensure that the full range of computing services and capabilities are available for all mission responsibilities, from national security to energy to homeland defense.

  7. Incentive motivation deficits in schizophrenia reflect effort computation impairments during cost-benefit decision-making.

    PubMed

    Fervaha, Gagan; Graff-Guerrero, Ariel; Zakzanis, Konstantine K; Foussias, George; Agid, Ofer; Remington, Gary

    2013-11-01

    Motivational impairments are a core feature of schizophrenia and although there are numerous reports studying this feature using clinical rating scales, objective behavioural assessments are lacking. Here, we use a translational paradigm to measure incentive motivation in individuals with schizophrenia. Sixteen stable outpatients with schizophrenia and sixteen matched healthy controls completed a modified version of the Effort Expenditure for Rewards Task that accounts for differences in motoric ability. Briefly, subjects were presented with a series of trials where they may choose to expend a greater amount of effort for a larger monetary reward versus less effort for a smaller reward. Additionally, the probability of receiving money for a given trial was varied at 12%, 50% and 88%. Clinical and other reward-related variables were also evaluated. Patients opted to expend greater effort significantly less than controls for trials of high, but uncertain (i.e. 50% and 88% probability) incentive value, which was related to amotivation and neurocognitive deficits. Other abnormalities were also noted but were related to different clinical variables such as impulsivity (low reward and 12% probability). These motivational deficits were not due to group differences in reward learning, reward valuation or hedonic capacity. Our findings offer novel support for incentive motivation deficits in schizophrenia. Clinical amotivation is associated with impairments in the computation of effort during cost-benefit decision-making. This objective translational paradigm may guide future investigations of the neural circuitry underlying these motivational impairments. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Markerless laser registration in image-guided oral and maxillofacial surgery.

    PubMed

    Marmulla, Rüdiger; Lüth, Tim; Mühling, Joachim; Hassfeld, Stefan

    2004-07-01

    The use of registration markers in computer-assisted surgery is associated with high logistic costs and effort. Markerless patient registration using laser scan surface registration techniques is a new and challenging method. The present study was performed to evaluate the clinical accuracy in finding defined target points within the surgical site after markerless patient registration in image-guided oral and maxillofacial surgery. Twenty consecutive patients with different cranial diseases were scheduled for computer-assisted surgery. Data set alignment between the surgical site and the computed tomography (CT) data set was performed by markerless laser scan surface registration of the patient's face. Intraoral rigidly attached registration markers were used as target points, which had to be detected by an infrared pointer. The Surgical Segment Navigator SSN++ has been used for all procedures. SSN++ is an investigative product based on the SSN system that had previously been developed by the presenting authors with the support of Carl Zeiss (Oberkochen, Germany). SSN++ is connected to a Polaris infrared camera (Northern Digital, Waterloo, Ontario, Canada) and to a Minolta VI 900 3D digitizer (Tokyo, Japan) for high-resolution laser scanning. Minimal differences in shape between the laser scan surface and the surface generated from the CT data set could be detected. Nevertheless, high-resolution laser scanning of the skin surface allows for precise patient registration (mean deviation 1.1 mm, maximum deviation 1.8 mm). Radiation load, logistic costs, and efforts arising from the planning of computer-assisted surgery of the head can be reduced because native (markerless) CT data sets can be used for laser scan-based surface registration.
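
    The core operation behind surface-based registration, rigidly aligning one point set to another, can be sketched with the Kabsch/Procrustes solution. This generic illustration assumes corresponding points are already known and uses synthetic data; it is not the SSN++ software, which matches a laser-scan surface to a CT-derived surface:

    ```python
    # Rigid point-set alignment (Kabsch): find R, t minimizing ||R p_i + t - q_i||.
    import numpy as np

    def rigid_align(P, Q):
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)                 # cross-covariance of centred clouds
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t

    # Synthetic check: recover a known rotation and translation of a random cloud.
    rng = np.random.default_rng(2)
    P = rng.random((100, 3))
    angle = 0.3
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
    R, t = rigid_align(P, Q)
    print(np.allclose(P @ R.T + t, Q))            # True
    ```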

  9. Multiprocessor architecture: Synthesis and evaluation

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1990-01-01

    Multiprocessor computer architecture evaluation for structural computations is the focus of the research effort described. Results obtained are expected to lead to more efficient use of existing architectures and to suggest designs for new, application specific, architectures. The brief descriptions given outline a number of related efforts directed toward this purpose. The difficulty in analyzing an existing architecture or in designing a new computer architecture lies in the fact that the performance of a particular architecture, within the context of a given application, is determined by a number of factors. These include, but are not limited to, the efficiency of the computation algorithm, the programming language and support environment, the quality of the program written in the programming language, the multiplicity of the processing elements, the characteristics of the individual processing elements, the interconnection network connecting processors and non-local memories, and the shared memory organization covering the spectrum from no shared memory (all local memory) to one global access memory. These performance determiners may be loosely classified as being software or hardware related. This distinction is not clear or even appropriate in many cases. The effect of the choice of algorithm is ignored by assuming that the algorithm is specified as given. Effort directed toward the removal of the effect of the programming language and program resulted in the design of a high-level parallel programming language. Two characteristics of the fundamental structure of the architecture (memory organization and interconnection network) are examined.

  10. Technology and Sexuality--What's the Connection? Addressing Youth Sexualities in Efforts to Increase Girls' Participation in Computing

    ERIC Educational Resources Information Center

    Ashcraft, Catherine

    2015-01-01

    To date, girls and women are significantly underrepresented in computer science and technology. Concerns about this underrepresentation have sparked a wealth of educational efforts to promote girls' participation in computing, but these programs have demonstrated limited impact on reversing current trends. This paper argues that this is, in part,…

  11. Detonation product EOS studies: Using ISLS to refine CHEETAH

    NASA Astrophysics Data System (ADS)

    Zaug, Joseph; Fried, Larry; Hansen, Donald

    2001-06-01

    Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a suite of non-ideal simple fluids and fluid mixtures. Impulsive Stimulated Light Scattering conducted in the diamond-anvil cell offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model CHEETAH. Computational models are systematically improved with each addition of experimental data. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.

  12. Application of Multi-Frequency Modulation (MFM) for High-Speed Data Communications to a Voice Frequency Channel

    DTIC Science & Technology

    1990-06-01

    reader is cautioned that computer programs developed in this research may not have been exercised for all cases of interest. While every effort has been...formats. Previous applications of these encoding formats were on industry standard computers (PC) over a 16-20 kHz channel. This report discusses the

  13. Technology advances and market forces: Their impact on high performance architectures

    NASA Technical Reports Server (NTRS)

    Best, D. R.

    1978-01-01

    Reasonable projections into future supercomputer architectures and technology require an analysis of the computer industry market environment, the current capabilities and trends within the component industry, and the research activities on computer architecture in the industrial and academic communities. Management, programmer, architect, and user must cooperate to increase the efficiency of supercomputer development efforts. Care must be taken to match the funding, compiler, architecture and application with greater attention to testability, maintainability, reliability, and usability than supercomputer development programs of the past.

  14. Geoinformatics 2008 - Data to Knowledge

    USGS Publications Warehouse

    Brady, Shailaja R.; Sinha, A. Krishna; Gundersen, Linda C.

    2008-01-01

    Geoinformatics is the term used to describe a variety of efforts to promote collaboration between the computer sciences and the geosciences to solve complex scientific questions. It refers to the distributed, integrated digital information system and working environment that provides innovative means for the study of the Earth systems, as well as other planets, through use of advanced information technologies. Geoinformatics activities range from major research and development efforts creating new technologies to provide high-quality, sustained production-level services for data discovery, integration and analysis, to small, discipline-specific efforts that develop earth science data collections and data analysis tools serving the needs of individual communities. The ultimate vision of Geoinformatics is a highly interconnected data system populated with high quality, freely available data, as well as a robust set of software for analysis, visualization, and modeling. This volume is a collection of extended abstracts for oral papers presented at the Geoinformatics 2008 conference, June 11 and 13, 2008, in Potsdam, Germany.

  15. Reasoning with Atomic-Scale Molecular Dynamic Models

    ERIC Educational Resources Information Center

    Pallant, Amy; Tinker, Robert F.

    2004-01-01

    The studies reported in this paper are an initial effort to explore the applicability of computational models in introductory science learning. Two instructional interventions are described that use a molecular dynamics model embedded in a set of online learning activities with middle and high school students in 10 classrooms. The studies indicate…

  16. Pilot-in-the-Loop CFD Method Development

    DTIC Science & Technology

    2014-06-16

    CFD analysis. Coupled simulations will be run at PSU on the COCOA-4 cluster, a high performance computing cluster. The CRUNCH CFD software has...been installed on the COCOA-4 servers and initial software tests are being conducted. Initial efforts will use the Generic Frigate Shape SFS-2 to

  17. Defense Acquisitions: DOD Efforts to Adopt Open Systems for Its Unmanned Aircraft Systems Have Progressed Slowly

    DTIC Science & Technology

    2013-07-01

    applications introduced by third-party developers to connect to the Android operating system through an open software interface. This allows customers...Definition Multimedia Interface have been developed to address the need for standards for high-definition televisions and computer monitors. Perhaps

  18. Assistive Technology Developments in Puerto Rico.

    ERIC Educational Resources Information Center

    Lizama, Mauricio A.; Mendez, Hector L.

    Recent efforts to develop Spanish-based adaptations for alternate computer input devices are considered, as are their implications for Hispanics with disabilities and for the development of language sensitive devices worldwide. Emphasis is placed on the particular need to develop low-cost high technology devices for Puerto Rico and Latin America…

  19. Public-Private Consortium Aims to Cut Preclinical Cancer Drug Discovery from Six Years to Just One | Frederick National Laboratory for Cancer Research

    Cancer.gov

    Scientists from two U.S. national laboratories, industry, and academia today launched an unprecedented effort to transform the way cancer drugs are discovered by creating an open and sharable platform that integrates high-performance computing, share

  20. Speech Perception as a Cognitive Process: The Interactive Activation Model.

    ERIC Educational Resources Information Center

    Elman, Jeffrey L.; McClelland, James L.

    Research efforts to model speech perception in terms of a processing system in which knowledge and processing are distributed over large numbers of highly interactive--but computationally primitive--elements are described in this report. After discussing the properties of speech that demand a parallel interactive processing system, the report…

  1. A Survey of New Students in Spring 1997.

    ERIC Educational Resources Information Center

    Glyer-Culver, Betty M.

    In fall 1996, California's Los Rios Community College District (LRCCD) undertook several initiatives to increase enrollment, including the construction of state-of-the-art computer laboratories to meet demands for high tech training and the expansion of course offerings to non-traditional times. To evaluate the effectiveness of these efforts,…

  2. Techniques for Enhancing Web-Based Education.

    ERIC Educational Resources Information Center

    Barbieri, Kathy; Mehringer, Susan

    The Virtual Workshop is a World Wide Web-based set of modules on high performance computing developed at the Cornell Theory Center (CTC) (New York). This approach reaches a large audience, leverages staff effort, and poses challenges for developing interesting presentation techniques. This paper describes the following techniques with their…

  3. Comparing the Robustness of High-Frequency Traveling-Wave Tube Slow-Wave Circuits

    NASA Technical Reports Server (NTRS)

    Chevalier, Christine T.; Wilson, Jeffrey D.; Kory, Carol L.

    2007-01-01

    A three-dimensional electromagnetic field simulation software package was used to compute the cold-test parameters, phase velocity, on-axis interaction impedance, and attenuation, for several high-frequency traveling-wave tube slow-wave circuit geometries. This research effort determined the effects of variations in circuit dimensions on cold-test performance. The parameter variations were based on the tolerances of conventional micromachining techniques.

  4. Progress on the Fabric for Frontier Experiments Project at Fermilab

    NASA Astrophysics Data System (ADS)

    Box, Dennis; Boyd, Joseph; Dykstra, Dave; Garzoglio, Gabriele; Herner, Kenneth; Kirby, Michael; Kreymer, Arthur; Levshina, Tanya; Mhashilkar, Parag; Sharma, Neha

    2015-12-01

    The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access, and collaboration within experiments. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating several services into experiment computing operations, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.

  5. Multidisciplinary propulsion simulation using NPSS

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Evans, Austin L.; Follen, Gregory J.

    1992-01-01

    The current status of the Numerical Propulsion System Simulation (NPSS) program, a cooperative effort of NASA, industry, and universities to reduce the cost and time of advanced technology propulsion system development, is reviewed. The technologies required for this program include (1) interdisciplinary analysis to couple the relevant disciplines, such as aerodynamics, structures, heat transfer, combustion, acoustics, controls, and materials; (2) integrated systems analysis; (3) a high-performance computing platform, including massively parallel processing; and (4) a simulation environment providing a user-friendly interface. Several research efforts to develop these technologies are discussed.

  6. Turbocharged molecular discovery of OLED emitters: from high-throughput quantum simulation to highly efficient TADF devices

    NASA Astrophysics Data System (ADS)

    Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Ha, Dong-Gwang; Einzinger, Markus; Wu, Tony; Baldo, Marc A.; Aspuru-Guzik, Alán.

    2016-09-01

    Discovering new OLED emitters requires many experiments to synthesize candidates and test performance in devices. Large scale computer simulation can greatly speed this search process but the problem remains challenging enough that brute force application of massive computing power is not enough to successfully identify novel structures. We report a successful High Throughput Virtual Screening study that leveraged a range of methods to optimize the search process. The generation of candidate structures was constrained to contain combinatorial explosion. Simulations were tuned to the specific problem and calibrated with experimental results. Experimentalists and theorists actively collaborated such that experimental feedback was regularly utilized to update and shape the computational search. Supervised machine learning methods prioritized candidate structures prior to quantum chemistry simulation to prevent wasting compute on likely poor performers. With this combination of techniques, each multiplying the strength of the search, this effort managed to navigate an area of molecular space and identify hundreds of promising OLED candidate structures. An experimentally validated selection of this set shows emitters with external quantum efficiencies as high as 22%.
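
    The prioritization step described above can be conveyed with a small, self-contained sketch: train a cheap surrogate model on candidates that have already been simulated, then spend the expensive calculations only on the highest-ranked remainder. The descriptors, the stand-in "simulation", and all thresholds below are hypothetical and are not taken from the study.

      # Hypothetical sketch of surrogate-model prioritization before expensive simulation;
      # the molecular descriptors and the stand-in scoring function are invented for illustration.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)

      def expensive_simulation(x):
          # Stand-in for a quantum-chemistry calculation of an emitter property.
          return float(np.sin(x).sum() + 0.1 * rng.normal())

      candidates = rng.normal(size=(5000, 16))          # candidate library as descriptor vectors

      # Seed set: a small random sample we can afford to simulate fully.
      seed = rng.choice(len(candidates), size=200, replace=False)
      seed_scores = np.array([expensive_simulation(candidates[i]) for i in seed])

      # Supervised surrogate trained on the seed results ranks everything else.
      surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
      surrogate.fit(candidates[seed], seed_scores)

      remaining = np.setdiff1d(np.arange(len(candidates)), seed)
      ranked = remaining[np.argsort(surrogate.predict(candidates[remaining]))[::-1]]

      # Only the top-ranked fraction is sent to the expensive calculation.
      best = max(ranked[:100], key=lambda i: expensive_simulation(candidates[i]))
      print("best candidate index:", int(best))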

  7. Molecular Dynamics of Hot Dense Plasmas: New Horizons

    NASA Astrophysics Data System (ADS)

    Graziani, Frank

    2011-10-01

    We describe the status of a new time-dependent simulation capability for hot dense plasmas. The backbone of this multi-institutional computational and experimental effort--the Cimarron Project--is the massively parallel molecular dynamics (MD) code ``ddcMD''. The project's focus is material conditions such as exist in inertial confinement fusion experiments, and in many stellar interiors: high temperatures, high densities, significant electromagnetic fields, mixtures of high- and low-Z elements, and non-Maxwellian particle distributions. Of particular importance is our ability to incorporate into this classical MD code key atomic, radiative, and nuclear processes, so that their interacting effects under non-ideal plasma conditions can be investigated. This talk summarizes progress in computational methodology, discusses strengths and weaknesses of quantum statistical potentials as effective interactions for MD, explains the model used for quantum events possibly occurring in a collision and highlights some significant results obtained to date. This work is performed under the auspices of the U. S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  8. Design of a fault-tolerant reversible control unit in molecular quantum-dot cellular automata

    NASA Astrophysics Data System (ADS)

    Bahadori, Golnaz; Houshmand, Monireh; Zomorodi-Moghadam, Mariam

    Quantum-dot cellular automata (QCA) is a promising emerging nanotechnology that has been attracting considerable attention due to its small feature size, ultra-low power consumption, and high clock frequency. Therefore, there have been many efforts to design computational units based on this technology. Despite these advantages of QCA-based nanotechnologies, their implementation is susceptible to a high error rate. On the other hand, using reversible computing leads to zero bit erasures and no energy dissipation. Because reversible computation does not lose information, faults can be detected with high probability. In this paper, we first propose a fault-tolerant control unit using reversible gates that improves on the previous design. The proposed design is then synthesized to QCA technology and is simulated with the QCADesigner tool. Evaluation results demonstrate the performance of the proposed approach.
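
    A minimal sketch of the property the abstract relies on, assuming nothing about the paper's specific circuit: a computation is reversible when its input-to-output mapping is a bijection, which is what makes erasure-free operation and reliable fault detection possible. The Fredkin (controlled-swap) gate used below is a standard reversible primitive, not necessarily the one used in the proposed design.

      # Check reversibility of a Fredkin (controlled-swap) gate by verifying that the
      # mapping over all 2^3 input states is a bijection, i.e. no information is lost.
      from itertools import product

      def fredkin(c, a, b):
          # Swap a and b only when the control bit c is set.
          return (c, b, a) if c == 1 else (c, a, b)

      inputs = list(product([0, 1], repeat=3))
      outputs = [fredkin(*bits) for bits in inputs]

      assert len(set(outputs)) == len(inputs)   # distinct outputs => invertible => reversible
      print("Fredkin gate is reversible over all", len(inputs), "input states")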

  9. Computational analysis of forebody tangential slot blowing

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Agosta-Greenman, Roxana M.; Rizk, Yehia M.; Schiff, Lewis B.; Cummings, Russell M.

    1994-01-01

    An overview of the computational effort to analyze forebody tangential slot blowing is presented. Tangential slot blowing generates side force and yawing moment that may be used to control an aircraft flying at high angles of attack. Two different geometries are used in the analysis: (1) the High Alpha Research Vehicle and (2) a generic chined forebody. Computations using the isolated F/A-18 forebody are obtained at full-scale wind tunnel test conditions for direct comparison with available experimental data. The effects of over- and under-blowing on force and moment production are analyzed. Time-accurate solutions using the isolated forebody are obtained to study the force onset timelag of tangential slot blowing. Computations using the generic chined forebody are obtained at experimental wind tunnel conditions, and the results are compared with available experimental data. This computational analysis complements the experimental results and provides a detailed understanding of the effects of tangential slot blowing on the flow field about simple and complex geometries.

  10. Virtual reality neurosurgery: a simulator blueprint.

    PubMed

    Spicer, Mark A; van Velsen, Martin; Caffrey, John P; Apuzzo, Michael L J

    2004-04-01

    This article details preliminary studies undertaken to integrate the most relevant advancements across multiple disciplines in an effort to construct a highly realistic neurosurgical simulator based on a distributed computer architecture. Techniques based on modified computational modeling paradigms incorporating finite element analysis are presented, as are current and projected efforts directed toward the implementation of a novel bidirectional haptic device. Patient-specific data derived from noninvasive magnetic resonance imaging sequences are used to construct a computational model of the surgical region of interest. Magnetic resonance images of the brain may be coregistered with those obtained from magnetic resonance angiography, magnetic resonance venography, and diffusion tensor imaging to formulate models of varying anatomic complexity. The majority of the computational burden is encountered in the presimulation reduction of the computational model and allows realization of the required threshold rates for the accurate and realistic representation of real-time visual animations. Intracranial neurosurgical procedures offer an ideal testing site for the development of a totally immersive virtual reality surgical simulator when compared with the simulations required in other surgical subspecialties. The material properties of the brain as well as the typically small volumes of tissue exposed in the surgical field, coupled with techniques and strategies to minimize computational demands, provide unique opportunities for the development of such a simulator. Incorporation of real-time haptic and visual feedback is approached here and likely will be accomplished soon.

  11. Strategic directions of computing at Fermilab

    NASA Astrophysics Data System (ADS)

    Wolbers, Stephen

    1998-05-01

    Fermilab computing has changed a great deal over the years, driven by the demands of the Fermilab experimental community to record and analyze larger and larger datasets, by the desire to take advantage of advances in computing hardware and software, and by the advances coming from the R&D efforts of the Fermilab Computing Division. The strategic directions of Fermilab Computing continue to be driven by the needs of the experimental program. The current fixed-target run will produce over 100 TBytes of raw data and systems must be in place to allow the timely analysis of the data. The collider run II, beginning in 1999, is projected to produce of order 1 PByte of data per year. There will be a major change in methodology and software language as the experiments move away from FORTRAN and into object-oriented languages. Increased use of automation and the reduction of operator-assisted tape mounts will be required to meet the needs of the large experiments and large data sets. Work will continue on higher-rate data acquisition systems for future experiments and projects. R&D projects will be pursued as necessary to provide software, tools, or systems which cannot be purchased or acquired elsewhere. A closer working relation with other high energy laboratories will be pursued to reduce duplication of effort and to allow effective collaboration on many aspects of HEP computing.

  12. Temperature Distribution Within a Defect-Free Silicon Carbide Diode Predicted by a Computational Model

    NASA Technical Reports Server (NTRS)

    Kuczmarski, Maria A.; Neudeck, Philip G.

    2000-01-01

    Most solid-state electronic devices, including diodes, transistors, and integrated circuits, are based on silicon. Although this material works well for many applications, its properties limit its ability to function under extreme high-temperature or high-power operating conditions. Silicon carbide (SiC), with its desirable physical properties, could someday replace silicon for these types of applications. A major roadblock to realizing this potential is the quality of SiC material that can currently be produced. Semiconductors require very uniform, high-quality material, and commercially available SiC tends to suffer from defects in the crystalline structure that have largely been eliminated in silicon. In some power circuits, these defects can focus energy into an extremely small area, leading to overheating that can damage the device. In an effort to better understand the way that these defects affect the electrical performance and reliability of an SiC device in a power circuit, the NASA Glenn Research Center at Lewis Field began an in-house three-dimensional computational modeling effort. The goal is to predict the temperature distributions within a SiC diode structure subjected to the various transient overvoltage breakdown stresses that occur in power management circuits. A commercial computational fluid dynamics computer program (FLUENT; Fluent, Inc., Lebanon, New Hampshire) was used to build a model of a defect-free SiC diode and generate a computational mesh. A typical breakdown power density was applied over 0.5 msec in a heated layer at the junction between the p-type SiC and n-type SiC, and the temperature distribution throughout the diode was then calculated. The peak temperature extracted from the computational model agreed well (within 6 percent) with previous first-order calculations of the maximum expected temperature at the end of the breakdown pulse. This level of agreement is excellent for a model of this type and indicates that three-dimensional computational modeling can provide useful predictions for this class of problem. The model is now being extended to include the effects of crystal defects. The model will provide unique insights into how high the temperature rises in the vicinity of the defects in a diode at various power densities and pulse durations. This information will also help researchers understand and design SiC devices for safe and reliable operation in high-power circuits.
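
    A rough one-dimensional analogue of the calculation described above, offered only as an illustration: deposit a volumetric power density in a thin layer representing the junction for 0.5 msec and march the heat equation explicitly. The material constants are nominal SiC values, and the power density, layer thickness, and boundary conditions are hypothetical rather than those of the Glenn model.

      # Explicit finite-difference march of 1-D transient conduction with a heated layer.
      # Parameters are illustrative only; this is not the FLUENT model described above.
      import numpy as np

      k, rho, cp = 370.0, 3210.0, 690.0        # W/m-K, kg/m^3, J/kg-K (nominal SiC values)
      alpha = k / (rho * cp)
      nx, L = 200, 200e-6                      # 200 cells across a 200-micron stack
      dx = L / nx
      dt = 0.4 * dx * dx / alpha               # within the explicit stability limit
      q_vol = 1e13                             # W/m^3 deposited in the junction layer (hypothetical)
      layer = slice(95, 105)                   # cells standing in for the p-n junction region

      T = np.full(nx, 300.0)                   # start at room temperature
      t, pulse = 0.0, 0.5e-3
      while t < pulse:
          lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
          source = np.zeros(nx)
          source[layer] = q_vol / (rho * cp)
          T[1:-1] += dt * (alpha * lap + source[1:-1])
          T[0], T[-1] = 300.0, 300.0           # fixed-temperature boundaries
          t += dt
      print("peak temperature at the end of the pulse: %.0f K" % T.max())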

  13. Isolated Open Rotor Noise Prediction Assessment Using the F31A31 Historical Blade Set

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Jones, William T.; Boyd, D. Douglas, Jr.; Zawodny, Nikolas S.

    2016-01-01

    In an effort to mitigate next-generation fuel efficiency and environmental impact concerns for aviation, open rotor propulsion systems have received renewed interest. However, maintaining the high propulsive efficiency while simultaneously meeting noise goals has been one of the challenges in making open rotor propulsion a viable option. Improvements in prediction tools and design methodologies have opened the design space for next generation open rotor designs that satisfy these challenging objectives. As such, validation of aerodynamic and acoustic prediction tools has been an important aspect of open rotor research efforts. This paper describes validation efforts of a combined computational fluid dynamics and Ffowcs Williams and Hawkings equation methodology for open rotor aeroacoustic modeling. Performance and acoustic predictions were made for a benchmark open rotor blade set and compared with measurements over a range of rotor speeds and observer angles. Overall, the results indicate that the computational approach is acceptable for assessing low-noise open rotor designs. Additionally, this approach may be used to provide realistic incident source fields for acoustic shielding/scattering studies on various aircraft configurations.

  14. Biocellion: accelerating computer simulation of multicellular biological system models.

    PubMed

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information.
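
    The "fill in pre-defined model routines" idea can be conveyed with a generic sketch in which a framework owns the time loop and neighborhood search while the modeler supplies only the per-cell update rule. This illustrates the callback pattern only; it is not the Biocellion API, and the toy drift-toward-neighbors rule is invented.

      # Generic illustration of a framework-owned simulation loop with user-supplied
      # model routines; not the Biocellion interface.
      import numpy as np

      class ModelRoutines:
          # A modeler overrides these hooks; the framework below never changes.
          def init_cells(self, n, rng):
              return rng.random((n, 2))                              # random 2-D positions
          def update_cell(self, pos, neighbors):
              if len(neighbors) == 0:
                  return pos
              return pos + 0.05 * (neighbors.mean(axis=0) - pos)     # drift toward neighbors

      def run(model, n_cells=200, steps=50, radius=0.1, seed=0):
          rng = np.random.default_rng(seed)
          cells = model.init_cells(n_cells, rng)
          for _ in range(steps):                                     # framework-owned loop
              new = np.empty_like(cells)
              for i, pos in enumerate(cells):
                  d = np.linalg.norm(cells - pos, axis=1)
                  new[i] = model.update_cell(pos, cells[(d > 0) & (d < radius)])
              cells = new
          return cells

      print("final cell positions:", run(ModelRoutines()).shape)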

  15. TEMPEST: A three-dimensional time-dependent computer program for hydrothermal analysis: Volume 2, Assessment and verification results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eyler, L L; Trent, D S; Budden, M J

    During the course of the TEMPEST computer code development a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor application. 47 refs., 94 figs., 6 tabs.

  16. 42 CFR 441.182 - Maintenance of effort: Computation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES Inpatient Psychiatric Services for Individuals Under Age 21 in Psychiatric Facilities or Programs § 441.182 Maintenance of effort: Computation. (a) For expenditures for inpatient psychiatric services... total State Medicaid expenditures in the current quarter for inpatient psychiatric services and...

  17. An accurate binding interaction model in de novo computational protein design of interactions: if you build it, they will bind.

    PubMed

    London, Nir; Ambroggio, Xavier

    2014-02-01

    Computational protein design efforts aim to create novel proteins and functions in an automated manner and, in the process, these efforts shed light on the factors shaping natural proteins. The focus of these efforts has progressed from the interior of proteins to their surface and the design of functions, such as binding or catalysis. Here we examine progress in the development of robust methods for the computational design of non-natural interactions between proteins and molecular targets such as other proteins or small molecules. This problem is referred to as the de novo computational design of interactions. Recent successful efforts in de novo enzyme design and the de novo design of protein-protein interactions open a path towards solving this problem. We examine the common themes in these efforts, and review recent studies aimed at understanding the nature of successes and failures in the de novo computational design of interactions. While several approaches culminated in success, the use of a well-defined structural model for a specific binding interaction in particular has emerged as a key strategy for a successful design, and is therefore reviewed with special consideration.

  18. Python in the NERSC Exascale Science Applications Program for Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronaghi, Zahra; Thomas, Rollin; Deslippe, Jack

    We describe a new effort at the National Energy Research Scientific Computing Center (NERSC) in performance analysis and optimization of scientific Python applications targeting the Intel Xeon Phi (Knights Landing, KNL) many-core architecture. The Python-centered work outlined here is part of a larger effort called the NERSC Exascale Science Applications Program (NESAP) for Data. NESAP for Data focuses on applications that process and analyze high-volume, high-velocity data sets from experimental/observational science (EOS) facilities supported by the US Department of Energy Office of Science. We present three case study applications from NESAP for Data that use Python. These codes vary in terms of “Python purity” from applications developed in pure Python to ones that use Python mainly as a convenience layer for scientists without expertise in lower level programming languages like C, C++, or Fortran. The science case, requirements, constraints, algorithms, and initial performance optimizations for each code are discussed. Our goal with this paper is to contribute to the larger conversation around the role of Python in high-performance computing today and tomorrow, highlighting areas for future work and emerging best practices.

  19. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1994-01-01

    Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high-altitude rocket plumes, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp nozzle (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock induced combustion phenomena, high enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.

  20. Probabilistic brains: knowns and unknowns

    PubMed Central

    Pouget, Alexandre; Beck, Jeffrey M; Ma, Wei Ji; Latham, Peter E

    2015-01-01

    There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks. Here we discuss the challenges that will emerge as researchers start focusing their efforts on real-life computations, with a focus on probabilistic learning, structural learning and approximate inference. PMID:23955561

  1. Unsteady Three-Dimensional Simulation of a Shear Coaxial GO2/GH2 Rocket Injector with RANS and Hybrid-RANS-LES/DES Using Flamelet Models

    NASA Technical Reports Server (NTRS)

    Westra, Doug G.; West, Jeffrey S.; Richardson, Brian R.

    2015-01-01

    Historically, the analysis and design of liquid rocket engines (LREs) has relied on full-scale testing and one-dimensional empirical tools. The testing is extremely expensive, and the one-dimensional tools are not designed to capture the highly complex, multi-dimensional features that are inherent to LREs. Recent advances in computational fluid dynamics (CFD) tools have made it possible to predict liquid rocket engine performance and stability, to assess the effect of complex flow features, and to evaluate injector-driven thermal environments, mitigating the cost of testing. Extensive efforts to verify and validate these CFD tools have been conducted to provide confidence for using them during the design cycle. Previous validation efforts have documented comparisons of predicted heat flux thermal environments with test data for a single element gaseous oxygen (GO2) and gaseous hydrogen (GH2) injector. The most notable was a comprehensive validation effort conducted by Tucker et al. [1], in which a number of different groups modeled a GO2/GH2 single element configuration by Pal et al. [2]. The tools used for this validation comparison employed a range of algorithms, from steady and unsteady Reynolds-Averaged Navier-Stokes (U/RANS) calculations to large-eddy simulations (LES), detached-eddy simulations (DES), and various combinations. A more recent effort by Thakur et al. [3] focused on using a state-of-the-art CFD simulation tool, Loci/STREAM, on a two-dimensional grid. Loci/STREAM was chosen because it has a unique, very efficient flamelet parameterization of combustion reactions that are too computationally expensive to simulate with conventional finite-rate chemistry calculations. The current effort focuses on further advancement of validation efforts, again using the Loci/STREAM tool with the flamelet parameterization, but this time with a three-dimensional grid. Comparisons to the Pal et al. heat flux data will be made for both RANS and hybrid RANS-LES/detached-eddy simulations (DES). Computational costs will be reported, along with a comparison of accuracy and cost against much less expensive two-dimensional RANS simulations of the same geometry.

  2. Attitudes and gender differences of high school seniors within one-to-one computing environments in South Dakota

    NASA Astrophysics Data System (ADS)

    Nelson, Mathew

    In today's age of exponential change and technological advancement, awareness of any gender gap in technology and computer science-related fields is crucial, but further research must be done in an effort to better understand the complex interacting factors contributing to the gender gap. This study utilized a survey to investigate specific gender differences relating to computing self-efficacy, computer usage, and environmental factors of exposure, personal interests, and parental influence that impact gender differences of high school students within a one-to-one computing environment in South Dakota. The population who completed the One-to-One High School Computing Survey for this study consisted of South Dakota high school seniors who had been involved in a one-to-one computing environment for two or more years. The data from the survey were analyzed using descriptive and inferential statistics for the determined variables. From the review of literature and data analysis several conclusions were drawn from the findings. Among them are that overall, there was very little difference in perceived computing self-efficacy and computing anxiety between male and female students within the one-to-one computing initiative. The study supported the current research that males and females utilized computers similarly, but males spent more time using their computers to play online games. Early exposure to computers, or the age at which the student was first exposed to a computer, and the number of computers present in the home (computer ownership) impacted computing self-efficacy. The results also indicated parental encouragement to work with computers also contributed positively to both male and female students' computing self-efficacy. Finally the study also found that both mothers and fathers encouraged their male children more than their female children to work with computing and pursue careers in computing science fields.

  3. Framework to model neutral particle flux in convex high aspect ratio structures using one-dimensional radiosity

    NASA Astrophysics Data System (ADS)

    Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried

    2017-02-01

    We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort, required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution, conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures as well as of the particle sticking probability on the neutral particle flux.
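
    The idea of replacing full ray tracing with a one-dimensional radiosity solve can be sketched for the simplest case: a two-dimensional trench of constant cross-section with a diffuse source at the opening, the trench bottom ignored, and a single sticking probability. The geometry, discretization, and normalization below are illustrative and do not reproduce the paper's framework.

      # One-dimensional radiosity sketch for the neutral flux on the walls of a 2-D trench.
      # Opposite-wall strip view factors and an opening view factor drive a linear solve.
      import numpy as np

      w, H, n = 1.0, 20.0, 400            # trench width, depth (aspect ratio 20), wall segments
      stick = 0.8                         # sticking probability; the rest is re-emitted diffusely
      z = (np.arange(n) + 0.5) * H / n
      dz = H / n

      # Differential strip-to-strip view factor between opposite walls of a 2-D trench.
      K = 0.5 * w**2 / (w**2 + (z[:, None] - z[None, :])**2) ** 1.5 * dz

      # View factor from a wall strip at depth z to the opening (the direct source term).
      F_open = 0.5 * (1.0 - z / np.sqrt(z**2 + w**2))

      # Incident flux G satisfies G = F_open + (1 - stick) * K @ G (unit source flux).
      G = np.linalg.solve(np.eye(n) - (1.0 - stick) * K, F_open)
      print("wall flux near the opening, at mid-depth, near the bottom:",
            G[0], G[n // 2], G[-1])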

  4. Irregular Applications: Architectures & Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feo, John T.; Villa, Oreste; Tumeo, Antonino

    Irregular applications are characterized by irregular data structures, control flow, and communication patterns. Novel irregular high performance applications that deal with large data sets have recently appeared. Unfortunately, current high performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, area specialists, and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.

  5. Strategy for reflector pattern calculation - Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S.-W.; Hung, C. C.; Acosta, R.

    1986-01-01

    Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector geometry dependent. Hence, different analyses and computer software development must be carried out for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with the brute-force FFT described in this note. Furthermore, there is virtually no restriction on the reflector geometry when using the brute-force FFT.
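
    The brute-force approach can be illustrated with a toy case: sample an aperture field on a grid, zero-pad, and let a two-dimensional FFT produce the secondary pattern. The uniformly illuminated circular aperture below is a stand-in; the actual reflector-induced aperture field and coordinate scaling of I(u,v) are not modeled, and the expected -17.6 dB first sidelobe is used only as a sanity check.

      # Brute-force FFT evaluation of an aperture radiation integral, illustrated with a
      # uniformly illuminated circular aperture (the reflector-specific field is not modeled).
      import numpy as np

      n, pad = 128, 1024                               # aperture samples and zero-padded FFT size
      x = np.linspace(-1.0, 1.0, n)
      X, Y = np.meshgrid(x, x)
      aperture = (X**2 + Y**2 <= 1.0).astype(float)    # uniform field inside the rim

      # Zero-padding the FFT refines the angular sampling of the secondary pattern I(u,v).
      pattern = np.fft.fftshift(np.fft.fft2(aperture, s=(pad, pad)))
      cut = np.abs(pattern[pad // 2, pad // 2:])       # principal-plane cut from boresight

      # First sidelobe of a uniform circular aperture should sit near -17.6 dB.
      null = next(i for i in range(1, len(cut) - 1) if cut[i] < cut[i - 1] and cut[i] <= cut[i + 1])
      sidelobe_db = 20.0 * np.log10(cut[null:null + 200].max() / cut[0])
      print("first sidelobe level: %.1f dB" % sidelobe_db)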

  6. Strategy for reflector pattern calculation: Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S. W.; Hung, C. C.; Acousta, R.

    1985-01-01

    Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector geometry dependent. Hence, different analyses and computer software development must be carried out for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with the brute-force FFT described in this note. Furthermore, there is virtually no restriction on the reflector geometry when using the brute-force FFT.

  7. Energy Drain by Computers Stifles Efforts at Cost Control

    ERIC Educational Resources Information Center

    Keller, Josh

    2009-01-01

    The high price of storing and processing data is hurting colleges and universities across the country. In response, some institutions are embracing greener technologies to keep costs down and help the environment. But compared with other industries, colleges and universities have been slow to understand the problem and to adopt energy-saving…

  8. Musings on the Internet, Part 2

    ERIC Educational Resources Information Center

    Cerf, Vinton G.

    2004-01-01

    In this article, the author discusses the role of higher education research and development (R&D)--particularly R&D into the issues and problems that industry is less able to explore. In addition to high-speed computer communication, broadband networking efforts, and the use of fiber, a rich service environment is equally important and is…

  9. IDEAL: Images Across Domains, Experiments, Algorithms and Learning

    NASA Astrophysics Data System (ADS)

    Ushizima, Daniela M.; Bale, Hrishikesh A.; Bethel, E. Wes; Ercius, Peter; Helms, Brett A.; Krishnan, Harinarayan; Grinberg, Lea T.; Haranczyk, Maciej; Macdowell, Alastair A.; Odziomek, Katarzyna; Parkinson, Dilworth Y.; Perciano, Talita; Ritchie, Robert O.; Yang, Chao

    2016-11-01

    Research across science domains is increasingly reliant on image-centric data. Software tools are in high demand to uncover relevant, but hidden, information in digital images, such as those coming from faster next generation high-throughput imaging platforms. The challenge is to analyze the data torrent generated by the advanced instruments efficiently, and provide insights such as measurements for decision-making. In this paper, we overview work performed by an interdisciplinary team of computational and materials scientists, aimed at designing software applications and coordinating research efforts connecting (1) emerging algorithms for dealing with large and complex datasets; (2) data analysis methods with emphasis in pattern recognition and machine learning; and (3) advances in evolving computer architectures. Engineering tools around these efforts accelerate the analyses of image-based recordings, improve reusability and reproducibility, scale scientific procedures by reducing time between experiments, increase efficiency, and open opportunities for more users of the imaging facilities. This paper describes our algorithms and software tools, showing results across image scales, demonstrating how our framework plays a role in improving image understanding for quality control of existent materials and discovery of new compounds.

  10. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution.

    PubMed

    Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn

    2013-03-06

    Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
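
    A toy stepping-stone calculation can make the mechanics concrete, assuming a conjugate normal model so the exact log marginal likelihood is available for comparison. The number of power-posterior steps, the beta schedule, and the sample sizes below are arbitrary illustrations rather than the settings recommended in the paper.

      # Stepping-stone estimate of a log marginal likelihood for a conjugate normal model,
      # compared against the exact analytic value. All settings are illustrative.
      import numpy as np
      from scipy.special import logsumexp

      rng = np.random.default_rng(1)
      sigma, tau, n = 1.0, 2.0, 30                     # known noise sd, prior sd, sample size
      y = rng.normal(0.7, sigma, size=n)

      def log_lik(theta):                              # log p(y | theta) for a vector of draws
          return (-0.5 * n * np.log(2.0 * np.pi * sigma**2)
                  - ((y[None, :] - theta[:, None]) ** 2).sum(axis=1) / (2.0 * sigma**2))

      K, m = 50, 2000
      betas = (np.arange(K + 1) / K) ** (1.0 / 0.3)    # crowd the steps near the prior

      log_ml = 0.0
      for k in range(K):
          # The power posterior prior * likelihood^beta is still normal for this conjugate
          # model, so it can be sampled exactly; in general this step would use MCMC.
          prec = 1.0 / tau**2 + betas[k] * n / sigma**2
          mean = (betas[k] * y.sum() / sigma**2) / prec
          theta = rng.normal(mean, np.sqrt(1.0 / prec), size=m)
          log_ml += logsumexp((betas[k + 1] - betas[k]) * log_lik(theta)) - np.log(m)

      C = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
      exact = -0.5 * (n * np.log(2.0 * np.pi) + np.linalg.slogdet(C)[1] + y @ np.linalg.solve(C, y))
      print("stepping-stone estimate: %.3f   exact log marginal likelihood: %.3f" % (log_ml, exact))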

  11. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution

    PubMed Central

    2013-01-01

    Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171

  12. Fast and accurate genotype imputation in genome-wide association studies through pre-phasing

    PubMed Central

    Howie, Bryan; Fuchsberger, Christian; Stephens, Matthew; Marchini, Jonathan; Abecasis, Gonçalo R.

    2013-01-01

    Sequencing efforts, including the 1000 Genomes Project and disease-specific efforts, are producing large collections of haplotypes that can be used for genotype imputation in genome-wide association studies (GWAS). Imputing from these reference panels can help identify new risk alleles, but the use of large panels with existing methods imposes a high computational burden. To keep imputation broadly accessible, we introduce a strategy called “pre-phasing” that maintains the accuracy of leading methods while cutting computational costs by orders of magnitude. In brief, we first statistically estimate the haplotypes for each GWAS individual (“pre-phasing”) and then impute missing genotypes into these estimated haplotypes. This reduces the computational cost because: (i) the GWAS samples must be phased only once, whereas standard methods would implicitly re-phase with each reference panel update; (ii) it is much faster to match a phased GWAS haplotype to one reference haplotype than to match unphased GWAS genotypes to a pair of reference haplotypes. This strategy will be particularly valuable for repeated imputation as reference panels evolve. PMID:22820512
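
    A deliberately oversimplified sketch of point (ii): once a study haplotype is phased, filling its untyped sites can be as simple as matching it against single reference haplotypes at the typed sites, whereas unphased genotypes would have to be matched against pairs of haplotypes. Real imputation uses haplotype hidden Markov models; the panel sizes and error counts below are invented.

      # Toy imputation into a pre-phased haplotype by copying alleles from the closest
      # reference haplotype at the typed sites. Illustrative only; not a real imputation method.
      import numpy as np

      rng = np.random.default_rng(2)
      n_ref, n_sites = 200, 60
      reference = rng.integers(0, 2, size=(n_ref, n_sites))    # phased reference haplotypes

      truth = reference[rng.integers(n_ref)].copy()            # study haplotype, already phased
      truth[rng.choice(n_sites, 3, replace=False)] ^= 1        # a few sites differ from the panel
      typed = rng.random(n_sites) < 0.5                        # only half the sites were genotyped

      # Match on typed sites only, then fill the untyped sites from the best single match.
      mismatches = (reference[:, typed] != truth[typed]).sum(axis=1)
      best = reference[np.argmin(mismatches)]
      imputed = np.where(typed, truth, best)

      print("accuracy at untyped sites:", (imputed[~typed] == truth[~typed]).mean())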

  13. Survey of the supporting research and technology for the thermal protection of the Galileo Probe

    NASA Technical Reports Server (NTRS)

    Howe, J. T.; Pitts, W. C.; Lundell, J. H.

    1981-01-01

    The Galileo Probe, which is scheduled to be launched in 1985 and to enter the hydrogen-helium atmosphere of Jupiter up to 1,475 days later, presents thermal protection problems that are far more difficult than those experienced in previous planetary entry missions. The high entry speed of the Probe will cause forebody heating rates orders of magnitude greater than those encountered in the Apollo and Pioneer Venus missions, severe afterbody heating from base-flow radiation, and thermochemical ablation rates for carbon phenolic that rival the free-stream mass flux. This paper presents a comprehensive survey of the experimental work and computational research that provide technological support for the Probe's heat-shield design effort. The survey includes atmospheric modeling; both approximate and first-principle computations of flow fields and heat-shield material response; base heating; turbulence modelling; new computational techniques; experimental heating and materials studies; code validation efforts; and a set of 'consensus' first-principle flow-field solutions through the entry maneuver, with predictions of the corresponding thermal protection requirements.

  14. Computational psychiatry

    PubMed Central

    Montague, P. Read; Dolan, Raymond J.; Friston, Karl J.; Dayan, Peter

    2013-01-01

    Computational ideas pervade many areas of science and have an integrative explanatory role in neuroscience and cognitive science. However, computational depictions of cognitive function have had surprisingly little impact on the way we assess mental illness because diseases of the mind have not been systematically conceptualized in computational terms. Here, we outline goals and nascent efforts in the new field of computational psychiatry, which seeks to characterize mental dysfunction in terms of aberrant computations over multiple scales. We highlight early efforts in this area that employ reinforcement learning and game theoretic frameworks to elucidate decision-making in health and disease. Looking forwards, we emphasize a need for theory development and large-scale computational phenotyping in human subjects. PMID:22177032

  15. Automated Design of Complex Dynamic Systems

    PubMed Central

    Hermans, Michiel; Schrauwen, Benjamin; Bienstman, Peter; Dambre, Joni

    2014-01-01

    Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require minimal control effort. Another example is found in the domain of photonics, where recent efforts try to benefit directly from the complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computations within the physical system itself by exploiting its inherent non-linear dynamics. This, however, often requires the optimization of large numbers of system parameters, related to both the system's structure and its material properties. In addition, many of these parameters are subject to fabrication variability or to variations through time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied to abstract computational entities, can be extended to the field of differential equations and used to optimize an associated set of parameters that determine their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of both machine learning and optimal control have a largely unexplored overlapping area which envelops a novel design methodology of smart and highly complex physical systems. PMID:24497969
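
    A minimal sketch of the optimization idea, assuming a simple damped oscillator and a derivative-free optimizer rather than the richer systems and training methods used in the paper: simulate the differential equation, score the trajectory against a target response, and adjust the physical parameters to reduce the error.

      # Tune the stiffness and damping of a simulated oscillator to match a target trajectory.
      # The system, target, and optimizer are illustrative stand-ins.
      import numpy as np
      from scipy.optimize import minimize

      dt, steps = 0.01, 500

      def simulate(params):
          k, c = params                            # stiffness and damping to be designed
          x, v = 1.0, 0.0
          traj = np.empty(steps)
          for i in range(steps):                   # explicit Euler for x'' = -k x - c x'
              traj[i] = x
              a = -k * x - c * v
              x, v = x + dt * v, v + dt * a
          return traj

      target = simulate((4.0, 0.5))                # desired behavior (hypothetical specification)

      def loss(params):
          return float(np.mean((simulate(params) - target) ** 2))

      result = minimize(loss, x0=np.array([3.0, 1.0]), method="Nelder-Mead")
      print("fitted stiffness and damping:", np.round(result.x, 3))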

  16. ``But it doesn't come naturally'': how effort expenditure shapes the benefit of growth mindset on women's sense of intellectual belonging in computing

    NASA Astrophysics Data System (ADS)

    Stout, Jane G.; Blaney, Jennifer M.

    2017-10-01

    Research suggests growth mindset, or the belief that knowledge is acquired through effort, may enhance women's sense of belonging in male-dominated disciplines, like computing. However, other research indicates women who spend a great deal of time and energy in technical fields experience a low sense of belonging. The current study assessed the benefits of a growth mindset on women's (and men's) sense of intellectual belonging in computing, accounting for the amount of time and effort dedicated to academics. We define "intellectual belonging" as the sense that one is believed to be a competent member of the community. Whereas a stronger growth mindset was associated with stronger intellectual belonging for men, a growth mindset only boosted women's intellectual belonging when they did not work hard on academics. Our findings suggest, paradoxically, women may not benefit from a growth mindset in computing when they exert a lot of effort.

  17. Predicting Motivation: Computational Models of PFC Can Explain Neural Coding of Motivation and Effort-based Decision-making in Health and Disease.

    PubMed

    Vassena, Eliana; Deraeve, James; Alexander, William H

    2017-10-01

    Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO model based on hierarchical error prediction, developed to explain MPFC-DLPFC interactions. We derive behavioral predictions that describe how effort and reward information is coded in PFC and how changing the configuration of such environmental information might affect decision-making and task performance involving motivation.
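
    The prediction and prediction-error logic that these frameworks build on can be reduced to a delta-rule caricature, shown below with two cues that predict different effort levels. The actual PRO and HER models are substantially richer; the cue structure, outcomes, and learning rate here are invented.

      # Delta-rule caricature of learning effort and reward predictions from cues and
      # computing prediction errors; not the PRO or HER model itself.
      import numpy as np

      rng = np.random.default_rng(3)
      alpha = 0.1                                              # learning rate
      predictions = {"easy_cue": np.zeros(2), "hard_cue": np.zeros(2)}   # [effort, reward]
      outcomes = {"easy_cue": np.array([0.2, 1.0]), "hard_cue": np.array([0.9, 1.0])}

      for trial in range(500):
          cue = "easy_cue" if rng.random() < 0.5 else "hard_cue"
          outcome = outcomes[cue] + rng.normal(0.0, 0.05, size=2)
          error = outcome - predictions[cue]                   # prediction error drives learning
          predictions[cue] += alpha * error

      print({cue: p.round(2) for cue, p in predictions.items()})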

  18. Status of Computational Aerodynamic Modeling Tools for Aircraft Loss-of-Control

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.; Murphy, Patrick C.; Atkins, Harold L.; Viken, Sally A.; Petrilli, Justin L.; Gopalarathnam, Ashok; Paul, Ryan C.

    2016-01-01

    A concerted effort has been underway over the past several years to evolve computational capabilities for modeling aircraft loss-of-control under the NASA Aviation Safety Program. A principal goal has been to develop reliable computational tools for predicting and analyzing the non-linear stability & control characteristics of aircraft near stall boundaries affecting safe flight, and for utilizing those predictions for creating augmented flight simulation models that improve pilot training. Pursuing such an ambitious task with limited resources required the forging of close collaborative relationships with a diverse body of computational aerodynamicists and flight simulation experts to leverage their respective research efforts into the creation of NASA tools to meet this goal. Considerable progress has been made and work remains to be done. This paper summarizes the status of the NASA effort to establish computational capabilities for modeling aircraft loss-of-control and offers recommendations for future work.

  19. Intelligent Manufacturing of Commercial Optics Final Report CRADA No. TC-0313-92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J. S.; Pollicove, H.

    The project combined the research and development efforts of LLNL and the University of Rochester Center for Optics Manufacturing (COM) to develop a new generation of flexible computer controlled optics grinding machines. COM's principal near term development effort is to commercialize the OPTICAM-SM, a new prototype spherical grinding machine. A crucial requirement for commercializing the OPTICAM-SM is the development of a predictable and repeatable material removal process (deterministic micro-grinding) that yields high quality surfaces and minimizes the need for non-deterministic polishing. OPTICAM machine tools and the fabrication process development studies are part of COM's response to the DOD (ARPA) request to implement a modernization strategy for revitalizing the U.S. optics manufacturing base. This project was entered into in order to develop a new generation of flexible, computer-controlled optics grinding machines.

  20. A Synthesis of Hybrid RANS/LES CFD Results for F-16XL Aircraft Aerodynamics

    NASA Technical Reports Server (NTRS)

    Luckring, James M.; Park, Michael A.; Hitzel, Stephan M.; Jirasek, Adam; Lofthouse, Andrew J.; Morton, Scott A.; McDaniel, David R.; Rizzi, Arthur M.

    2015-01-01

    A synthesis is presented of recent numerical predictions for the F-16XL aircraft flow fields and aerodynamics. The computational results were all performed with hybrid RANS/LES formulations, with an emphasis on unsteady flows and subsequent aerodynamics, and results from five computational methods are included. The work was focused on one particular low-speed, high angle-of-attack flight test condition, and comparisons against flight-test data are included. This work represents the third coordinated effort using the F-16XL aircraft, and a unique flight-test data set, to advance our knowledge of slender airframe aerodynamics as well as our capability for predicting these aerodynamics with advanced CFD formulations. The prior efforts were identified as Cranked Arrow Wing Aerodynamics Project International, with the acronyms CAWAPI and CAWAPI-2. All information in this paper is in the public domain.

  1. An ultra-compact processor module based on the R3000

    NASA Astrophysics Data System (ADS)

    Mullenhoff, D. J.; Kaschmitter, J. L.; Lyke, J. C.; Forman, G. A.

    1992-08-01

    Viable high-density packaging is of critical importance for future military systems, particularly spaceborne systems, which require minimum weight and size and high mechanical integrity. A leading emerging technology for high-density packaging is the multi-chip module (MCM). During the 1980s, a number of different MCM technologies emerged. In support of Strategic Defense Initiative Organization (SDIO) programs, Lawrence Livermore National Laboratory (LLNL) has developed, utilized, and evaluated several different MCM technologies. Prior LLNL efforts include modules developed in 1986 using hybrid wafer-scale packaging, which are still operational in an Air Force satellite mission. More recent efforts have included very high density cache memory modules developed using laser pantography. As part of the demonstration effort, LLNL and Phillips Laboratory began collaborating in 1990 on the Phase 3 Multi-Chip Module (MCM) technology demonstration project. The goal of this program was to demonstrate the feasibility of General Electric's (GE) High Density Interconnect (HDI) MCM technology. The design chosen for this demonstration was the processor core for a MIPS R3000-based reduced instruction set computer (RISC), which has been described previously. It consists of the R3000 microprocessor, the R3010 floating-point coprocessor, and 128 Kbytes of cache memory.

  2. Transferable Calibration Standard Developed for Quantitative Raman Scattering Diagnostics in High-Pressure Flames

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang-Viet; Kojima, Jun

    2005-01-01

    Researchers from NASA Glenn Research Center's Combustion Branch and the Ohio Aerospace Institute (OAI) have developed a transferable calibration standard for an optical technique called spontaneous Raman scattering (SRS) in high-pressure flames. SRS is perhaps the only technique that provides spatially and temporally resolved, simultaneous multiscalar measurements in turbulent flames. Such measurements are critical for the validation of numerical models of combustion. This study has been a combined experimental and theoretical effort to develop a spectral calibration database for multiscalar diagnostics using SRS in high-pressure flames. In the past, such measurements have relied on a one-of-a-kind experimental setup and a setup-dependent calibration procedure to empirically account for spectral interferences, or crosstalk, among the major species of interest. Such calibration procedures, being non-transferable, are prohibitively expensive to duplicate. A goal of this effort is to provide an SRS calibration database using transferable standards that can be implemented widely by other researchers for both atmospheric-pressure and high-pressure (less than 30 atm) SRS studies. A secondary goal of this effort is to provide quantitative multiscalar diagnostics in high-pressure environments to validate computational combustion codes.

  3. Detonation Product EOS Studies: Using ISLS to Refine Cheetah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaug, J M; Howard, W M; Fried, L E

    2001-08-08

    Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition, the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Computational models are systematically improved with each addition of experimental data.

  4. Trust and Reciprocity: Are Effort and Money Equivalent?

    PubMed Central

    Vilares, Iris; Dam, Gregory; Kording, Konrad

    2011-01-01

    Trust and reciprocity facilitate cooperation and are relevant to virtually all human interactions. They are typically studied using trust games: one subject gives (entrusts) money to another subject, who may return some of the proceeds (reciprocate). Currently, however, it is unclear whether trust and reciprocity in monetary transactions are similar in other settings, such as physical effort. Trust and reciprocity of physical effort are important, as many everyday decisions imply an exchange of physical effort, and such exchange is central to labor relations. Here we studied a trust game based on physical effort and compared the results with those of a computationally equivalent monetary trust game. We found no significant difference between the effort and money conditions in either the amount trusted or the quantity reciprocated. Moreover, there is a high positive correlation in subjects' behavior across conditions. This suggests that trust and reciprocity may be character traits: subjects who are trustful/trustworthy in monetary settings behave similarly during exchanges of physical effort. Our results validate the use of trust games to study exchanges in physical effort and to characterize inter-subject differences in trust and reciprocity, and also suggest a new behavioral paradigm to study these differences. PMID:21364931
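
    A minimal sketch of the kind of cross-condition comparison described above; the per-subject arrays and values below are invented for illustration and are not the study's data.

    ```python
    import numpy as np

    # Hypothetical per-subject data: amount entrusted in the money and
    # effort conditions (arbitrary units), one entry per subject.
    trust_money  = np.array([5.0, 8.0, 2.0, 9.0, 6.0, 4.0, 7.0, 3.0])
    trust_effort = np.array([4.5, 7.5, 2.5, 8.5, 6.5, 3.5, 7.0, 3.0])

    # Within-subject consistency across conditions: a high positive
    # correlation suggests trust behaves like a stable character trait.
    r = np.corrcoef(trust_money, trust_effort)[0, 1]

    # Difference in mean trusted amount between the two conditions.
    mean_diff = trust_money.mean() - trust_effort.mean()

    print(f"cross-condition correlation r = {r:.2f}")
    print(f"mean(money) - mean(effort)   = {mean_diff:.2f}")
    ```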

  5. In silico discovery of metal-organic frameworks for precombustion CO2 capture using a genetic algorithm

    PubMed Central

    Chung, Yongchul G.; Gómez-Gualdrón, Diego A.; Li, Peng; Leperi, Karson T.; Deria, Pravas; Zhang, Hongda; Vermeulen, Nicolaas A.; Stoddart, J. Fraser; You, Fengqi; Hupp, Joseph T.; Farha, Omar K.; Snurr, Randall Q.

    2016-01-01

    Discovery of new adsorbent materials with a high CO2 working capacity could help reduce CO2 emissions from newly commissioned power plants using precombustion carbon capture. High-throughput computational screening efforts can accelerate the discovery of new adsorbents but sometimes require significant computational resources to explore the large space of possible materials. We report the in silico discovery of high-performing adsorbents for precombustion CO2 capture by applying a genetic algorithm to efficiently search a large database of metal-organic frameworks (MOFs) for top candidates. High-performing MOFs identified from the in silico search were synthesized and activated and show a high CO2 working capacity and a high CO2/H2 selectivity. One of the synthesized MOFs shows a higher CO2 working capacity than any MOF reported in the literature under the operating conditions investigated here. PMID:27757420
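
    The genetic-algorithm search strategy can be sketched as follows. This is a generic GA skeleton under assumed encodings: the building-block counts, the surrogate fitness function, and the selection scheme are placeholders for illustration, not the framework used in the paper.

    ```python
    import random

    # Hypothetical encoding: a MOF candidate is a tuple of building-block
    # indices (topology, metal node, two organic linkers).
    N_TOPOLOGIES, N_NODES, N_LINKERS = 10, 20, 50

    def random_candidate():
        return (random.randrange(N_TOPOLOGIES), random.randrange(N_NODES),
                random.randrange(N_LINKERS), random.randrange(N_LINKERS))

    def fitness(mof):
        # Placeholder for the expensive step: in a real workflow this would be
        # a molecular simulation estimating the CO2 working capacity.
        return -abs(mof[1] - 12) - 0.1 * abs(mof[2] - 30)

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(mof):
        mof = list(mof)
        limits = (N_TOPOLOGIES, N_NODES, N_LINKERS, N_LINKERS)
        i = random.randrange(len(mof))
        mof[i] = random.randrange(limits[i])
        return tuple(mof)

    population = [random_candidate() for _ in range(50)]
    for generation in range(20):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                     # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(40)]
        population = parents + children               # elitism + offspring

    print("best candidate found:", max(population, key=fitness))
    ```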

  6. Fulfilling the promise of the materials genome initiative with high-throughput experimental methodologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, Martin L.; Choi, C. L.; Hattrick-Simpers, J. R.

    The Materials Genome Initiative, a national effort to introduce new materials into the market faster and at lower cost, has made significant progress in computational simulation and modeling of materials. To build on this progress, a large amount of experimental data for validating these models, and informing more sophisticated ones, will be required. High-throughput experimentation generates large volumes of experimental data using combinatorial materials synthesis and rapid measurement techniques, making it an ideal experimental complement to bring the Materials Genome Initiative vision to fruition. This paper reviews the state-of-the-art results, opportunities, and challenges in high-throughput experimentation for materials design. As a result, a major conclusion is that an effort to deploy a federated network of high-throughput experimental (synthesis and characterization) tools, which are integrated with a modern materials data infrastructure, is needed.

  7. Fulfilling the promise of the materials genome initiative with high-throughput experimental methodologies

    DOE PAGES

    Green, Martin L.; Choi, C. L.; Hattrick-Simpers, J. R.; ...

    2017-03-28

    The Materials Genome Initiative, a national effort to introduce new materials into the market faster and at lower cost, has made significant progress in computational simulation and modeling of materials. To build on this progress, a large amount of experimental data for validating these models, and informing more sophisticated ones, will be required. High-throughput experimentation generates large volumes of experimental data using combinatorial materials synthesis and rapid measurement techniques, making it an ideal experimental complement to bring the Materials Genome Initiative vision to fruition. This paper reviews the state-of-the-art results, opportunities, and challenges in high-throughput experimentation for materials design. As a result, a major conclusion is that an effort to deploy a federated network of high-throughput experimental (synthesis and characterization) tools, which are integrated with a modern materials data infrastructure, is needed.

  8. Duct flow nonuniformities study for space shuttle main engine

    NASA Technical Reports Server (NTRS)

    Thoenes, J.

    1985-01-01

    To improve the Space Shuttle Main Engine (SSME) design and for future use in the development of next-generation rocket engines, a combined experimental/analytical study was undertaken with the goals of, first, establishing an experimental database for the flow conditions in the SSME high-pressure fuel turbopump (HPFTP) hot gas manifold (HGM) and, second, setting up a computer model of the SSME HGM flow field. By using the test data to verify the computer model, it should be possible in the future to computationally scan contemplated advanced design configurations and limit costly testing to the most promising design. The effort of establishing and using the computer model is detailed. The comparison of computational results and experimental data clearly demonstrates that computational fluid dynamics (CFD) techniques can be used successfully to predict the gross features of three-dimensional fluid flow through configurations as intricate as the SSME turbopump hot gas manifold.

  9. New project to support scientific collaboration electronically

    NASA Astrophysics Data System (ADS)

    Clauer, C. R.; Rasmussen, C. E.; Niciejewski, R. J.; Killeen, T. L.; Kelly, J. D.; Zambre, Y.; Rosenberg, T. J.; Stauning, P.; Friis-Christensen, E.; Mende, S. B.; Weymouth, T. E.; Prakash, A.; McDaniel, S. E.; Olson, G. M.; Finholt, T. A.; Atkins, D. E.

    A new multidisciplinary effort is linking research in the upper atmospheric and space, computer, and behavioral sciences to develop a prototype electronic environment for conducting team science worldwide. A real-world electronic collaboration testbed has been established to support scientific work centered around the experimental operations being conducted with instruments from the Sondrestrom Upper Atmospheric Research Facility in Kangerlussuaq, Greenland. Such group computing environments will become an important component of the National Information Infrastructure initiative, which is envisioned as the high-performance communications infrastructure to support national scientific research.

  10. Hafnium-Based Contrast Agents for X-ray Computed Tomography.

    PubMed

    Berger, Markus; Bauser, Marcus; Frenzel, Thomas; Hilger, Christoph Stephan; Jost, Gregor; Lauria, Silvia; Morgenstern, Bernd; Neis, Christian; Pietsch, Hubertus; Sülzle, Detlev; Hegetschweiler, Kaspar

    2017-05-15

    Heavy-metal-based contrast agents (CAs) offer enhanced X-ray absorption for X-ray computed tomography (CT) compared to the currently used iodinated CAs. We report the discovery of new lanthanide and hafnium azainositol complexes and their optimization with respect to high water solubility and stability. Our efforts culminated in the synthesis of BAY-576, an uncharged hafnium complex with 3:2 stoichiometry and broken complex symmetry. The superior properties of this asymmetrically substituted hafnium CA were demonstrated by a CT angiography study in rabbits that revealed excellent signal contrast enhancement.

  11. Merging the Machines of Modern Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Laura; Collins, Jim

    Two recent projects have harnessed supercomputing resources at the US Department of Energy’s Argonne National Laboratory in a novel way to support major fusion science and particle collider experiments. Using leadership computing resources, one team ran fine-grid analysis of real-time data to make near-real-time adjustments to an ongoing experiment, while a second team is working to integrate Argonne’s supercomputers into the Large Hadron Collider/ATLAS workflow. Together these efforts represent a new paradigm of the high-performance computing center as a partner in experimental science.

  12. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de

    2017-04-15

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
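
    The CVT step rests on the Lloyd iteration, which the toy sketch below illustrates in 2-D; the physics-motivated Voronoi-particle relaxation, load weighting, and 3-D element handling of the actual method are omitted, and the point and generator counts are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    points = rng.random((2000, 2))        # computational elements (2-D toy problem)
    generators = rng.random((8, 2))       # one generator per partitioning subdomain

    for _ in range(30):                   # Lloyd iteration: monotonically lowers the CVT energy
        # Assign each point to its nearest generator (Voronoi partition).
        d = np.linalg.norm(points[:, None, :] - generators[None, :, :], axis=2)
        owner = d.argmin(axis=1)
        # Move each generator to the centroid of its Voronoi cell.
        for k in range(len(generators)):
            cell = points[owner == k]
            if len(cell) > 0:
                generators[k] = cell.mean(axis=0)

    sizes = np.bincount(owner, minlength=len(generators))
    print("elements per subdomain:", sizes)   # rough proxy for load balance
    ```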

  13. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-04-01

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.

  14. Space Transportation Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Stewart, Mark E.; Suresh, Ambady; Owen, A. Karl

    2001-01-01

    This report outlines the Space Transportation Propulsion Systems for the NPSS (Numerical Propulsion System Simulation) program. Topics include: 1) a review of Engine/Inlet Coupling Work; 2) Background/Organization of Space Transportation Initiative; 3) Synergy between High Performance Computing and Communications Program (HPCCP) and Advanced Space Transportation Program (ASTP); 4) Status of Space Transportation Effort, including planned deliverables for FY01-FY06, FY00 accomplishments (HPCCP Funded) and FY01 Major Milestones (HPCCP and ASTP); and 5) a review current technical efforts, including a review of the Rocket-Based Combined-Cycle (RBCC), Scope of Work, RBCC Concept Aerodynamic Analysis and RBCC Concept Multidisciplinary Analysis.

  15. Progress on the FabrIc for Frontier Experiments project at Fermilab

    DOE PAGES

    Box, Dennis; Boyd, Joseph; Dykstra, Dave; ...

    2015-12-23

    The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access, and collaboration within an experiment. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.

  16. The FIFE Project at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Box, D.; Boyd, J.; Di Benedetto, V.

    2016-01-01

    The FabrIc for Frontier Experiments (FIFE) project is an initiative within the Fermilab Scientific Computing Division designed to steer the computing model for non-LHC Fermilab experiments across multiple physics areas. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying size, needs, and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of solutions for high throughput computing, data management, database access and collaboration management within an experiment. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid compute sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services including a common job submission service, software and reference data distribution through CVMFS repositories, flexible and robust data transfer clients, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken the leading role in defining the computing model for Fermilab experiments, aided in the design of experiments beyond those hosted at Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.

  17. Toward more environmentally resistant gas turbines: Progress in NASA-Lewis programs

    NASA Technical Reports Server (NTRS)

    Lowell, C. E.; Grisaffe, S. J.; Levine, S. R.

    1976-01-01

    A wide range of programs is being conducted to improve the resistance of gas turbine and power system materials to oxidation and hot corrosion. They range from fundamental efforts to delineate attack mechanisms, allow attack modeling, and permit life prediction, to more applied efforts to develop potentially more resistant alloys and coatings. Oxidation life prediction efforts have resulted in a computer program that provides an initial method for predicting long-time metal loss from short-time oxidation data by means of a paralinear attack model. Efforts in alloy development have centered on oxide-dispersion-strengthened alloys based on the Ni-Cr-Al system. Compositions have been identified that are compromises between oxidation resistance and thermal fatigue resistance. Fundamental studies of hot corrosion mechanisms include thermodynamic studies of sodium sulfate formation during turbine combustion. Information concerning the species formed during the vaporization of Na2SO4 has been developed using high-temperature mass spectrometry.
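
    As a rough illustration of the paralinear idea, the sketch below integrates a textbook-style paralinear rate law in which parabolic scale growth competes with linear scale loss. The functional form, rate constants, and units are assumptions made for illustration, not the specific attack model used in the NASA program.

    ```python
    # Textbook-style paralinear kinetics (assumed form): the oxide scale of
    # thickness x grows parabolically (constant kp) while being removed
    # linearly, e.g. by volatilization (constant kv):  dx/dt = kp/(2x) - kv.
    kp, kv = 1.0e-2, 5.0e-3          # arbitrary units
    dt, t_end = 0.01, 500.0
    x, t = 1.0e-3, 0.0               # initial scale thickness and time

    while t < t_end:
        dxdt = kp / (2.0 * x) - kv
        x = max(x + dxdt * dt, 1.0e-6)
        t += dt

    # At long times the scale thickness approaches kp/(2*kv) while metal
    # continues to be consumed at a roughly constant rate.
    print("steady-state estimate:", kp / (2.0 * kv), " final thickness:", x)
    ```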

  18. Aeroelasticity of wing and wing-body configurations on parallel computers

    NASA Technical Reports Server (NTRS)

    Byun, Chansup

    1995-01-01

    The objective of this research is to develop computationally efficient methods for solving aeroelasticity problems on parallel computers. Both uncoupled and coupled methods are studied in this research. For the uncoupled approach, the conventional U-g method is used to determine the flutter boundary. The generalized aerodynamic forces required are obtained by the pulse transfer-function analysis method. For the coupled approach, the fluid-structure interaction is obtained by directly coupling finite difference Euler/Navier-Stokes equations for fluids and finite element dynamics equations for structures. This capability will significantly impact many aerospace projects of national importance, such as the Advanced Subsonic Civil Transport (ASCT), where the structural stability margin becomes very critical in the transonic region. This research effort will have a direct impact on the High Performance Computing and Communication (HPCC) Program of NASA in the area of parallel computing.

  19. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented on personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm, with the complete initial guess of the deformation vector accurately predicted from previously computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
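
    The reliability-guided propagation idea can be sketched as a priority-queue flood fill: the most reliably registered point is expanded first and supplies the initial guess for its neighbors. The registration routine below is a random placeholder standing in for the 3D inverse-compositional Gauss-Newton solver, and the grid of calculation points is arbitrary.

    ```python
    import heapq
    import numpy as np

    def register_subvolume(point, init_guess):
        """Placeholder for sub-volume registration: returns a displacement
        vector and a reliability score (e.g. a correlation coefficient)."""
        disp = init_guess + 0.01 * np.random.randn(3)
        reliability = float(np.random.uniform(0.9, 1.0))
        return disp, reliability

    def neighbors(point, all_points):
        """Grid neighbors of `point` (Manhattan distance 1)."""
        return [q for q in all_points
                if sum(abs(a - b) for a, b in zip(point, q)) == 1]

    # A small grid of calculation points; real volumes hold far more.
    points = [(i, j, k) for i in range(4) for j in range(4) for k in range(2)]
    seed = (0, 0, 0)

    disp, queue = {}, []
    d0, r0 = register_subvolume(seed, np.zeros(3))
    disp[seed] = d0
    heapq.heappush(queue, (-r0, seed))

    # Reliability-guided propagation: always expand the most reliable point
    # next, passing its displacement to neighbors as their initial guess.
    while queue:
        _, p = heapq.heappop(queue)
        for q in neighbors(p, points):
            if q not in disp:
                dq, rq = register_subvolume(q, disp[p])
                disp[q] = dq
                heapq.heappush(queue, (-rq, q))

    print(f"computed displacements at {len(disp)} calculation points")
    ```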

  20. New Mexico district work-effort analysis computer program

    USGS Publications Warehouse

    Hiss, W.L.; Trantolo, A.P.; Sparks, J.L.

    1972-01-01

    The computer program (CAN 2) described in this report is one of several related programs used in the New Mexico District cost-analysis system. The work-effort information used in these programs is accumulated and entered to the nearest hour on forms completed by each employee. Tabulating cards are punched directly from these forms after visual examinations for errors are made. Reports containing detailed work-effort data itemized by employee within each project and account and by account and project for each employee are prepared for both current-month and year-to-date periods by the CAN 2 computer program. An option allowing preparation of reports for a specified 3-month period is provided. The total number of hours worked on each account and project and a grand total of hours worked in the New Mexico District is computed and presented in a summary report for each period. Work effort not chargeable directly to individual projects or accounts is considered as overhead and can be apportioned to the individual accounts and projects on the basis of the ratio of the total hours of work effort for the individual accounts or projects to the total New Mexico District work effort at the option of the user. The hours of work performed by a particular section, such as General Investigations or Surface Water, are prorated and charged to the projects or accounts within the particular section. A number of surveillance or buffer accounts are employed to account for the hours worked on special events or on those parts of large projects or accounts that require a more detailed analysis. Any part of the New Mexico District operation can be separated and analyzed in detail by establishing an appropriate buffer account. With the exception of statements associated with word size, the computer program is written in FORTRAN IV in a relatively low and standard language level to facilitate its use on different digital computers. The program has been run only on a Control Data Corporation 6600 computer system. Central processing computer time has seldom exceeded 5 minutes on the longest year-to-date runs.
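
    The overhead-apportionment rule (overhead hours distributed in proportion to each project's share of direct hours) amounts to the small calculation sketched below; the project names and hours are made up, and this is not the CAN 2 program itself.

    ```python
    # Hypothetical month of work-effort data: direct hours charged to each
    # project, plus overhead hours not chargeable to any single project.
    direct_hours = {"Project A": 320.0, "Project B": 180.0, "Surface Water": 500.0}
    overhead_hours = 250.0

    total_direct = sum(direct_hours.values())

    # Apportion overhead in proportion to each project's share of direct hours.
    for project, hours in direct_hours.items():
        share = hours / total_direct
        print(f"{project:15s} direct {hours:6.1f} h"
              f"  overhead {overhead_hours * share:6.1f} h"
              f"  total {hours + overhead_hours * share:7.1f} h")

    print(f"{'All projects':15s} total {total_direct + overhead_hours:7.1f} h")
    ```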

  1. High resolution flow field prediction for tail rotor aeroacoustics

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Bliss, Donald B.

    1989-01-01

    The prediction of tail rotor noise due to the impingement of the main rotor wake poses a significant challenge to current analysis methods in rotorcraft aeroacoustics. This paper describes the development of a new treatment of the tail rotor aerodynamic environment that permits highly accurate resolution of the incident flow field with modest computational effort relative to alternative models. The new approach incorporates an advanced full-span free wake model of the main rotor in a scheme which reconstructs high-resolution flow solutions from preliminary, computationally inexpensive simulations with coarse resolution. The heart of the approach is a novel method for using local velocity correction terms to capture the steep velocity gradients characteristic of the vortex-dominated incident flow. Sample calculations have been undertaken to examine the principal types of interactions between the tail rotor and the main rotor wake and to examine the performance of the new method. The results of these sample problems confirm the success of this approach in capturing the high-resolution flows necessary for analysis of rotor-wake/rotor interactions with dramatically reduced computational cost. Computations of radiated sound are also carried out that explore the role of various portions of the main rotor wake in generating tail rotor noise.

  2. Message Passing vs. Shared Address Space on a Cluster of SMPs

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswas, Rupak

    2000-01-01

    The convergence of scalable computer architectures using clusters of PCs (or PC-SMPs) with commodity networking has made them an attractive platform for high-end scientific computing. Currently, message-passing and shared address space (SAS) are the two leading programming paradigms for these systems. Message-passing has been standardized with MPI, and is the most common and mature programming approach. However, message-passing code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of, and programming effort required for, six applications under both programming models on a 32-CPU PC-SMP cluster. Our application suite consists of codes that typically do not exhibit high efficiency under shared-memory programming, due to their high communication-to-computation ratios and complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications; however, on certain classes of problems, SAS performance is competitive with MPI. We also present new algorithms for improving the PC cluster performance of MPI collective operations.
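
    For readers unfamiliar with the contrast, the fragment below shows the message-passing side in miniature: each rank owns a private slice of the data and all coordination happens through an explicit collective, where a shared-address-space version would simply read a common array. It is a generic illustration using mpi4py, not one of the six applications studied in the paper.

    ```python
    # Run with, e.g.:  mpiexec -n 4 python reduce_demo.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Message-passing model: each rank holds an explicit, private slice of
    # the data; nothing is implicitly shared between processes.
    local = np.arange(rank * 1000, (rank + 1) * 1000, dtype=np.float64)
    local_sum = local.sum()

    # Explicit collective communication replaces what a shared-address-space
    # program would express as a plain read of a common array plus a barrier.
    global_sum = comm.allreduce(local_sum, op=MPI.SUM)

    if rank == 0:
        print("global sum =", global_sum)
    ```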

  3. Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2

    NASA Technical Reports Server (NTRS)

    Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)

    1998-01-01

    The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.

  4. High-performance computing with quantum processing units

    DOE PAGES

    Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.; ...

    2017-03-01

    The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.

  5. A Discussion of Using a Reconfigurable Processor to Implement the Discrete Fourier Transform

    NASA Technical Reports Server (NTRS)

    White, Michael J.

    2004-01-01

    This paper presents the design and implementation of the Discrete Fourier Transform (DFT) algorithm on a reconfigurable processor system. While highly applicable to many engineering problems, the DFT is an extremely computationally intensive algorithm. Consequently, the eventual goal of this work is to enhance the execution of a floating-point-precision DFT algorithm by offloading the algorithm from the computing system. This computing system, within the context of this research, is a typical high-performance desktop computer with an array of field programmable gate arrays (FPGAs). FPGAs are hardware devices that are configured by software to execute an algorithm. If it is desired to change the algorithm, the software is changed to reflect the modification and then downloaded to the FPGA, which is then itself reconfigured. This paper will discuss the methodology for developing the DFT algorithm to be implemented on the FPGA. We will discuss the algorithm, the FPGA code effort, and the results to date.
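
    For reference, the operation being offloaded is the direct O(N^2) evaluation of the DFT, X[k] = sum_n x[n] exp(-2*pi*i*k*n/N). A plain software version, checked against an FFT, is sketched below; it is illustrative only and is not the FPGA implementation discussed in the paper.

    ```python
    import numpy as np

    def dft(x):
        """Direct O(N^2) evaluation of the discrete Fourier transform."""
        x = np.asarray(x, dtype=complex)
        N = len(x)
        n = np.arange(N)
        # W[k, n] = exp(-2*pi*i*k*n/N): each output bin is a dot product.
        W = np.exp(-2j * np.pi * np.outer(n, n) / N)
        return W @ x

    x = np.random.rand(256)
    assert np.allclose(dft(x), np.fft.fft(x))   # agrees with the fast algorithm
    print("direct DFT matches numpy.fft.fft")
    ```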

  6. High-performance computing with quantum processing units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Britt, Keith A.; Oak Ridge National Lab.; Humble, Travis S.

    The prospects of quantum computing have driven efforts to realize fully functional quantum processing units (QPUs). Recent success in developing proof-of-principle QPUs has prompted the question of how to integrate these emerging processors into modern high-performance computing (HPC) systems. We examine how QPUs can be integrated into current and future HPC system architectures by accounting for functional and physical design requirements. We identify two integration pathways that are differentiated by infrastructure constraints on the QPU and the use cases expected for the HPC system. This includes a tight integration that assumes infrastructure bottlenecks can be overcome as well as a loose integration that assumes they cannot. We find that the performance of both approaches is likely to depend on the quantum interconnect that serves to entangle multiple QPUs. As a result, we also identify several challenges in assessing QPU performance for HPC, and we consider new metrics that capture the interplay between system architecture and the quantum parallelism underlying computational performance.

  7. Why are women underrepresented in Computer Science? Gender differences in stereotypes, self-efficacy, values, and interests and predictors of future CS course-taking and grades

    NASA Astrophysics Data System (ADS)

    Beyer, Sylvia

    2014-07-01

    This study addresses why women are underrepresented in Computer Science (CS). Data from 1319 American first-year college students (872 female and 447 male) indicate that gender differences in computer self-efficacy, stereotypes, interests, values, interpersonal orientation, and personality exist. If students had had a positive experience in their first CS course, they had a stronger intention to take another CS course. A subset of 128 students (68 females and 60 males) took a CS course up to one year later. Students who were interested in CS, had high computer self-efficacy, were low in family orientation, low in conscientiousness, and low in openness to experiences were more likely to take CS courses. Furthermore, individuals who were highly conscientious and low in relational-interdependent self-construal earned the highest CS grades. Efforts to improve women's representation in CS should bear these results in mind.

  8. High School Students in the New Learning Environment: A Profile of Distance E-Learners

    ERIC Educational Resources Information Center

    Kirby, Dale; Sharpe, Dennis

    2010-01-01

    The relative ubiquity of computer access and the rapid development of information and communication technology have profoundly impacted teaching and learning at a distance. Relatively little is currently known about the characteristics of those students who participate in distance e-learning courses at the secondary school level. In an effort to…

  9. Exploring First-Term Online College Dropout Relative to High School Certification, Gap Years, and Computer Literacy

    ERIC Educational Resources Information Center

    Suydan, Ava Birgitte

    2014-01-01

    Despite university efforts to decrease the number of students dropping out of college, attrition of online students occurs at an annual rate of 50% or more (Wang & Wu, 2004). Educational leaders understand the increased demand for online programs and courses because of students' requirements of convenience and flexibility (Kuo, Walker,…

  10. Numerical Simulation of a High Mach Number Jet Flow

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Turkel, Eli; Mankbadi, Reda R.

    1993-01-01

    The recent efforts to develop accurate numerical schemes for transition and turbulent flows are motivated, among other factors, by the need for accurate prediction of flow noise. The success of developing a high-speed civil transport (HSCT) plane is contingent upon our understanding and suppression of jet exhaust noise. The radiated sound can be obtained directly by solving the full (time-dependent) compressible Navier-Stokes equations. However, this requires computational storage that is beyond currently available machines. This difficulty can be overcome by limiting the solution domain to the near field, where the jet is nonlinear, and then using an acoustic analogy (e.g., Lighthill) to relate the far-field noise to the near-field sources. The latter requires obtaining the time-dependent flow field. The other difficulty in aeroacoustics computations is that at high Reynolds numbers the turbulent flow has a large range of scales. Direct numerical simulations (DNS) cannot resolve all the scales of motion at the high Reynolds numbers of technological interest. However, it is believed that the large-scale structure is more efficient than the small-scale structure in radiating noise. Thus, one can model the small scales and calculate the acoustically active scales. The large-scale structure in the noise-producing initial region of the jet is wavelike in nature, and the net radiated sound is the net cancellation after integration over space. As such, aeroacoustics computations are highly sensitive to errors in computing the sound sources. It is therefore essential to use a high-order numerical scheme to predict the flow field. The present paper presents the first step in an ongoing effort to predict jet noise. The emphasis here is on accurate prediction of the unsteady flow field. We solve the full time-dependent Navier-Stokes equations by a high-order finite difference method. Time-accurate spatial simulations of both plane and axisymmetric jets are presented. Jet Mach numbers of 1.5 and 2.1 are considered. The Reynolds number in the simulations was about a million. Our numerical model is based on the 2-4 scheme of Gottlieb and Turkel. Bayliss et al. applied the 2-4 scheme in boundary layer computations. This scheme was also used by Ragab and Sheen to study the nonlinear development of supersonic instability waves in a mixing layer. In this study, we present two-dimensional direct simulation results for both plane and axisymmetric jets. These results are compared with linear theory predictions. The computations were made for the near-nozzle-exit region, and the velocity in the spanwise/azimuthal direction was assumed to be zero.
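
    To illustrate why spatial order matters here, the sketch below advances a periodic 1-D advection equation with a fourth-order central difference in space and a classical Runge-Kutta step in time. It is a generic high-order discretization chosen for illustration, not the Gottlieb-Turkel 2-4 scheme or the Navier-Stokes solver used in the paper.

    ```python
    import numpy as np

    N, c, L = 200, 1.0, 1.0                 # grid points, wave speed, domain length
    dx = L / N
    x = np.arange(N) * dx
    u = np.exp(-100.0 * (x - 0.5) ** 2)     # initial Gaussian pulse
    dt = 0.4 * dx / c                       # CFL-limited time step

    def rhs(u):
        # Fourth-order central difference of du/dx on a periodic grid.
        dudx = (-np.roll(u, -2) + 8 * np.roll(u, -1)
                - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12.0 * dx)
        return -c * dudx                    # du/dt = -c du/dx

    for _ in range(500):                    # advect for exactly one domain length
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    print("pulse peak after one traversal:", u.max())
    ```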

  11. Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    NASA Technical Reports Server (NTRS)

    Abdi, Frank

    1996-01-01

    A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software, 'GENOA', is dedicated to parallel and high-speed analysis to perform probabilistic evaluation of the high-temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives achieved in performing the development were: (1) utilization of the power of parallel processing and static/dynamic load-balancing optimization to make the complex simulation of the structure, material, and processing of high-temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics, and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism and increase convergence rates through high- and low-level processor assignment; (4) creation of the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid, and distributed workstation types of computers; and (5) market evaluation. The results of the Phase-2 effort provide a good basis for continuation and warrant a Phase-3 government and industry partnership.

  12. Computational and Physical Analysis of Catalytic Compounds

    NASA Astrophysics Data System (ADS)

    Wu, Richard; Sohn, Jung Jae; Kyung, Richard

    2015-03-01

    Nanoparticles exhibit unique physical and chemical properties depending on their geometrical properties. For this reason, the synthesis of nanoparticles with controlled shape and size is important for exploiting their unique properties. Catalyst supports are usually made of high-surface-area porous oxides or carbon nanomaterials. These support materials stabilize metal catalysts against sintering at high reaction temperatures. Many studies have demonstrated large enhancements of catalytic behavior due to the role of the oxide-metal interface. In this paper, the catalyzing ability of supported nanoscale metal oxides, such as silicon oxide and titanium oxide compounds, is analyzed using computational chemistry methods. Computational programs such as Gamess and Chemcraft have been used in an effort to compute the efficiencies of the catalytic compounds and the bonding-energy changes during optimization convergence. The results illustrate how the metal oxides stabilize and the steps involved. The plot of computation step (N) versus energy (kcal/mol) shows that the energy of the titania converges faster, at the 7th iteration, whereas the silica converges at the 9th iteration.

  13. A direct-to-drive neural data acquisition system.

    PubMed

    Kinney, Justin P; Bernstein, Jacob G; Meyer, Andrew J; Barber, Jessica B; Bolivar, Marti; Newbold, Bryan; Scholvin, Jorg; Moore-Kochlacs, Caroline; Wentz, Christian T; Kopell, Nancy J; Boyden, Edward S

    2015-01-01

    Driven by the increasing channel count of neural probes, there is much effort being directed to creating increasingly scalable electrophysiology data acquisition (DAQ) systems. However, all such systems still rely on personal computers for data storage, and thus are limited by the bandwidth and cost of the computers, especially as the scale of recording increases. Here we present a novel architecture in which a digital processor receives data from an analog-to-digital converter, and writes that data directly to hard drives, without the need for a personal computer to serve as an intermediary in the DAQ process. This minimalist architecture may support exceptionally high data throughput, without incurring costs to support unnecessary hardware and overhead associated with personal computers, thus facilitating scaling of electrophysiological recording in the future.

  14. A direct-to-drive neural data acquisition system

    PubMed Central

    Kinney, Justin P.; Bernstein, Jacob G.; Meyer, Andrew J.; Barber, Jessica B.; Bolivar, Marti; Newbold, Bryan; Scholvin, Jorg; Moore-Kochlacs, Caroline; Wentz, Christian T.; Kopell, Nancy J.; Boyden, Edward S.

    2015-01-01

    Driven by the increasing channel count of neural probes, there is much effort being directed to creating increasingly scalable electrophysiology data acquisition (DAQ) systems. However, all such systems still rely on personal computers for data storage, and thus are limited by the bandwidth and cost of the computers, especially as the scale of recording increases. Here we present a novel architecture in which a digital processor receives data from an analog-to-digital converter, and writes that data directly to hard drives, without the need for a personal computer to serve as an intermediary in the DAQ process. This minimalist architecture may support exceptionally high data throughput, without incurring costs to support unnecessary hardware and overhead associated with personal computers, thus facilitating scaling of electrophysiological recording in the future. PMID:26388740

  15. Overview of Risk Mitigation for Safety-Critical Computer-Based Systems

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report presents a high-level overview of a general strategy to mitigate the risks from threats to safety-critical computer-based systems. In this context, a safety threat is a process or phenomenon that can cause operational safety hazards in the form of computational system failures. This report is intended to provide insight into the safety-risk mitigation problem and the characteristics of potential solutions. The limitations of the general risk mitigation strategy are discussed and some options to overcome these limitations are provided. This work is part of an ongoing effort to enable well-founded assurance of safety-related properties of complex safety-critical computer-based aircraft systems by developing an effective capability to model and reason about the safety implications of system requirements and design.

  16. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high-performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of peak. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best usage of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach. The talk will review the current optimisation activities within the SFT group, with a particular emphasis on the development perspectives towards a simulation framework able to profit best from the recent technology evolution in computing.
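
    The event-level parallelism trade-off mentioned above can be caricatured as follows: each worker processes whole events independently, so any state that cannot be shared is duplicated per worker, and uneven event sizes produce processing tails. The event model and sizes below are invented for illustration and do not correspond to any HEP framework.

    ```python
    from multiprocessing import Pool
    import random

    def process_event(event_id):
        """Placeholder for simulating/reconstructing one event.  In an
        event-parallel scheme each worker holds its own copy of non-shared
        state, which is what drives up total memory use."""
        random.seed(event_id)
        n_tracks = random.randint(10, 1000)      # stand-in for event size
        return sum(random.random() for _ in range(n_tracks))

    if __name__ == "__main__":
        events = range(10_000)
        with Pool(processes=8) as pool:
            # Long events finishing last produce the "processing tails"
            # mentioned above; chunking mitigates scheduling overhead.
            results = pool.map(process_event, events, chunksize=64)
        print("processed", len(results), "events")
    ```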

  17. From the desktop to the grid: scalable bioinformatics via workflow conversion.

    PubMed

    de la Garza, Luis; Veit, Johannes; Szolek, Andras; Röttig, Marc; Aiche, Stephan; Gesing, Sandra; Reinert, Knut; Kohlbacher, Oliver

    2016-03-12

    Reproducibility is one of the tenets of the scientific method. Scientific experiments often comprise complex data flows, selection of adequate parameters, and analysis and visualization of intermediate and end results. Breaking down the complexity of such experiments into the joint collaboration of small, repeatable, well-defined tasks, each with well-defined inputs, parameters, and outputs, offers the immediate benefit of identifying bottlenecks and pinpointing sections that could benefit from parallelization, among others. Workflows rest upon the notion of splitting complex work into the joint effort of several manageable tasks. There are several engines that give users the ability to design and execute workflows. Each engine was created to address certain problems of a specific community; therefore, each one has its advantages and shortcomings. Furthermore, not all features of all workflow engines are royalty-free, an aspect that could potentially drive away members of the scientific community. We have developed a set of tools that enables the scientific community to benefit from workflow interoperability. We developed a platform-free structured representation of the parameters, inputs, and outputs of command-line tools in so-called Common Tool Descriptor documents. We have also overcome the shortcomings and combined the features of two royalty-free workflow engines with a substantial user community: the Konstanz Information Miner, an engine which we see as a formidable workflow editor, and the Grid and User Support Environment, a web-based framework able to interact with several high-performance computing resources. We have thus created a free and highly accessible way to design workflows on a desktop computer and execute them on high-performance computing resources. Our work will not only reduce the time spent on designing scientific workflows, but also make executing workflows on remote high-performance computing resources more accessible to technically inexperienced users. We strongly believe that our efforts not only decrease the turnaround time to obtain scientific results but also have a positive impact on reproducibility, thus elevating the quality of obtained scientific results.

  18. A Numerical Simulation and Statistical Modeling of High Intensity Radiated Fields Experiment Data

    NASA Technical Reports Server (NTRS)

    Smith, Laura J.

    2004-01-01

    Tests are conducted on a quad-redundant fault-tolerant flight control computer to establish the upset characteristics of an avionics system in an electromagnetic field. A numerical simulation and a statistical model are described in this work to analyze the open-loop experiment data collected in the reverberation chamber at NASA LaRC as part of an effort to examine the effects of electromagnetic interference on fly-by-wire aircraft control systems. By comparing thousands of simulation and model outputs, the models that best describe the data are first identified, and then a systematic statistical analysis is performed on the data. These efforts culminate in an extrapolation of values that are, in turn, used to support previous efforts in evaluating the data.

  19. A compendium of computational fluid dynamics at the Langley Research Center

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Through numerous summary examples, the scope and general nature of the computational fluid dynamics (CFD) effort at Langley are identified. These summaries will help inform researchers in CFD and line management at Langley of the overall effort. In addition to the in-house efforts, out-of-house CFD work supported by Langley through industrial contracts and university grants is included. Researchers were encouraged to include summaries of work in preliminary and tentative states of development as well as current research approaching definitive results.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    East, D. R.; Sexton, J.

    This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and IBM TJ Watson Research Center to research, assess feasibility and develop an implementation plan for a High Performance Computing Innovation Center (HPCIC) in the Livermore Valley Open Campus (LVOC). The ultimate goal of this work was to help advance the State of California and U.S. commercial competitiveness in the arena of High Performance Computing (HPC) by accelerating the adoption of computational science solutions, consistent with recent DOE strategy directives. The desired result of this CRADA was a well-researched, carefully analyzed market evaluation that would identify those firms in core sectors of the US economy seeking to adopt or expand their use of HPC to become more competitive globally, and to define how those firms could be helped by the HPCIC with IBM as an integral partner.

  1. Detonation Product EOS Studies: Using ISLS to Refine Cheetah

    NASA Astrophysics Data System (ADS)

    Zaug, J. M.; Howard, W. M.; Fried, L. E.; Hansen, D. W.

    2002-07-01

    Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition, the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.

  2. Turbine Blade and Endwall Heat Transfer Measured in NASA Glenn's Transonic Turbine Blade Cascade

    NASA Technical Reports Server (NTRS)

    Giel, Paul W.

    2000-01-01

    Higher operating temperatures increase the efficiency of aircraft gas turbine engines, but can also degrade internal components. High-pressure turbine blades just downstream of the combustor are particularly susceptible to overheating. Computational fluid dynamics (CFD) computer programs can predict the flow around the blades so that potential hot spots can be identified and appropriate cooling schemes can be designed. Various blade and cooling schemes can be examined computationally before any hardware is built, thus saving time and effort. Often though, the accuracy of these programs has been found to be inadequate for predicting heat transfer. Code and model developers need highly detailed aerodynamic and heat transfer data to validate and improve their analyses. The Transonic Turbine Blade Cascade was built at the NASA Glenn Research Center at Lewis Field to help satisfy the need for this type of data.

  3. Multidisciplinary Design Technology Development: A Comparative Investigation of Integrated Aerospace Vehicle Design Tools

    NASA Technical Reports Server (NTRS)

    Renaud, John E.; Batill, Stephen M.; Brockman, Jay B.

    1998-01-01

    This research effort is a joint program between the Departments of Aerospace and Mechanical Engineering and the Computer Science and Engineering Department at the University of Notre Dame. Three Principal Investigators, Drs. Renaud, Brockman, and Batill, directed this effort. During the four-and-a-half-year grant period, six Aerospace and Mechanical Engineering Ph.D. students and one Masters student received full or partial support, while four Computer Science and Engineering Ph.D. students and one Masters student were supported. During each of the summers, up to four undergraduate students were involved in related research activities. The purpose of the project was to develop a framework and systematic methodology to facilitate the application of Multidisciplinary Design Optimization (MDO) to a diverse class of system design problems. For all practical aerospace systems, the design of a system is a complex sequence of events which integrates the activities of a variety of discipline "experts" and their associated "tools". The development, archiving, and exchange of information between these individual experts is central to the design task, and it is this information which provides the basis for these experts to make coordinated design decisions (i.e., compromises and trade-offs) resulting in the final product design. Grant efforts focused on developing and evaluating frameworks for effective design coordination within an MDO environment. Central to these research efforts was the concept that the individual discipline "expert", using the most appropriate "tools" available and the most complete description of the system, should be empowered to have the greatest impact on the design decisions and final design. This means that the overall process must be highly interactive and efficiently conducted if the resulting design is to be developed in a manner consistent with cost and time requirements. The methods developed as part of this research effort include: extensions to a sensitivity-based Concurrent Subspace Optimization (CSSO) MDO algorithm; the development of a neural network response surface based CSSO-MDO algorithm; and the integration of distributed computing and process scheduling into the MDO environment. This report overviews research efforts in each of these focus areas. A complete bibliography of research produced with support of this grant is attached.

  4. Motivational Beliefs, Student Effort, and Feedback Behaviour in Computer-Based Formative Assessment

    ERIC Educational Resources Information Center

    Timmers, Caroline F.; Braber-van den Broek, Jannie; van den Berg, Stephanie M.

    2013-01-01

    Feedback can only be effective when students seek feedback and process it. This study examines the relations between students' motivational beliefs, effort invested in a computer-based formative assessment, and feedback behaviour. Feedback behaviour is represented by whether a student seeks feedback and the time a student spends studying the…

  5. Establishing a K-12 Circuit Design Program

    ERIC Educational Resources Information Center

    Inceoglu, Mustafa M.

    2010-01-01

    Outreach, as defined by Wikipedia, is an effort by an organization or group to connect its ideas or practices to the efforts of other organizations, groups, specific audiences, or the general public. This paper describes a computer engineering outreach project of the Department of Computer Engineering at Ege University, Izmir, Turkey, to a local…

  6. Identifying Predictors of Achievement in the Newly Defined Information Literacy: A Neural Network Analysis

    ERIC Educational Resources Information Center

    Sexton, Randall; Hignite, Michael; Margavio, Thomas M.; Margavio, Geanie W.

    2009-01-01

    Information Literacy is a concept that evolved as a result of efforts to move technology-based instructional and research efforts beyond the concepts previously associated with "computer literacy." While computer literacy was largely a topic devoted to knowledge of hardware and software, information literacy is concerned with students' abilities…

  7. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.
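
    To illustrate the kind of Monte Carlo impact-time statistics discussed above, the sketch below propagates a deliberately simplified point-mass vertical-decay model (not the paper's six degrees-of-freedom Delta-K model) under uncertain initial altitude and ballistic coefficient and summarizes the resulting impact-time distribution; all numerical values are illustrative assumptions.

      # Minimal Monte Carlo sketch of an impact-time distribution (toy model only).
      import numpy as np

      rng = np.random.default_rng(1)
      g, rho0, H = 9.81, 1.225, 8500.0                 # gravity, sea-level density, scale height [SI]

      def impact_time(h0, beta, dt=0.5):
          """Fall of a point mass through an exponential atmosphere; beta = m/(Cd*A)."""
          h, v, t = h0, 0.0, 0.0                        # altitude [m], downward speed [m/s], time [s]
          while h > 0.0:
              rho = rho0 * np.exp(-h / H)
              v += (g - 0.5 * rho * v * v / beta) * dt  # gravity accelerates, drag decelerates
              h -= v * dt
              t += dt
          return t

      n = 1000
      h0 = rng.normal(120e3, 5e3, n)                    # uncertain initial altitude [m]
      beta = rng.normal(300.0, 30.0, n)                 # uncertain ballistic coefficient [kg/m^2]
      times = np.array([impact_time(h, b) for h, b in zip(h0, beta)])
      print("impact-time percentiles [s] (5th, 50th, 95th):", np.percentile(times, [5, 50, 95]).round(1))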

  8. SCEC Earthquake System Science Using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.

    2008-12-01

    The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, reusable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts, and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1Hz deterministic simulation results with 10Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. Large-scale SCEC/CME high performance codes were run on NSF TeraGrid sites, including simulations that used the full PSC Big Ben supercomputer (4096 cores) and simulations that ran on more than 10K cores at TACC Ranger. The SCEC/CME group used scientific workflow tools and grid computing to run more than 1.5 million jobs at NCSA for the CyberShake project. Visualizations produced by a SCEC/CME researcher of the 10Hz ShakeOut 1.2 scenario simulation data were used by USGS in ShakeOut publications and public outreach efforts. OpenSHA was ported onto an NSF supercomputer and was used to produce very high resolution PSHA hazard maps that contained more than 1.6 million hazard curves.

  9. Adaptive allocation of decisionmaking responsibility between human and computer in multitask situations

    NASA Technical Reports Server (NTRS)

    Chu, Y.-Y.; Rouse, W. B.

    1979-01-01

    As human and computer come to have overlapping decisionmaking abilities, a dynamic or adaptive allocation of responsibilities may be the best mode of human-computer interaction. It is suggested that the computer serve as a backup decisionmaker, accepting responsibility when human workload becomes excessive and relinquishing responsibility when workload becomes acceptable. A queueing theory formulation of multitask decisionmaking is used and a threshold policy for turning the computer on/off is proposed. This policy minimizes event-waiting cost subject to human workload constraints. An experiment was conducted with a balanced design of several subject runs within a computer-aided multitask flight management situation with different task demand levels. It was found that computer aiding enhanced subsystem performance as well as subjective ratings. The queueing model appears to be an adequate representation of the multitask decisionmaking situation, and to be capable of predicting system performance in terms of average waiting time and server occupancy. Server occupancy was further found to correlate highly with the subjective effort ratings.
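
    The threshold policy described above can be pictured with a toy queueing simulation: the computer takes over service when the number of waiting tasks exceeds an upper threshold and hands control back when the queue drains below a lower one. The sketch below is only a schematic illustration of that on/off rule; the arrival rate, service rates, and thresholds are assumptions, not values from the experiment.

      # Toy simulation of a threshold policy for adaptive human/computer task allocation.
      import numpy as np

      rng = np.random.default_rng(2)
      lam, mu_h, mu_c = 0.8, 1.0, 1.2           # arrival rate, human and computer service rates
      on_threshold, off_threshold = 4, 1        # aiding engages above 4 queued tasks, releases at 1
      T, dt = 5_000.0, 0.01                     # simulated time horizon and time step

      queue, aiding = 0, False
      busy_time = 0.0                           # time with at least one task in service
      queue_area = 0.0                          # integral of queue length over time (Little's law)
      arrivals = 0

      t = 0.0
      while t < T:
          if rng.random() < lam * dt:           # Poisson arrival in this small step
              queue += 1
              arrivals += 1
          # Threshold policy: switch computer aiding on/off based on current workload.
          if queue >= on_threshold:
              aiding = True
          elif queue <= off_threshold:
              aiding = False
          service_rate = mu_h + (mu_c if aiding else 0.0)
          if queue > 0:
              busy_time += dt
              if rng.random() < service_rate * dt:   # a task completes
                  queue -= 1
          queue_area += queue * dt
          t += dt

      print(f"average number of tasks waiting : {queue_area / T:.2f}")
      print(f"average wait per task (Little)  : {queue_area / arrivals:.2f}")
      print(f"fraction of time busy           : {busy_time / T:.2%}")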

  10. Analysis of Flowfields over Four-Engine DC-X Rockets

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Cornelison, Joni

    1996-01-01

    The objective of this study is to validate a computational methodology for the aerodynamic performance of an advanced conical launch vehicle configuration. The computational methodology is based on a three-dimensional, viscous-flow, pressure-based computational fluid dynamics formulation. Both wind-tunnel and ascent flight-test data are used for validation. Emphasis is placed on multiple-engine power-on effects. Computational characterization of the base drag in the critical subsonic regime is the focus of the validation effort; until recently, almost no multiple-engine data existed for a conical launch vehicle configuration. Parametric studies using high-order difference schemes are performed for the cold-flow tests, whereas grid studies are conducted for the flight tests. The computed vehicle axial force coefficients and the forebody, aftbody, and base surface pressures compare favorably with test data. The results demonstrate that, with adequate grid density and proper distribution, a high-order difference scheme, finite-rate afterburning kinetics to model the plume chemistry, and a suitable turbulence model to describe separated flows, plume/air mixing, and boundary layers, computational fluid dynamics is a tool that can be used to predict the low-speed aerodynamic performance for rocket design and operations.

  11. Analysis of Special Forces Medic (18D) Attrition

    DTIC Science & Technology

    1994-08-01

    130 students per year, 18D should reach 100% strength in the second quarter of FY95. Training Issues The biggest single reason for the high attrition...including a library and 24-hour study rooms, ix SOMED and SOMTC should consider incorporating computer-based training into the 18D training. The PA...particularly in the sciences. Given this, the high SOMED attrition rate is to be expected. Some have suggested that a greater effort should be made to recruit

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heffelfinger, Grant S.; Martino, Anthony; Rintoul, Mark Daniel

    This SAND report provides the technical progress through October 2004 of the Sandia-led project, "Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling," funded by the DOE Office of Science Genomes to Life Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus Sp., an abundant marine cyanobacterium known to be important to environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes. In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight about the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in the presence of varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower level simulations with data from the existing body of literature into a whole cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution.
Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models which are more computationally complex, more heterogeneous, and require coupling to ever increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort. More information about this project, including a copy of the original proposal, can be found at www.genomes-to-life.org. Acknowledgment: We want to gratefully acknowledge the contributions of the GTL Project Team as follows: Grant S. Heffelfinger1*, Anthony Martino2, Andrey Gorin3, Ying Xu10,3, Mark D. Rintoul1, Al Geist3, Matthew Ennis1, Hashim Al-Hashimi8, Nikita Arnold3, Andrei Borziak3, Bianca Brahamsha6, Andrea Belgrano12, Praveen Chandramohan3, Xin Chen9, Pan Chongle3, Paul Crozier1, PguongAn Dam10, George S. Davidson1, Robert Day3, Jean Loup Faulon2, Damian Gessler12, Arlene Gonzalez2, David Haaland1, William Hart1, Victor Havin3, Tao Jiang9, Howland Jones1, David Jung3, Ramya Krishnamurthy3, Yooli Light2, Shawn Martin1, Rajesh Munavalli3, Vijaya Natarajan3, Victor Olman10, Frank Olken4, Brian Palenik6, Byung Park3, Steven Plimpton1, Diana Roe2, Nagiza Samatova3, Arie Shoshani4, Michael Sinclair1, Alex Slepoy1, Shawn Stevens8, Chris Stork1, Charlie Strauss5, Zhengchang Su10, Edward Thomas1, Jerilyn A. Timlin1, Xiufeng Wan11, HongWei Wu10, Dong Xu11, Gong-Xin Yu3, Grover Yip8, Zhaoduo Zhang2, Erik Zuiderweg8. *Author to whom correspondence should be addressed (gsheffe@sandia.gov). 1. Sandia National Laboratories, Albuquerque, NM 2. Sandia National Laboratories, Livermore, CA 3. Oak Ridge National Laboratory, Oak Ridge, TN 4. Lawrence Berkeley National Laboratory, Berkeley, CA 5. Los Alamos National Laboratory, Los Alamos, NM 6. University of California, San Diego 7. University of Illinois, Urbana/Champaign 8. University of Michigan, Ann Arbor 9. University of California, Riverside 10. University of Georgia, Athens 11. University of Missouri, Columbia 12. National Center for Genome Resources, Santa Fe, NM. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  13. Genomes to Life Project Quarterly Report April 2005.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heffelfinger, Grant S.; Martino, Anthony; Rintoul, Mark Daniel

    2006-02-01

    This SAND report provides the technical progress through April 2005 of the Sandia-led project, "Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling," funded by the DOE Office of Science Genomics:GTL Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus Sp., an abundant marine cyanobacterium known to be important to environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes. In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight about the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in the presence of varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower level simulations with data from the existing body of literature into a whole cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution.
Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models which are more computationally complex, more heterogeneous, and require coupling to ever increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort. More information about this project can be found at www.genomes-to-life.org. Acknowledgment: We want to gratefully acknowledge the contributions of: Grant Heffelfinger1*, Anthony Martino2, Brian Palenik6, Andrey Gorin3, Ying Xu10,3, Mark Daniel Rintoul1, Al Geist3, Matthew Ennis1, with Pratul Agrawal3, Hashim Al-Hashimi8, Andrea Belgrano12, Mike Brown1, Xin Chen9, Paul Crozier1, PguongAn Dam10, Jean-Loup Faulon2, Damian Gessler12, David Haaland1, Victor Havin4, C.F. Huang5, Tao Jiang9, Howland Jones1, David Jung3, Katherine Kang14, Michael Langston15, Shawn Martin1, Shawn Means1, Vijaya Natarajan4, Roy Nielson5, Frank Olken4, Victor Olman10, Ian Paulsen14, Steve Plimpton1, Andreas Reichsteiner5, Nagiza Samatova3, Arie Shoshani4, Michael Sinclair1, Alex Slepoy1, Shawn Stevens8, Charlie Strauss5, Zhengchang Su10, Ed Thomas1, Jerilyn Timlin1, Wim Vermaas13, Xiufeng Wan11, HongWei Wu10, Dong Xu11, Grover Yip8, Erik Zuiderweg8. *Author to whom correspondence should be addressed (gsheffe@sandia.gov). 1. Sandia National Laboratories, Albuquerque, NM 2. Sandia National Laboratories, Livermore, CA 3. Oak Ridge National Laboratory, Oak Ridge, TN 4. Lawrence Berkeley National Laboratory, Berkeley, CA 5. Los Alamos National Laboratory, Los Alamos, NM 6. University of California, San Diego 7. University of Illinois, Urbana/Champaign 8. University of Michigan, Ann Arbor 9. University of California, Riverside 10. University of Georgia, Athens 11. University of Missouri, Columbia 12. National Center for Genome Resources, Santa Fe, NM 13. Arizona State University 14. The Institute for Genomic Research 15. University of Tennessee. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  14. Navier-Stokes analysis of cold scramjet-afterbody flows

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Engelund, Walter C.; Eleshaky, Mohamed E.

    1989-01-01

    The progress of two efforts in coding solutions of the Navier-Stokes equations is summarized. The first effort concerns a 3-D space-marching parabolized Navier-Stokes (PNS) code being modified to compute the supersonic mixing flow through an internal/external expansion nozzle with multicomponent gases. The 3-D PNS equations, coupled with a set of species continuity equations, are solved using an implicit finite difference scheme. The completed work is summarized and includes code modifications for four chemical species, computing the flow upstream of the upper cowl for a theoretical air mixture, developing an initial plane solution for the inner nozzle region, computing the flow inside the nozzle for both an N2/O2 mixture and a Freon-12/Ar mixture, and plotting density-pressure contours for the inner nozzle region. The second effort concerns a full Navier-Stokes code. The species continuity equations account for the diffusion of multiple gases. This 3-D explicit afterbody code has the ability to use high-order numerical integration schemes such as the 4th-order MacCormack and the Gottlieb-MacCormack schemes. Changes to the work are listed and include, but are not limited to: (1) internal/external flow capability; (2) new treatments of the cowl wall boundary conditions and relaxed computations around the cowl region and cowl tip; (3) the entering of the thermodynamic and transport properties of Freon-12, Ar, O, and N; (4) modification to the Baldwin-Lomax turbulence model to account for turbulent eddies generated by cowl walls inside and external to the nozzle; and (5) adopting a relaxation formula to account for the turbulence in the mixing shear layer.

  15. More efficient optimization of long-term water supply portfolios

    NASA Astrophysics Data System (ADS)

    Kirsch, Brian R.; Characklis, Gregory W.; Dillard, Karen E. M.; Kelley, C. T.

    2009-03-01

    The use of temporary transfers, such as options and leases, has grown as utilities attempt to meet increases in demand while reducing dependence on the expansion of costly infrastructure capacity (e.g., reservoirs). Earlier work has been done to construct optimal portfolios comprising firm capacity and transfers, using decision rules that determine the timing and volume of transfers. However, such work has only focused on the short-term (e.g., 1-year scenarios), which limits the utility of these planning efforts. Developing multiyear portfolios can lead to the exploration of a wider range of alternatives but also increases the computational burden. This work utilizes a coupled hydrologic-economic model to simulate the long-term performance of a city's water supply portfolio. This stochastic model is linked with an optimization search algorithm that is designed to handle the high-frequency, low-amplitude noise inherent in many simulations, particularly those involving expected values. This noise is detrimental to the accuracy and precision of the optimized solution and has traditionally been controlled by investing greater computational effort in the simulation. However, the increased computational effort can be substantial. This work describes the integration of a variance reduction technique (control variate method) within the simulation/optimization as a means of more efficiently identifying minimum cost portfolios. Random variation in model output (i.e., noise) is moderated using knowledge of random variations in stochastic input variables (e.g., reservoir inflows, demand), thereby reducing the computing time by 50% or more. Using these efficiency gains, water supply portfolios are evaluated over a 10-year period in order to assess their ability to reduce costs and adapt to demand growth, while still meeting reliability goals. As a part of the evaluation, several multiyear option contract structures are explored and compared.
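
    The control variate method referenced above reduces Monte Carlo noise by subtracting from each cost sample a scaled, centred version of a correlated input whose mean is known. The following sketch shows the generic technique on a made-up cost model; the lognormal inflow and the cost formula are assumptions for illustration, not the coupled hydrologic-economic model used in the study.

      # Minimal control-variate sketch: reduce the variance of a Monte Carlo cost
      # estimate by exploiting a correlated input (hypothetical reservoir inflow)
      # whose mean is known analytically.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 5_000
      inflow = rng.lognormal(mean=3.0, sigma=0.4, size=n)      # stochastic input
      known_inflow_mean = np.exp(3.0 + 0.4 ** 2 / 2)           # exact mean of the lognormal

      # Hypothetical portfolio cost: buying transfers is expensive when inflow is low.
      cost = 100.0 + 500.0 / inflow + rng.normal(0.0, 5.0, size=n)

      naive_estimate = cost.mean()

      # Control variate: subtract the centred correlated input scaled by the
      # optimal coefficient b* = Cov(cost, inflow) / Var(inflow).
      b_star = np.cov(cost, inflow)[0, 1] / inflow.var(ddof=1)
      corrected = cost - b_star * (inflow - known_inflow_mean)

      print(f"naive estimate : {naive_estimate:.2f}")
      print(f"CV estimate    : {corrected.mean():.2f}")
      print(f"variance ratio : {corrected.var(ddof=1) / cost.var(ddof=1):.2f}")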

  16. Rating of Dynamic Coefficient for Simple Beam Bridge Design on High-Speed Railways

    NASA Astrophysics Data System (ADS)

    Diachenko, Leonid; Benin, Andrey; Smirnov, Vladimir; Diachenko, Anastasia

    2018-06-01

    The aim of the work is to improve the methodology for the dynamic computation of simple beam spans under the impact of high-speed trains. Mathematical simulation utilizing numerical and analytical methods of structural mechanics is used in the research. The article analyses parameters of the effect of high-speed trains on simple beam spanning bridge structures and suggests a technique for determining the dynamic coefficient applied to the live load. Reliability of the proposed methodology is confirmed by results of numerical simulation of high-speed train passage over spans at different speeds. The proposed algorithm of dynamic computation is based on a connection between the maximum acceleration of the span in the resonance mode of vibrations and the main factors of the stress-strain state. The methodology allows determining maximum and also minimum values of the main internal forces in the structure, which makes it possible to perform endurance checks. It is noted that dynamic additions for the components of the stress-strain state (bending moments, transverse force, and vertical deflections) are different. This condition determines the necessity of a differentiated approach to the evaluation of dynamic coefficients when performing design verification for limit states of groups I and II. Practical importance: the methodology for determining the dynamic coefficients allows performing the dynamic calculation and determining the main internal forces in simple beam spans without numerical simulation and direct dynamic analysis, which significantly reduces the labour costs of design.

  17. An Automated DAKOTA and VULCAN-CFD Framework with Application to Supersonic Facility Nozzle Flowpath Optimization

    NASA Technical Reports Server (NTRS)

    Axdahl, Erik L.

    2015-01-01

    Removing human interaction from design processes by using automation may lead to gains in both productivity and design precision. This memorandum describes efforts to incorporate high fidelity numerical analysis tools into an automated framework and to apply that framework to applications of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework, with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.

  18. Progress Toward Affordable High Fidelity Combustion Simulations Using Filtered Density Functions for Hypersonic Flows in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Drozda, Tomasz G.; Quinlan, Jesse R.; Pisciuneri, Patrick H.; Yilmaz, S. Levent

    2012-01-01

    Significant progress has been made in the development of subgrid scale (SGS) closures based on a filtered density function (FDF) for large eddy simulations (LES) of turbulent reacting flows. The FDF is the counterpart of the probability density function (PDF) method, which has proven effective in Reynolds averaged simulations (RAS). However, while systematic progress is being made advancing the FDF models for relatively simple flows and lab-scale flames, the application of these methods in complex geometries and high speed, wall-bounded flows with shocks remains a challenge. The key difficulties are the significant computational cost associated with solving the FDF transport equation and numerically stiff finite rate chemistry. For LES/FDF methods to make a more significant impact in practical applications a pragmatic approach must be taken that significantly reduces the computational cost while maintaining high modeling fidelity. An example of one such ongoing effort is at the NASA Langley Research Center, where the first generation FDF models, namely the scalar filtered mass density function (SFMDF) are being implemented into VULCAN, a production-quality RAS and LES solver widely used for design of high speed propulsion flowpaths. This effort leverages internal and external collaborations to reduce the overall computational cost of high fidelity simulations in VULCAN by: implementing high order methods that allow reduction in the total number of computational cells without loss in accuracy; implementing first generation of high fidelity scalar PDF/FDF models applicable to high-speed compressible flows; coupling RAS/PDF and LES/FDF into a hybrid framework to efficiently and accurately model the effects of combustion in the vicinity of the walls; developing efficient Lagrangian particle tracking algorithms to support robust solutions of the FDF equations for high speed flows; and utilizing finite rate chemistry parametrization, such as flamelet models, to reduce the number of transported reactive species and remove numerical stiffness. This paper briefly introduces the SFMDF model (highlighting key benefits and challenges), and discusses particle tracking for flows with shocks, the hybrid coupled RAS/PDF and LES/FDF model, flamelet generated manifolds (FGM) model, and the Irregularly Portioned Lagrangian Monte Carlo Finite Difference (IPLMCFD) methodology for scalable simulation of high-speed reacting compressible flows.

  19. Impact of remote sensing upon the planning, management, and development of water resources

    NASA Technical Reports Server (NTRS)

    Loats, H. L.; Fowler, T. R.; Frech, S. L.

    1974-01-01

    A survey of the principal water resource users was conducted to determine the impact of new remote data streams on hydrologic computer models. The analysis of the responses and direct contact demonstrated that: (1) the majority of water resource effort of the type suitable to remote sensing inputs is conducted by major federal water resources agencies or through federally stimulated research, (2) the federal government develops most of the hydrologic models used in this effort; and (3) federal computer power is extensive. The computers, computer power, and hydrologic models in current use were determined.

  20. Slat Noise Simulations: Status and Challenges

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Lockard, David P.; Khorrami, Mehdi R.; Mineck, Raymond E.

    2011-01-01

    Noise radiation from the leading edge slat of a high-lift system is known to be an important component of aircraft noise during approach. NASA's Langley Research Center is engaged in a coordinated series of investigations combining high-fidelity numerical simulations and detailed wind tunnel measurements of a generic, unswept, 3-element, high-lift configuration. The goal of this effort is to provide a validated predictive capability that would enable identification of the dominant noise source mechanisms and, ultimately, help develop physics inspired concepts for reducing the far-field acoustic intensity. This paper provides a brief overview of the current status of the computational effort and describes new findings pertaining to the effects of the angle of attack on the aeroacoustics of the slat cove region. Finally, the interplay of the simulation campaign with the concurrently evolving development of a benchmark dataset for an international workshop on airframe noise is outlined.

  1. An opportunity cost model of subjective effort and task performance

    PubMed Central

    Kurzban, Robert; Duckworth, Angela; Kable, Joseph W.; Myers, Justus

    2013-01-01

    Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternate explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternate explanations both for the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across subdisciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternate models might be empirically distinguished. PMID:24304775

  2. An efficient method for hybrid density functional calculation with spin-orbit coupling

    NASA Astrophysics Data System (ADS)

    Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui

    2018-03-01

    In first-principles calculations, a hybrid functional is often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating the hybrid functional requires significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases the computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method using several examples and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.

  3. Computational Models of Anterior Cingulate Cortex: At the Crossroads between Prediction and Effort.

    PubMed

    Vassena, Eliana; Holroyd, Clay B; Alexander, William H

    2017-01-01

    In the last two decades the anterior cingulate cortex (ACC) has become one of the most investigated areas of the brain. Extensive neuroimaging evidence suggests countless functions for this region, ranging from conflict and error coding, to social cognition, pain and effortful control. In response to this burgeoning amount of data, a proliferation of computational models has tried to characterize the neurocognitive architecture of ACC. Early seminal models provided a computational explanation for a relatively circumscribed set of empirical findings, mainly accounting for EEG and fMRI evidence. More recent models have focused on ACC's contribution to effortful control. In parallel to these developments, several proposals attempted to explain within a single computational framework a wider variety of empirical findings that span different cognitive processes and experimental modalities. Here we critically evaluate these modeling attempts, highlighting the continued need to reconcile the array of disparate ACC observations within a coherent, unifying framework.

  4. Mapping Fishing Effort through AIS Data

    PubMed Central

    Natale, Fabrizio; Gibin, Maurizio; Alessandrini, Alfredo; Vespe, Michele; Paulrud, Anton

    2015-01-01

    Several research initiatives have been undertaken to map fishing effort at high spatial resolution using the Vessel Monitoring System (VMS). An alternative to the VMS is represented by the Automatic Identification System (AIS), which in the EU became compulsory in May 2014 for all fishing vessels of length above 15 meters. The aim of this paper is to assess the uptake of the AIS in the EU fishing fleet and the feasibility of producing a map of fishing effort with high spatial and temporal resolution at European scale. After analysing a large AIS dataset for the period January-August 2014 and covering most of the EU waters, we show that AIS was adopted by around 75% of EU fishing vessels above 15 meters of length. Using the Swedish fleet as a case study, we developed a method to identify fishing activity based on the analysis of individual vessels’ speed profiles and produce a high resolution map of fishing effort based on AIS data. The method was validated using detailed logbook data and proved to be sufficiently accurate and computationally efficient to identify fishing grounds and effort in the case of trawlers, which represent the largest portion of the EU fishing fleet above 15 meters of length. Issues still to be addressed before extending the exercise to the entire EU fleet are the assessment of coverage levels of the AIS data for all EU waters and the identification of fishing activity in the case of vessels other than trawlers. PMID:26098430
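
    The core of such a speed-profile method is simple: flag AIS positions whose speed over ground falls within a gear-specific towing band, then accumulate the associated time intervals on a spatial grid. The sketch below illustrates that idea with a few made-up records; the speed band, grid size, and record layout are assumptions rather than the authors' validated parameters.

      # Sketch of speed-profile-based fishing-effort mapping from AIS positions.
      import numpy as np

      # Hypothetical AIS records: vessel id, longitude, latitude, speed over ground
      # in knots, and time since the previous position report in hours.
      records = np.array([
          # id   lon    lat    sog   dt_h
          [  1, 18.20, 57.30,  3.1, 0.17],
          [  1, 18.22, 57.31,  3.4, 0.17],
          [  1, 18.30, 57.40, 10.5, 0.17],   # steaming, not fishing
          [  2, 19.05, 56.90,  2.8, 0.25],
      ])

      TRAWLING_SPEED = (2.0, 5.0)            # knots: assumed towing band for trawlers
      CELL = 0.05                            # grid resolution in degrees

      fishing = (records[:, 3] >= TRAWLING_SPEED[0]) & (records[:, 3] <= TRAWLING_SPEED[1])

      effort = {}                            # (lon_cell, lat_cell) -> fishing hours
      for lon, lat, dt_h in records[fishing][:, [1, 2, 4]]:
          cell = (round(lon / CELL) * CELL, round(lat / CELL) * CELL)
          effort[cell] = effort.get(cell, 0.0) + dt_h

      for cell, hours in sorted(effort.items()):
          print(f"cell {cell}: {hours:.2f} fishing hours")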

  5. Mapping Fishing Effort through AIS Data.

    PubMed

    Natale, Fabrizio; Gibin, Maurizio; Alessandrini, Alfredo; Vespe, Michele; Paulrud, Anton

    2015-01-01

    Several research initiatives have been undertaken to map fishing effort at high spatial resolution using the Vessel Monitoring System (VMS). An alternative to the VMS is represented by the Automatic Identification System (AIS), which in the EU became compulsory in May 2014 for all fishing vessels of length above 15 meters. The aim of this paper is to assess the uptake of the AIS in the EU fishing fleet and the feasibility of producing a map of fishing effort with high spatial and temporal resolution at European scale. After analysing a large AIS dataset for the period January-August 2014 and covering most of the EU waters, we show that AIS was adopted by around 75% of EU fishing vessels above 15 meters of length. Using the Swedish fleet as a case study, we developed a method to identify fishing activity based on the analysis of individual vessels' speed profiles and produce a high resolution map of fishing effort based on AIS data. The method was validated using detailed logbook data and proved to be sufficiently accurate and computationally efficient to identify fishing grounds and effort in the case of trawlers, which represent the largest portion of the EU fishing fleet above 15 meters of length. Issues still to be addressed before extending the exercise to the entire EU fleet are the assessment of coverage levels of the AIS data for all EU waters and the identification of fishing activity in the case of vessels other than trawlers.

  6. Validation of High-Fidelity CFD/CAA Framework for Launch Vehicle Acoustic Environment Simulation against Scale Model Test Data

    NASA Technical Reports Server (NTRS)

    Liever, Peter A.; West, Jeffrey S.

    2016-01-01

    A hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) modeling framework has been developed for launch vehicle liftoff acoustic environment predictions. The framework couples the existing highly-scalable NASA production CFD code, Loci/CHEM, with a high-order accurate discontinuous Galerkin solver developed in the same production framework, Loci/THRUST, to accurately resolve and propagate acoustic physics across the entire launch environment. Time-accurate, Hybrid RANS/LES CFD modeling is applied for predicting the acoustic generation physics at the plume source, and a high-order accurate unstructured discontinuous Galerkin (DG) method is employed to propagate acoustic waves away from the source across large distances using high-order accurate schemes. The DG solver is capable of solving 2nd, 3rd, and 4th order Euler solutions for non-linear, conservative acoustic field propagation. Initial application testing and validation has been carried out against high resolution acoustic data from the Ares Scale Model Acoustic Test (ASMAT) series to evaluate the capabilities and production readiness of the CFD/CAA system to resolve the observed spectrum of acoustic frequency content. This paper presents results from this validation and outlines efforts to mature and improve the computational simulation framework.

  7. Respiratory effort correction strategies to improve the reproducibility of lung expansion measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Kaifang; Reinhardt, Joseph M.; Christensen, Gary E.

    2013-12-15

    Purpose: Four-dimensional computed tomography (4DCT) can be used to make measurements of pulmonary function longitudinally. The sensitivity of such measurements to identify change depends on measurement uncertainty. Previously, intrasubject reproducibility of Jacobian-based measures of lung tissue expansion was studied in two repeat prior-RT 4DCT human acquisitions. Differences in respiratory effort such as breathing amplitude and frequency may affect longitudinal function assessment. In this study, the authors present normalization schemes that correct ventilation images for variations in respiratory effort and assess the reproducibility improvement after effort correction. Methods: Repeat 4DCT image data acquired within a short time interval from 24 patients prior to radiation therapy (RT) were used for this analysis. Using a tissue volume preserving deformable image registration algorithm, Jacobian ventilation maps in two scanning sessions were computed and compared on the same coordinate for reproducibility analysis. In addition to computing the ventilation maps from end expiration to end inspiration, the authors investigated effort normalization strategies using other intermediate inspiration phases based upon the principles of equivalent tidal volume (ETV) and equivalent lung volume (ELV). Scatter plots and the mean square error of the repeat ventilation maps and the Jacobian ratio map were generated for four conditions: no effort correction, global normalization, ETV, and ELV. In addition, a gamma pass rate was calculated from a modified gamma index evaluation between two ventilation maps, using acceptance criteria of 2 mm distance-to-agreement and 5% ventilation difference. Results: The pattern of regional pulmonary ventilation changes as lung volume changes. All effort correction strategies improved reproducibility when changes in respiratory effort were greater than 150 cc (p < 0.005 with regard to the gamma pass rate). Improvement of reproducibility was correlated with respiratory effort difference (R = 0.744 for ELV in the cohort with tidal volume difference greater than 100 cc). In general for all subjects, global normalization, ETV, and ELV significantly improved reproducibility compared to no effort correction (p = 0.009, 0.002, 0.005, respectively). When the tidal volume difference was small (less than 100 cc), none of the three effort correction strategies improved reproducibility significantly (p = 0.52, 0.46, 0.46, respectively). For the cohort (N = 13) with tidal volume difference greater than 100 cc, the average gamma pass rate improved from 57.3% before correction to 66.3% after global normalization, and 76.3% after ELV. ELV was found to be significantly better than global normalization (p = 0.04 for all subjects, and p = 0.003 for the cohort with tidal volume difference greater than 100 cc). Conclusions: All effort correction strategies improve the reproducibility of the authors' pulmonary ventilation measures, and the improvement of reproducibility is highly correlated with the changes in respiratory effort. ELV gives better results as the effort difference increases, followed by ETV, then global normalization. However, based on the spatial and temporal heterogeneity in the lung expansion rate, a single scaling factor (e.g., global normalization) appears to be less accurate for correcting the ventilation map when changes in respiratory effort are large.
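
    A gamma index of the kind used above combines a distance-to-agreement criterion with a value-difference criterion: each reference voxel passes if some nearby test voxel is simultaneously close in space (within 2 mm) and close in ventilation value (within 5%). The following simplified 2-D sketch computes a brute-force gamma pass rate on synthetic maps; it is illustrative only and is not the modified gamma evaluation or the 3-D Jacobian data from the study.

      # Simplified 2-D gamma-index pass rate between two ventilation-like maps.
      import numpy as np

      def gamma_pass_rate(ref, test, spacing_mm=1.0, dta_mm=2.0, diff_crit=0.05):
          """Brute-force gamma: for each reference voxel, search a small
          neighbourhood of the test map for the minimum combined distance/difference."""
          search = int(np.ceil(dta_mm / spacing_mm))
          ny, nx = ref.shape
          gamma = np.full(ref.shape, np.inf)
          for j in range(ny):
              for i in range(nx):
                  for dj in range(-search, search + 1):
                      for di in range(-search, search + 1):
                          jj, ii = j + dj, i + di
                          if not (0 <= jj < ny and 0 <= ii < nx):
                              continue
                          dist2 = ((dj * spacing_mm) ** 2 + (di * spacing_mm) ** 2) / dta_mm ** 2
                          diff2 = ((test[jj, ii] - ref[j, i]) / diff_crit) ** 2
                          gamma[j, i] = min(gamma[j, i], np.sqrt(dist2 + diff2))
          return float((gamma <= 1.0).mean())

      rng = np.random.default_rng(4)
      ref = 1.0 + 0.2 * rng.random((40, 40))            # hypothetical Jacobian ventilation map
      test = ref + rng.normal(0.0, 0.02, ref.shape)     # repeat scan with small differences
      print(f"gamma pass rate (2 mm / 5%): {gamma_pass_rate(ref, test):.1%}")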

  8. An Investigation of the Flow Physics of Acoustic Liners by Direct Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Watson, Willie R. (Technical Monitor); Tam, Christopher

    2004-01-01

    This report concentrates on the effort and status of work done on three-dimensional (3-D) simulation of a multi-hole resonator in an impedance tube. This work is coordinated with a parallel experimental effort to be carried out at the NASA Langley Research Center. The outline of this report is as follows: 1. Preliminary consideration. 2. Computation model. 3. Mesh design and parallel computing. 4. Visualization. 5. Status of computer code development.

  9. HPC Insights, Fall 2011

    DTIC Science & Technology

    2011-01-01

    Simulating Satellite Tracking Using Parallel Computing By Andrew Lindstrom ,University of Hawaii at Hilo — Mentors: Carl Holmberg, Maui High Performance...RDECOM) and his management team, RDECOM Deputy Director Gary Martin ; ARL Director John Miller; Communications- Electronics Research, Development...Saves Resources By Mike Knowles, ARL DSRC Site Lead, Lockheed Martin mode instead of full power down. The first phase of the EAS effort is an attempt

  10. Integrated Multidisciplinary Design of High Pressure Multistage Compressor Systems (la Conception integree des compresseurs multi-etage a haute performance)

    DTIC Science & Technology

    1998-09-01

    development ONERA and SNECMA and is described in [Nicoud, 91]. This methodology [ Karadimas , 1997]. The aim of all efforts method solves the Quasi-3D...computer power, 1994 2-11 BERTHILLIER, M., DUPONT, C., MONDAL, R., KARADIMAS , G. : New Ways for the Design the BARRAU, J.J. : Blade Forced Response

  11. Portent of Heine's Reciprocal Square Root Identity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohl, H W

    Precise efforts in theoretical astrophysics are needed to fully understand the mechanisms that govern the structure, stability, dynamics, formation, and evolution of differentially rotating stars. Direct computation of the physical attributes of a star can be facilitated by the use of highly compact azimuthal and separation angle Fourier formulations of the Green's functions for the linear partial differential equations of mathematical physics.
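
    The compact azimuthal Fourier formulations referred to above rest on what is commonly called Heine's reciprocal square-root identity and its consequence for the three-dimensional Laplace Green's function in cylindrical coordinates. As a reminder, quoted from memory and stated only for orientation (not taken from this record), the expansions have the form

      \frac{1}{\sqrt{z-\cos\psi}} = \frac{\sqrt{2}}{\pi}\sum_{m=-\infty}^{\infty} Q_{m-1/2}(z)\,e^{im\psi},
      \qquad
      \frac{1}{|\mathbf{x}-\mathbf{x}'|} = \frac{1}{\pi\sqrt{R R'}}\sum_{m=-\infty}^{\infty} Q_{m-1/2}(\chi)\,e^{im(\phi-\phi')},
      \qquad
      \chi = \frac{R^{2}+R'^{2}+(z-z')^{2}}{2RR'},

    where the Q_{m-1/2} are Legendre functions of the second kind of odd-half-integer degree.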

  12. Constructing Scientific Applications from Heterogeneous Resources

    NASA Technical Reports Server (NTRS)

    Schichting, Richard D.

    1995-01-01

    A new model for high-performance scientific applications in which such applications are implemented as heterogeneous distributed programs or, equivalently, meta-computations, is investigated. The specific focus of this grant was a collaborative effort with researchers at NASA and the University of Toledo to test and improve Schooner, a software interconnection system, and to explore the benefits of increased user interaction with existing scientific applications.

  13. Final Scientific/Technical Report for "Enabling Exascale Hardware and Software Design through Scalable System Virtualization"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinda, Peter August

    2015-03-17

    This report describes the activities, findings, and products of the Northwestern University component of the "Enabling Exascale Hardware and Software Design through Scalable System Virtualization" project. The purpose of this project has been to extend the state of the art of systems software for high-end computing (HEC) platforms, and to use systems software to better enable the evaluation of potential future HEC platforms, for example exascale platforms. Such platforms, and their systems software, have the goal of providing scientific computation at new scales, thus enabling new research in the physical sciences and engineering. Over time, the innovations in systems software for such platforms also become applicable to more widely used computing clusters, data centers, and clouds. This was a five-institution project, centered on the Palacios virtual machine monitor (VMM) systems software, a project begun at Northwestern, and originally developed in a previous collaboration between Northwestern University and the University of New Mexico. In this project, Northwestern (including via our subcontract to the University of Pittsburgh) contributed to the continued development of Palacios, along with other team members. We took the leadership role in (1) continued extension of support for emerging Intel and AMD hardware, (2) integration and performance enhancement of overlay networking, (3) connectivity with architectural simulation, (4) binary translation, and (5) support for modern Non-Uniform Memory Access (NUMA) hosts and guests. We also took a supporting role in support for specialized hardware for I/O virtualization, profiling, configurability, and integration with configuration tools. The efforts we led (1-5) were largely successful and executed as expected, with code and papers resulting from them. The project demonstrated the feasibility of a virtualization layer for HEC computing, similar to such layers for cloud or datacenter computing. For effort (3), although a prototype connecting Palacios with the GEM5 architectural simulator was demonstrated, our conclusion was that such a platform was less useful for design space exploration than anticipated due to the inherent complexity of the connection between the instruction set architecture level and the microarchitectural level. For effort (4), we found that a code injection approach proved to be more fruitful. The results of our efforts are publicly available in the open source Palacios codebase and published papers, all of which are available from the project web site, v3vee.org. Palacios is currently one of the two codebases (the other being Sandia's Kitten lightweight kernel) that underlies the node operating system for the DOE Hobbes Project, one of two projects tasked with building a systems software prototype for the national exascale computing effort.

  14. Decoding-Accuracy-Based Sequential Dimensionality Reduction of Spatio-Temporal Neural Activities

    NASA Astrophysics Data System (ADS)

    Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu

    Performance of a brain machine interface (BMI) critically depends on selection of input data because information embedded in the neural activities is highly redundant. In addition, properly selected input data with a reduced dimension leads to improvement of decoding generalization ability and decrease of computational efforts, both of which are significant advantages for the clinical applications. In the present paper, we propose an algorithm of sequential dimensionality reduction (SDR) that effectively extracts motor/sensory related spatio-temporal neural activities. The algorithm gradually reduces input data dimension by dropping neural data spatio-temporally so as not to undermine the decoding accuracy as far as possible. Support vector machine (SVM) was used as the decoder, and tone-induced neural activities in rat auditory cortices were decoded into the test tone frequencies. SDR reduced the input data dimension to a quarter and significantly improved the accuracy of decoding of novel data. Moreover, spatio-temporal neural activity patterns selected by SDR resulted in significantly higher accuracy than high spike rate patterns or conventionally used spatial patterns. These results suggest that the proposed algorithm can improve the generalization ability and decrease the computational effort of decoding.
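
    The idea of decoding-accuracy-based reduction can be illustrated with a greedy backward-elimination loop: repeatedly drop the input whose removal hurts cross-validated decoding accuracy the least, and stop when accuracy would degrade. The sketch below uses scikit-learn's linear SVM on synthetic data as a stand-in for the cortical recordings and decoder configuration of the study; the tolerance and data sizes are assumptions.

      # Greedy sketch of decoding-accuracy-based sequential dimensionality reduction.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      n_trials, n_features = 120, 20
      y = rng.integers(0, 2, n_trials)                 # two "tone" classes
      X = rng.normal(size=(n_trials, n_features))
      X[:, :4] += 1.5 * y[:, None]                     # only the first 4 features are informative

      def cv_accuracy(features):
          return cross_val_score(SVC(kernel="linear"), X[:, features], y, cv=5).mean()

      features = list(range(n_features))
      baseline = cv_accuracy(features)
      # Drop one feature at a time as long as decoding accuracy does not degrade
      # by more than a small tolerance.
      while len(features) > 1:
          scores = [(cv_accuracy([f for f in features if f != cand]), cand) for cand in features]
          best_score, dropped = max(scores)
          if best_score < baseline - 0.02:
              break
          features.remove(dropped)
          baseline = max(baseline, best_score)

      print(f"kept {len(features)} of {n_features} features, CV accuracy {baseline:.2f}")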

  15. TIPTOPbase: the Iron Project and the Opacity Project Atomic Database

    NASA Astrophysics Data System (ADS)

    Mendoza, Claudio; Nahar, Sultana; Pradhan, Anil; Seaton, Michael; Zeippen, Claude

    2001-05-01

    The Opacity Project, the IRON Project, and the RmaX Network (The Opacity Project Team, Vols. 1-2, IOPP, Bristol, 1995, 1996; Hummer et al., Astron. Astrophys. 279, 298, 1993) are international computational efforts concerned with the production of high quality atomic data for astrophysical applications. Research groups from Canada, France, Germany, the UK, the USA, and Venezuela are involved. Extensive data sets containing accurate energy levels, f-values, A-values, photoionisation cross sections, collision strengths, recombination rates, and opacities have been computed for cosmically abundant elements using state-of-the-art atomic physics codes. Their volume, completeness, and overall accuracy are presently unmatched in the field of laboratory astrophysics. Some of the data sets have been available since 1993 from a public on-line database service referred to as TOPbase (Cunto et al., Astron. Astrophys. 275, L5, 1993; http://cdsweb.u-strasbg.fr/OP.html at CDS, France, and http://heasarc.gsfc.nasa.gov/topbase at NASA, USA). We are currently involved in a major effort to scale the existing database services to develop a robust platform for the high-profile dissemination of atomic data to the scientific community within the next 12 months. (Partial support from the NSF and NASA is acknowledged.)

  16. Overview of the TriBITS Lifecycle Model: Lean/Agile Software Lifecycle Model for Research-based Computational Science and Engineering Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe A; Heroux, Dr. Michael A; Willenbring, James

    2012-01-01

    Software lifecycles are becoming an increasingly important issue for computational science & engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process--respecting the competing needs of research vs. production--cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.

  17. A preliminary design study for a cosmic X-ray spectrometer

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The results are described of theoretical and experimental investigations aimed at the development of a curved crystal cosmic X-ray spectrometer to be used at the focal plane of the large orbiting X-ray telescope on the third High Energy Astronomical Observatory. The effort was concentrated on the development of spectrometer concepts and their evaluation by theoretical analysis, computer simulation, and laboratory testing with breadboard arrangements of crystals and detectors. In addition, a computer-controlled facility for precision testing and evaluation of crystals in air and vacuum was constructed. A summary of research objectives and results is included.

  18. A 3D-PIV System for Gas Turbine Applications

    NASA Astrophysics Data System (ADS)

    Acharya, Sumanta

    2002-08-01

    Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48-processor high-performance computing cluster. This report details the hardware that was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section after the summary describes the computer cluster, its setup, and the cluster hardware, and presents highlights of the research efforts enabled since installation of the cluster.

  19. The Human Genome Project: Information access, management, and regulation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McInerney, J.D.; Micikas, L.B.

    The Human Genome Project is a large, internationally coordinated effort in biological research directed at creating a detailed map of human DNA. This report describes information access, management, and regulation for the project. The project led to the development of an instructional module titled The Human Genome Project: Biology, Computers, and Privacy, designed for use in high school biology classes. The module consists of print materials and both Macintosh and Windows versions of related computer software. Appendix A contains a copy of the print materials and discs containing the two versions of the software.

  20. Automated Estimation Of Software-Development Costs

    NASA Technical Reports Server (NTRS)

    Roush, George B.; Reini, William

    1993-01-01

    COSTMODL is an automated software-development estimation tool. Yields a significant reduction in the risk of cost overruns and failed projects. Accepts a description of the software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of a defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.
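
    COSTMODL's internal equations are not given in the abstract. As a purely illustrative stand-in for an effort/schedule/staffing estimator of this kind, the sketch below uses the classic basic-COCOMO organic-mode relations (effort = 2.4·KLOC^1.05 person-months, schedule = 2.5·effort^0.38 months); these are not claimed to be COSTMODL's actual model.

```python
# Illustrative basic-COCOMO (organic mode) effort/schedule estimate.
# A stand-in for tools of this kind, not COSTMODL's internal equations.
def estimate(kloc: float) -> dict:
    effort = 2.4 * kloc ** 1.05        # person-months
    schedule = 2.5 * effort ** 0.38    # calendar months
    staffing = effort / schedule       # average full-time staff
    return {"effort_pm": round(effort, 1),
            "schedule_months": round(schedule, 1),
            "avg_staff": round(staffing, 1)}

print(estimate(32.0))                  # e.g. a hypothetical 32 KLOC product
```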

  1. A Roadmap for caGrid, an Enterprise Grid Architecture for Biomedical Research

    PubMed Central

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Hong, Neil Chue

    2012-01-01

    caGrid is a middleware system which combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support development of interoperable data and analytical resources and federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG™) program. The program was established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from the biomedical research, Grid computing, and high-performance computing communities. PMID:18560123

  2. A roadmap for caGrid, an enterprise Grid architecture for biomedical research.

    PubMed

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Chue Hong, Neil

    2008-01-01

    caGrid is a middleware system which combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support development of interoperable data and analytical resources and federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG) program. The program was established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from the biomedical research, Grid computing, and high-performance computing communities.

  3. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    NASA Technical Reports Server (NTRS)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids, and control the rotordynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven-year effort was established in 1990 by NASA's Office of Aeronautics, Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  4. Hierarchical parallelisation of functional renormalisation group calculations - hp-fRG

    NASA Astrophysics Data System (ADS)

    Rohe, Daniel

    2016-10-01

    The functional renormalisation group (fRG) has evolved into a versatile tool in condensed matter theory for studying important aspects of correlated electron systems. Practical applications of the method often involve a high numerical effort, motivating the question of to what extent High Performance Computing (HPC) can leverage the approach. In this work we report on a multi-level parallelisation of the underlying computational machinery and show that this can speed up the code by several orders of magnitude. This in turn can extend the applicability of the method to otherwise inaccessible cases. We exploit three levels of parallelisation: distributed computing by means of Message Passing (MPI), shared-memory computing using OpenMP, and vectorisation by means of SIMD units (single instruction, multiple data). Results are provided for two distinct HPC platforms, namely the IBM-based BlueGene/Q system JUQUEEN and an Intel Sandy-Bridge-based development cluster. We discuss how certain issues and obstacles were overcome in the course of adapting the code. Most importantly, we conclude that this vast improvement can actually be accomplished by introducing only moderate changes to the code, such that this strategy may serve as a guideline for other researchers to likewise improve the efficiency of their codes.
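
    Below is a hedged sketch of two of the three parallelisation levels described above: distributed-memory work splitting via MPI (using mpi4py) and SIMD-style vectorisation via NumPy array operations. The shared-memory OpenMP level and the actual fRG flow equations are omitted; the per-frequency "update" is a placeholder computation, not the authors' code.

```python
# Sketch: MPI rank-level work splitting plus vectorised per-rank computation.
# The workload below is a placeholder, not an fRG vertex update.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_freq = 1024
my_freqs = np.arange(rank, n_freq, size)          # round-robin frequency split

# vectorised (SIMD-friendly) placeholder update over this rank's frequencies
local = np.sum(np.cos(my_freqs[:, None] * np.linspace(0.0, 1.0, 256)) ** 2, axis=1)

total = comm.allreduce(local.sum(), op=MPI.SUM)   # combine partial results
if rank == 0:
    print(f"accumulated over {n_freq} frequencies on {size} ranks: {total:.3f}")
```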

  5. Examination of Wildland Fire Spread at Small Scales Using Direct Numerical Simulations and High-Speed Laser Diagnostics

    NASA Astrophysics Data System (ADS)

    Wimer, N. T.; Mackoweicki, A. S.; Poludnenko, A. Y.; Hoffman, C.; Daily, J. W.; Rieker, G. B.; Hamlington, P.

    2017-12-01

    Results are presented from a joint computational and experimental research effort focused on understanding and characterizing wildland fire spread at small scales (roughly 1m-1mm) using direct numerical simulations (DNS) with chemical kinetics mechanisms that have been calibrated using data from high-speed laser diagnostics. The simulations are intended to directly resolve, with high physical accuracy, all small-scale fluid dynamic and chemical processes relevant to wildland fire spread. The high fidelity of the simulations is enabled by the calibration and validation of DNS sub-models using data from high-speed laser diagnostics. These diagnostics have the capability to measure temperature and chemical species concentrations, and are used here to characterize evaporation and pyrolysis processes in wildland fuels subjected to an external radiation source. The chemical kinetics code CHEMKIN-PRO is used to study and reduce complex reaction mechanisms for water removal, pyrolysis, and gas phase combustion during solid biomass burning. Simulations are then presented for a gaseous pool fire coupled with the resulting multi-step chemical reaction mechanisms, and the results are connected to the fundamental structure and spread of wildland fires. It is anticipated that the combined computational and experimental approach of this research effort will provide unprecedented access to information about chemical species, temperature, and turbulence during the entire pyrolysis, evaporation, ignition, and combustion process, thereby permitting more complete understanding of the physics that must be represented by coarse-scale numerical models of wildland fire spread.

  6. Reference Computational Meshing Strategy for Computational Fluid Dynamics Simulation of Departure from Nucleate Boiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    The objective of this effort is to establish a strategy and process for generation of suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
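
    For reference, a minimal sketch of the Grid Convergence Index evaluation mentioned above, following the standard three-grid Roache/ASME procedure; the solution values below are made-up placeholders, not results from the rod-bundle study.

```python
# Minimal Grid Convergence Index (GCI) sketch for three uniformly refined grids.
# Placeholder inputs; fs = 1.25 is the usual safety factor for three-grid studies.
import math

def gci(f_fine, f_med, f_coarse, r=2.0, fs=1.25):
    """Observed order of accuracy and fine-grid GCI."""
    p = math.log(abs((f_coarse - f_med) / (f_med - f_fine))) / math.log(r)
    eps = abs((f_med - f_fine) / f_fine)      # relative change, fine vs medium grid
    return p, fs * eps / (r ** p - 1.0)

p, gci_fine = gci(f_fine=1.002, f_med=1.010, f_coarse=1.041)
print(f"observed order ~{p:.2f}, fine-grid uncertainty ~{100 * gci_fine:.2f}%")
```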

  7. High Tech Programmers in Low-Income Communities: Creating a Computer Culture in a Community Technology Center

    NASA Astrophysics Data System (ADS)

    Kafai, Yasmin B.; Peppler, Kylie A.; Chiu, Grace M.

    For the last twenty years, issues of the digital divide have driven efforts around the world to address the lack of access to computers and the Internet, pertinent and language-appropriate content, and technical skills in low-income communities (Schuler & Day, 2004a, b). The title of our paper makes reference to a milestone publication (Schon, Sanyal, & Mitchell, 1998) that showcased some of the early work and thinking in this area. Schon, Sanyal and Mitchell's book included an article outlining the Computer Clubhouse, a type of community technology center model developed to create opportunities for youth in low-income communities to become creators and designers of technologies (1998). The model has been very successful in scaling up, with over 110 Computer Clubhouses now in existence worldwide.

  8. Computational modeling of brain tumors: discrete, continuum or hybrid?

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Deisboeck, Thomas S.

    In spite of all efforts, patients diagnosed with highly malignant brain tumors (gliomas) continue to face a grim prognosis. Achieving significant therapeutic advances will also require a more detailed quantitative understanding of the dynamic interactions among tumor cells, and between these cells and their biological microenvironment. Data-driven computational brain tumor models have the potential to provide experimental tumor biologists with such quantitative and cost-efficient tools to generate and test hypotheses on tumor progression, and to infer fundamental operating principles governing bidirectional signal propagation in multicellular cancer systems. This review highlights the modeling objectives of and challenges with developing such in silico brain tumor models by outlining two distinct computational approaches: discrete and continuum, each with representative examples. Future directions of this integrative computational neuro-oncology field, such as hybrid multiscale multiresolution modeling, are discussed.

  9. Computational modeling of brain tumors: discrete, continuum or hybrid?

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Deisboeck, Thomas S.

    2008-04-01

    In spite of all efforts, patients diagnosed with highly malignant brain tumors (gliomas) continue to face a grim prognosis. Achieving significant therapeutic advances will also require a more detailed quantitative understanding of the dynamic interactions among tumor cells, and between these cells and their biological microenvironment. Data-driven computational brain tumor models have the potential to provide experimental tumor biologists with such quantitative and cost-efficient tools to generate and test hypotheses on tumor progression, and to infer fundamental operating principles governing bidirectional signal propagation in multicellular cancer systems. This review highlights the modeling objectives of and challenges with developing such in silico brain tumor models by outlining two distinct computational approaches: discrete and continuum, each with representative examples. Future directions of this integrative computational neuro-oncology field, such as hybrid multiscale multiresolution modeling, are discussed.

  10. System support software for the Space Ultrareliable Modular Computer (SUMC)

    NASA Technical Reports Server (NTRS)

    Hill, T. E.; Hintze, G. C.; Hodges, B. C.; Austin, F. A.; Buckles, B. P.; Curran, R. T.; Lackey, J. D.; Payne, R. E.

    1974-01-01

    The highly transportable programming system designed and implemented to support the development of software for the Space Ultrareliable Modular Computer (SUMC) is described. The SUMC system support software consists of program modules called processors. The initial set of processors consists of the supervisor, the general purpose assembler for SUMC instruction and microcode input, linkage editors, an instruction level simulator, a microcode grid print processor, and user oriented utility programs. A FORTRAN 4 compiler is undergoing development. The design facilitates the addition of new processors with minimum effort and provides the user quasi-host independence on the ground-based operational software development computer. Additional capability is provided to accommodate variations in the SUMC architecture without consequent major modifications in the initial processors.

  11. Fresnel-region fields and antenna noise-temperature calculations for advanced microwave sounding units

    NASA Technical Reports Server (NTRS)

    Schmidt, R. F.

    1982-01-01

    A transition from the antenna noise temperature formulation for extended noise sources in the far-field or Fraunhofer-region of an antenna to one of the intermediate near field or Fresnel-region is discussed. The effort is directed toward microwave antenna simulations and high-speed digital computer analysis of radiometric sounding units used to obtain water vapor and temperature profiles of the atmosphere. Fresnel-region fields are compared at various distances from the aperture. The antenna noise temperature contribution of an annular noise source is computed in the Fresnel-region (D²/16λ) for a 13.2 cm diameter offset-paraboloid aperture at 60 GHz. The time-average Poynting vector is used to effect the computation.

  12. ATLAS and LHC computing on CRAY

    NASA Astrophysics Data System (ADS)

    Sciacca, F. G.; Haug, S.; ATLAS Collaboration

    2017-10-01

    Access to and exploitation of large-scale computing resources, such as those offered by general-purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potential due to their size and multidisciplinary usage, and potential gains due to economies of scale. Technical solutions, performance, expected return and future plans are discussed.

  13. Climate Science Performance, Data and Productivity on Titan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, Benjamin W; Worley, Patrick H; Gaddis, Abigail L

    2015-01-01

    Climate Science models are flagship codes for the largest of high performance computing (HPC) resources, both in visibility, with the newly launched Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) effort, and in terms of significant fractions of system usage. The performance of the DOE ACME model is captured with application level timers and examined through a sizeable run archive. Performance and variability of compute, queue time and ancillary services are examined. As climate science advances in its use of HPC resources, there has been an increase in the human and data systems required to achieve program goals. A description of current workflow processes (hardware, software, human) and planned automation of the workflow, along with historical and projected data-in-motion and data-at-rest usage, is detailed. The combination of these two topics motivates a description of future systems requirements for DOE Climate Modeling efforts, focusing on the growth of data storage and network and disk bandwidth required to handle data at an acceptable rate.

  14. LOX/LH2 vane pump for auxiliary propulsion systems

    NASA Technical Reports Server (NTRS)

    Hemminger, J. A.; Ulbricht, T. E.

    1985-01-01

    Positive displacement pumps offer potential efficiency advantages over centrifugal pumps for future low-thrust space missions. Low-flow-rate applications, such as space station auxiliary propulsion or dedicated low-thrust orbit transfer vehicles, are typical of missions where low flow and high head rise challenge centrifugal pumps. The positive displacement vane pump for pumping LOX and LH2 is investigated. This effort has included: (1) a testing program in which pump performance was investigated for differing pump clearances and differing pump materials while pumping LN2, LOX, and LH2; and (2) an analysis effort in which a comprehensive pump performance analysis computer code was developed and exercised. An overview of the theoretical framework of the performance analysis computer code is presented, along with a summary of analysis results. Experimental results are presented for the pump operating in liquid nitrogen. Included are data on the effects of pump clearance, speed, and pressure rise on pump performance. Pump suction performance is also presented.

  15. A statistical framework to predict functional non-coding regions in the human genome through integrated analysis of annotation data.

    PubMed

    Lu, Qiongshi; Hu, Yiming; Sun, Jiehuan; Cheng, Yuwei; Cheung, Kei-Hoi; Zhao, Hongyu

    2015-05-27

    Identifying functional regions in the human genome is a major goal in human genetics. Great efforts have been made to functionally annotate the human genome either through computational predictions, such as genomic conservation, or high-throughput experiments, such as the ENCODE project. These efforts have resulted in a rich collection of functional annotation data of diverse types that need to be jointly analyzed for integrated interpretation and annotation. Here we present GenoCanyon, a whole-genome annotation method that performs unsupervised statistical learning using 22 computational and experimental annotations thereby inferring the functional potential of each position in the human genome. With GenoCanyon, we are able to predict many of the known functional regions. The ability of predicting functional regions as well as its generalizable statistical framework makes GenoCanyon a unique and powerful tool for whole-genome annotation. The GenoCanyon web server is available at http://genocanyon.med.yale.edu.
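
    The statistical details of GenoCanyon are not reproduced in the abstract above. The sketch below illustrates only the general idea of unsupervised two-class learning over per-position annotation features, scoring each position by the posterior probability of a "functional" mixture component; the mixture model, the synthetic feature values, and the component-labelling rule are illustrative assumptions, not the published method.

```python
# Illustrative unsupervised two-class scoring over annotation features.
# NOT the GenoCanyon model; data and labelling heuristic are placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
annotations = rng.normal(size=(10_000, 22))       # positions x 22 annotation tracks (synthetic)

gm = GaussianMixture(n_components=2, random_state=0).fit(annotations)
post = gm.predict_proba(annotations)
functional = int(np.argmax(gm.means_.mean(axis=1)))  # call the higher-signal component "functional"
scores = post[:, functional]                          # per-position functional potential
print(scores[:5])
```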

  16. Advanced 3D Characterization and Reconstruction of Reactor Materials FY16 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fromm, Bradley; Hauch, Benjamin; Sridharan, Kumar

    2016-12-01

    A coordinated effort to link advanced materials characterization methods and computational modeling approaches is critical to future success in understanding and predicting the behavior of reactor materials that operate at extreme conditions. The difficulty and expense of working with nuclear materials have inhibited the use of modern characterization techniques on this class of materials. Likewise, mesoscale simulation efforts have been impeded by insufficient experimental data for initialization and validation of the computer models. The objective of this research is to develop methods to integrate advanced materials characterization techniques developed for reactor materials with state-of-the-art mesoscale modeling and simulation tools. Research to develop broad-ion-beam sample preparation, high-resolution electron backscatter diffraction, and digital microstructure reconstruction techniques, and methods for integrating these techniques into mesoscale modeling tools, is detailed. Results for both irradiated and un-irradiated reactor materials are presented for FY14 - FY16, and final remarks are provided.

  17. Imaging and Analysis of Void-defects in Solder Joints Formed in Reduced Gravity using High-Resolution Computed Tomography

    NASA Technical Reports Server (NTRS)

    Easton, John W.; Struk, Peter M.; Rotella, Anthony

    2008-01-01

    As a part of efforts to develop an electronics repair capability for long duration space missions, techniques and materials for soldering components on a circuit board in reduced gravity must be developed. This paper presents results from testing solder joint formation in low gravity on a NASA Reduced Gravity Research Aircraft. The results presented include joints formed using eutectic tin-lead solder and one of the following fluxes: (1) a no-clean flux core, (2) a rosin flux core, and (3) a solid solder wire with external liquid no-clean flux. The solder joints are analyzed with a computed tomography (CT) technique which imaged the interior of the entire solder joint. This replaced an earlier technique that required the solder joint to be destructively ground down revealing a single plane which was subsequently analyzed. The CT analysis technique is described and results presented with implications for future testing as well as implications for the overall electronics repair effort discussed.

  18. Quasiparticle band structure of rocksalt-CdO determined using maximally localized Wannier functions.

    PubMed

    Dixit, H; Lamoen, D; Partoens, B

    2013-01-23

    CdO in the rocksalt structure is an indirect band gap semiconductor. Thus, in order to determine its band gap one needs to calculate the complete band structure. However, in practice, the exact evaluation of the quasiparticle band structure for the large number of k-points which constitute the different symmetry lines in the Brillouin zone can be an extremely demanding task compared to the standard density functional theory (DFT) calculation. In this paper we report the full quasiparticle band structure of CdO using a plane-wave pseudopotential approach. In order to reduce the computational effort and time, we make use of maximally localized Wannier functions (MLWFs). The MLWFs offer a highly accurate method for interpolation of the DFT or GW band structure from a coarse k-point mesh in the irreducible Brillouin zone, resulting in a much reduced computational effort. The present paper discusses the technical details of the scheme along with the results obtained for the quasiparticle band gap and the electron effective mass.

  19. Prosocial apathy for helping others when effort is required

    PubMed Central

    Lockwood, Patricia L.; Hamonet, Mathilde; Zhang, Samuel H.; Ratnavel, Anya; Salmony, Florentine U.; Husain, Masud; Apps, Matthew A. J.

    2017-01-01

    Summary: Prosocial acts – those that are costly to ourselves but benefit others – are a central component of human co-existence [1-3]. While the financial and moral costs of prosocial behaviours are well understood [4-6], everyday prosocial acts do not typically come at such costs. Instead, they require effort. Here, using computational modelling of an effort-based task we show that people are prosocially apathetic. They are less willing to choose to initiate highly effortful acts that benefit others compared to benefitting themselves. Moreover, even when choosing to initiate effortful prosocial acts, people show superficiality, exerting less force into actions that benefit others than themselves. These findings replicated, were present when the other was anonymous or not, and when choices were made to earn rewards or avoid losses. Importantly, the least prosocially motivated people had higher subclinical levels of psychopathy and social apathy. Thus, although people sometimes ‘help out’, they are less motivated to benefit others and sometimes ‘superficially prosocial’, which may characterise everyday prosociality and its disruption in social disorders. PMID:28819649

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Wooley; Herbert S. Lin

    This study is the first comprehensive NRC study that suggests a high-level intellectual structure for Federal agencies for supporting work at the biology/computing interface. The report seeks to establish the intellectual legitimacy of a fundamentally cross-disciplinary collaboration between biologists and computer scientists. That is, while some universities are increasingly favorable to research at the intersection, life science researchers at other universities are strongly impeded in their efforts to collaborate. This report addresses these impediments and describes proven strategies for overcoming them. An important feature of the report is the use of well-documented examples that describe clearly to individuals not trained in computer science the value and usage of computing across the biological sciences, from genes and proteins to networks and pathways, from organelles to cells, and from individual organisms to populations and ecosystems. It is hoped that these examples will be useful to students in the life sciences to motivate (continued) study in computer science that will enable them to be more facile users of computing in their future biological studies.

  1. Mirror neurons and imitation: a computationally guided review.

    PubMed

    Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael

    2006-04-01

    Neurophysiology reveals the properties of individual mirror neurons in the macaque while brain imaging reveals the presence of 'mirror systems' (not individual neurons) in the human. Current conceptual models attribute high level functions such as action understanding, imitation, and language to mirror neurons. However, only the first of these three functions is well-developed in monkeys. We thus distinguish current opinions (conceptual models) on mirror neuron function from more detailed computational models. We assess the strengths and weaknesses of current computational models in addressing the data and speculations on mirror neurons (macaque) and mirror systems (human). In particular, our mirror neuron system (MNS), mental state inference (MSI) and modular selection and identification for control (MOSAIC) models are analyzed in more detail. Conceptual models often overlook the computational requirements for posited functions, while too many computational models adopt the erroneous hypothesis that mirror neurons are interchangeable with imitation ability. Our meta-analysis underlines the gap between conceptual and computational models and points out the research effort required from both sides to reduce this gap.

  2. Computational efficiency for the surface renewal method

    NASA Astrophysics Data System (ADS)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and they were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications, demonstrating that simple modifications can dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
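
    A small illustration of the kind of vectorised lag computation that makes surface-renewal processing fast: structure functions of a high-frequency scalar series evaluated for many lags via array slicing rather than per-sample loops. The synthetic data, lag range, and moment orders below are placeholders, not the authors' published algorithms.

```python
# Vectorised lagged-difference moments for a high-frequency scalar series.
# Synthetic data; only illustrates the slicing trick, not the full SR workflow.
import numpy as np

rng = np.random.default_rng(3)
temp = np.cumsum(rng.normal(size=18_000)) * 1e-3    # ~30 min of 10 Hz data (synthetic)

def structure_functions(x, lags, orders=(2, 3, 5)):
    out = {}
    for r in lags:
        d = x[r:] - x[:-r]                           # all lagged differences at once
        out[r] = {n: float(np.mean(d ** n)) for n in orders}
    return out

sf = structure_functions(temp, lags=range(1, 50))
print(sf[10])
```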

  3. A Survey of Techniques for Approximate Computing

    DOE PAGES

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality against the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.
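
    One classic technique covered by such surveys is loop perforation; the toy example below skips a fraction of iterations to trade a small accuracy loss for a large reduction in effort. The workload and skip rate are arbitrary illustrations, not taken from the survey.

```python
# Toy loop-perforation example: sample every 4th element instead of all of them.
import math

def mean_exact(xs):
    return sum(math.sin(x) for x in xs) / len(xs)

def mean_perforated(xs, stride=4):
    sampled = xs[::stride]                 # process only a subset of iterations
    return sum(math.sin(x) for x in sampled) / len(sampled)

xs = [i * 0.001 for i in range(1_000_000)]
print(mean_exact(xs), mean_perforated(xs))  # similar results, roughly 4x less work
```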

  4. Automated vector selection of SIVQ and parallel computing integration MATLAB™: Innovations supporting large-scale and high-throughput image analysis studies.

    PubMed

    Cheng, Jerome; Hipp, Jason; Monaco, James; Lucas, David R; Madabhushi, Anant; Balis, Ulysses J

    2011-01-01

    Spatially invariant vector quantization (SIVQ) is a texture and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the use of the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Use of the above-mentioned automated vector selection process was demonstrated in two use cases: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an additional effort directed towards attaining high-throughput capability for the SIVQ algorithm, we demonstrated its successful incorporation with the MATrix LABoratory (MATLAB™) application interface. The SIVQ algorithm is suitable for automated vector selection settings and high-throughput computation.
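
    A hedged sketch of the vector-selection loop described above: every candidate vector is scored against user-marked positive and negative regions and ranked by ROC AUC. The actual SIVQ ring-vector similarity is replaced by a placeholder dot-product score, and the data are synthetic.

```python
# Rank candidate vectors by AUC against positive/negative ground-truth regions.
# The match_score function is a placeholder for SIVQ's ring-vector similarity.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
labels = np.r_[np.ones(50), np.zeros(50)]              # positive / negative ground truth
region_features = rng.normal(size=(100, 16))           # synthetic region descriptors

def match_score(candidate, regions):
    return regions @ candidate                         # placeholder similarity

candidates = rng.normal(size=(200, 16))                # vectors from the selection area (VSA)
aucs = [roc_auc_score(labels, match_score(c, region_features)) for c in candidates]
best = int(np.argmax(aucs))
print(f"best candidate {best} with AUC {max(aucs):.3f}")
```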

  5. MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank Mueller

    2009-02-05

    MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (forum to address scalable technology for runtime and operating systems) and HECRTF (high-end computing revitalization task force) activities by providing a modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, ease-of-use, and provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high availability without single points of failure and without single points of control.

  6. Unsupervised Learning Through Randomized Algorithms for High-Volume High-Velocity Data (ULTRA-HV).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinar, Ali; Kolda, Tamara G.; Carlberg, Kevin Thomas

    Through long-term investments in computing, algorithms, facilities, and instrumentation, DOE is an established leader in massive-scale, high-fidelity simulations, as well as science-leading experimentation. In both cases, DOE is generating more data than it can analyze, and the problem is intensifying quickly. The need for advanced algorithms that can automatically convert the abundance of data into a wealth of useful information by discovering hidden structures is well recognized. Such efforts, however, are hindered by the massive volume of the data and its high velocity. Here, the challenge is developing unsupervised learning methods to discover hidden structure in high-volume, high-velocity data.

  7. 34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2012-07-01 2012-07-01 false How does the Secretary compute maintenance of effort in...

  8. 34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2013-07-01 2013-07-01 false How does the Secretary compute maintenance of effort in...

  9. 34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2011-07-01 2011-07-01 false How does the Secretary compute maintenance of effort in...

  10. 34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2014-07-01 2014-07-01 false How does the Secretary compute maintenance of effort in...

  11. 34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2010-07-01 2010-07-01 false How does the Secretary compute maintenance of effort in...

  12. Integrating computation into the undergraduate curriculum: A vision and guidelines for future developments

    NASA Astrophysics Data System (ADS)

    Chonacky, Norman; Winch, David

    2008-04-01

    There is substantial evidence of a need to make computation an integral part of the undergraduate physics curriculum. This need is consistent with data from surveys in both the academy and the workplace, and has been reinforced by two years of exploratory efforts by a group of physics faculty for whom computation is a special interest. We have examined past and current efforts at reform and a variety of strategic, organizational, and institutional issues involved in any attempt to broadly transform existing practice. We propose a set of guidelines for development based on this past work and discuss our vision of computationally integrated physics.

  13. Measuring and Modeling Change in Examinee Effort on Low-Stakes Tests across Testing Occasions

    ERIC Educational Resources Information Center

    Sessoms, John; Finney, Sara J.

    2015-01-01

    Because schools worldwide use low-stakes tests to make important decisions, value-added indices computed from test scores must accurately reflect student learning, which requires equal test-taking effort across testing occasions. Evaluating change in effort assumes effort is measured equivalently across occasions. We evaluated the longitudinal…

  14. Microcosm to Cosmos: The Growth of a Divisional Computer Network

    PubMed Central

    Johannes, R.S.; Kahane, Stephen N.

    1987-01-01

    In 1982, we reported the deployment of a network of microcomputers in the Division of Gastroenterology[1]. This network was based upon Corvus Systems Omninet®. Corvus was one of the very first firms to offer networking products for PCs. This PC development occurred coincident with the planning phase of the Johns Hopkins Hospital's multisegment Ethernet project. A rich communications infrastructure is now in place at the Johns Hopkins Medical Institutions[2,3]. Shortly after the hospital development under the direction of the Operational and Clinical Systems Division (OCS) began, the Johns Hopkins School of Medicine began an Integrated Academic Information Management Systems (IAIMS) planning effort. We now present a model that uses aspects of all three planning efforts (PC networks, Hospital Information Systems & IAIMS) to build a divisional computing facility. This facility is viewed as a terminal leaf on the institutional network diagram. Nevertheless, it is noteworthy that this leaf, the divisional resource in the Division of Gastroenterology (GASNET), has a rich substructure and functionality of its own, perhaps revealing the recursive nature of network architecture. The current status, design and function of the GASNET computational facility is discussed. Among the major positive aspects of this design are the sharing and centralization of MS-DOS software and the high-speed DOS/Unix link that makes available most of our institution's computing resources.

  15. First-principles data-driven discovery of transition metal oxides for artificial photosynthesis

    NASA Astrophysics Data System (ADS)

    Yan, Qimin

    We develop a first-principles data-driven approach for rapid identification of transition metal oxide (TMO) light absorbers and photocatalysts for artificial photosynthesis using the Materials Project. Initially focusing on Cr, V, and Mn-based ternary TMOs in the database, we design a broadly-applicable multiple-layer screening workflow automating density functional theory (DFT) and hybrid functional calculations of bulk and surface electronic and magnetic structures. We further assess the electrochemical stability of TMOs in aqueous environments from computed Pourbaix diagrams. Several promising earth-abundant low band-gap TMO compounds with desirable band edge energies and electrochemical stability are identified by our computational efforts and then synergistically evaluated using high-throughput synthesis and photoelectrochemical screening techniques by our experimental collaborators at Caltech. Our joint theory-experiment effort has successfully identified new earth-abundant copper and manganese vanadate complex oxides that meet highly demanding requirements for photoanodes, substantially expanding the known space of such materials. By integrating theory and experiment, we validate our approach and develop important new insights into structure-property relationships for TMOs for oxygen evolution photocatalysts, paving the way for use of first-principles data-driven techniques in future applications. This work is supported by the Materials Project Predictive Modeling Center and the Joint Center for Artificial Photosynthesis through the U.S. Department of Energy, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract No. DE-AC02-05CH11231. Computational resources also provided by the Department of Energy through the National Energy Supercomputing Center.

  16. The Repeated Replacement Method: A Pure Lagrangian Meshfree Method for Computational Fluid Dynamics

    PubMed Central

    Walker, Wade A.

    2012-01-01

    In this paper we describe the repeated replacement method (RRM), a new meshfree method for computational fluid dynamics (CFD). RRM simulates fluid flow by modeling compressible fluids’ tendency to evolve towards a state of constant density, velocity, and pressure. To evolve a fluid flow simulation forward in time, RRM repeatedly “chops out” fluid from active areas and replaces it with new “flattened” fluid cells with the same mass, momentum, and energy. We call the new cells “flattened” because we give them constant density, velocity, and pressure, even though the chopped-out fluid may have had gradients in these primitive variables. RRM adaptively chooses the sizes and locations of the areas it chops out and replaces. It creates more and smaller new cells in areas of high gradient, and fewer and larger new cells in areas of lower gradient. This naturally leads to an adaptive level of accuracy, where more computational effort is spent on active areas of the fluid, and less effort is spent on inactive areas. We show that for common test problems, RRM produces results similar to other high-resolution CFD methods, while using a very different mathematical framework. RRM does not use Riemann solvers, flux or slope limiters, a mesh, or a stencil, and it operates in a purely Lagrangian mode. RRM also does not evaluate numerical derivatives, does not integrate equations of motion, and does not solve systems of equations. PMID:22866175
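
    The conservation bookkeeping behind the "flattened" replacement cells can be illustrated with a simplified 1-D ideal-gas example: the new constant-state cell carries the same total mass, momentum, and energy as the fluid it replaces. This sketch shows only that averaging step under an assumed ideal-gas closure, not the full RRM scheme (no geometry, adaptivity, or time evolution).

```python
# Simplified 1-D ideal-gas "flattening": conserve mass, momentum, and energy
# when replacing several cells with one constant-state cell. Illustration only.
import numpy as np

GAMMA = 1.4  # assumed ideal-gas ratio of specific heats

def flatten(rho, u, p, vol):
    """Return constant (rho, u, p) with the same totals as the input cells."""
    mass = np.sum(rho * vol)
    mom = np.sum(rho * u * vol)
    energy = np.sum((p / (GAMMA - 1) + 0.5 * rho * u**2) * vol)
    v_tot = np.sum(vol)
    rho_new = mass / v_tot
    u_new = mom / mass
    p_new = (GAMMA - 1) * (energy / v_tot - 0.5 * rho_new * u_new**2)
    return rho_new, u_new, p_new

print(flatten(np.array([1.0, 0.8]), np.array([0.2, 0.5]),
              np.array([1.0, 0.9]), np.array([0.5, 0.5])))
```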

  17. Computational Infrastructure for Geodynamics (CIG)

    NASA Astrophysics Data System (ADS)

    Gurnis, M.; Kellogg, L. H.; Bloxham, J.; Hager, B. H.; Spiegelman, M.; Willett, S.; Wysession, M. E.; Aivazis, M.

    2004-12-01

    Solid earth geophysicists have a long tradition of writing scientific software to address a wide range of problems. In particular, computer simulations came into wide use in geophysics during the decade after the plate tectonic revolution. Solution schemes and numerical algorithms that developed in other areas of science, most notably engineering, fluid mechanics, and physics, were adapted with considerable success to geophysics. This software has largely been the product of individual efforts and although this approach has proven successful, its strength for solving problems of interest is now starting to show its limitations as we try to share codes and algorithms or when we want to recombine codes in novel ways to produce new science. With funding from the NSF, the US community has embarked on a Computational Infrastructure for Geodynamics (CIG) that will develop, support, and disseminate community-accessible software for the greater geodynamics community from model developers to end-users. The software is being developed for problems involving mantle and core dynamics, crustal and earthquake dynamics, magma migration, seismology, and other related topics. With a high level of community participation, CIG is leveraging state-of-the-art scientific computing into a suite of open-source tools and codes. The infrastructure that we are now starting to develop will consist of: (a) a coordinated effort to develop reusable, well-documented and open-source geodynamics software; (b) the basic building blocks - an infrastructure layer - of software by which state-of-the-art modeling codes can be quickly assembled; (c) extension of existing software frameworks to interlink multiple codes and data through a superstructure layer; (d) strategic partnerships with the larger world of computational science and geoinformatics; and (e) specialized training and workshops for both the geodynamics and broader Earth science communities. The CIG initiative has already started to leverage and develop long-term strategic partnerships with open source development efforts within the larger thrusts of scientific computing and geoinformatics. These strategic partnerships are essential as the frontier has moved into multi-scale and multi-physics problems in which many investigators now want to use simulation software for data interpretation, data assimilation, and hypothesis testing.

  18. Loss of Productivity Due to Neck/Shoulder Symptoms and Hand/Arm Symptoms: Results from the PROMO-Study

    PubMed Central

    IJmker, Stefan; Blatter, Birgitte M.; de Korte, Elsbeth M.

    2007-01-01

    Introduction The objective of the present study is to describe the extent of productivity loss among computer workers with neck/shoulder symptoms and hand/arm symptoms, and to examine associations between pain intensity, various physical and psychosocial factors and productivity loss in computer workers with neck/shoulder and hand/arm symptoms. Methods A cross-sectional design was used. The study population consisted of 654 computer workers with neck/shoulder or hand/arm symptoms from five different companies. Descriptive statistics were used to describe the occurrence of self-reported productivity loss. Logistic regression analyses were used to examine the associations. Results In 26% of all the cases reporting symptoms, productivity loss was involved, the most often in cases reporting both symptoms (36%). Productivity loss involved sickness absence in 11% of the arm/hand cases, 32% of the neck/shoulder cases and 43% of the cases reporting both symptoms. The multivariate analyses showed statistically significant odds ratios for pain intensity (OR: 1.26; CI: 1.12–1.41), for high effort/no low reward (OR: 2.26; CI: 1.24–4.12), for high effort/low reward (OR: 1.95; CI: 1.09–3.50), and for low job satisfaction (OR: 3.10; CI: 1.44–6.67). Physical activity in leisure time, full-time work and overcommitment were not associated with productivity loss. Conclusion In most computer workers with neck/shoulder symptoms or hand/arm symptoms productivity loss derives from a decreased performance at work and not from sickness absence. Favorable psychosocial work characteristics might prevent productivity loss in symptomatic workers. PMID:17636455

  19. Role of information systems in controlling costs: the electronic medical record (EMR) and the high-performance computing and communications (HPCC) efforts

    NASA Astrophysics Data System (ADS)

    Kun, Luis G.

    1994-12-01

    On October 18, 1991, the IEEE-USA produced an entity statement which endorsed the vital importance of the High Performance Computer and Communications Act of 1991 (HPCC) and called for the rapid implementation of all its elements. Efforts are now underway to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the HPCC, and the so-called `Patient Card'. Multiple legislative initiatives which address these and related information technology issues are pending in Congress. Clearly, a national information system will greatly affect the way health care delivery is provided to the United States public. Timely and reliable information represents a critical element in any initiative to reform the health care system as well as to protect and improve the health of every person. Appropriately used, information technologies offer a vital means of improving the quality of patient care, increasing access to universal care and lowering overall costs within a national health care program. Health care reform legislation should reflect increased budgetary support and a legal mandate for the creation of a national health care information system by: (1) constructing a National Information Infrastructure; (2) building a Computer Based Patient Record System; (3) bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; and (4) utilizing Government (e.g. DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs and accelerate technology transfer to address health care issues.

  20. Materials Genome Initiative

    NASA Technical Reports Server (NTRS)

    Vickers, John

    2015-01-01

    The Materials Genome Initiative (MGI) project element is a cross-Center effort that is focused on the integration of computational tools to simulate manufacturing processes and materials behavior. These computational simulations will be utilized to gain understanding of processes and materials behavior to accelerate process development and certification to more efficiently integrate new materials in existing NASA projects and to lead to the design of new materials for improved performance. This NASA effort looks to collaborate with efforts at other government agencies and universities working under the national MGI. MGI plans to develop integrated computational/experimental/processing methodologies for accelerating discovery and insertion of materials to satisfy NASA's unique mission demands. The challenges include validated design tools that incorporate materials properties, processes, and design requirements; and materials process control to rapidly mature emerging manufacturing methods and develop certified manufacturing processes.

  1. Advanced dendritic web growth development and development of single-crystal silicon dendritic ribbon and high-efficiency solar cell program

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.

    1986-01-01

    Efforts to demonstrate that the dendritic web technology is ready for commercial use by the end of 1986 continue. A commercial readiness goal involves improvements to crystal growth furnace throughput to demonstrate an area growth rate of greater than 15 sq cm/min while simultaneously growing 10 meters or more of ribbon under conditions of continuous melt replenishment. Continuous means that the silicon melt is being replenished at the same rate that it is being consumed by ribbon growth, so that the melt level remains constant. Efforts continue on the computer thermal modeling required to define high-speed, low-stress, continuous growth configurations; on the study of convective effects in the molten silicon and growth furnace cover gas; on furnace component modifications; on web quality assessments; and on experimental growth activities.

  2. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    PubMed

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
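
    As an illustration of the "allocate-when-needed" paradigm described above, the following Python sketch shows how a wrapper might request compute nodes only for the lifetime of a single Spark job and release them afterwards. The helper functions and the master URL are hypothetical placeholders, not Orion's actual interface.

        import subprocess

        def allocate_nodes(n_nodes):
            # Placeholder for a site-specific request to the resource manager;
            # here we simply return hypothetical hostnames.
            return ["cn%04d" % i for i in range(n_nodes)]

        def release_nodes(nodes):
            # Placeholder for handing the nodes back to the scheduler.
            pass

        def run_bigdata_job(app_jar, input_path, n_nodes=4):
            # Allocate-when-needed: nodes are held only for the lifetime of one
            # job, so they are never idly occupied between submissions.
            nodes = allocate_nodes(n_nodes)
            try:
                # A transient Spark environment would be brought up on the
                # allocated nodes; the master URL below is hypothetical.
                subprocess.run(["spark-submit",
                                "--master", "spark://%s:7077" % nodes[0],
                                app_jar, input_path], check=True)
            finally:
                release_nodes(nodes)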

  3. Toward A Simulation-Based Tool for the Treatment of Vocal Fold Paralysis

    PubMed Central

    Mittal, Rajat; Zheng, Xudong; Bhardwaj, Rajneesh; Seo, Jung Hee; Xue, Qian; Bielamowicz, Steven

    2011-01-01

    Advances in high-performance computing are enabling a new generation of software tools that employ computational modeling for surgical planning. Surgical management of laryngeal paralysis is one area where such computational tools could have a significant impact. The current paper describes a comprehensive effort to develop a software tool for planning medialization laryngoplasty where a prosthetic implant is inserted into the larynx in order to medialize the paralyzed vocal fold (VF). While this is one of the most common procedures used to restore voice in patients with VF paralysis, it has a relatively high revision rate, and the tool being developed is expected to improve surgical outcomes. This software tool models the biomechanics of airflow-induced vibration in the human larynx and incorporates sophisticated approaches for modeling the turbulent laryngeal flow, the complex dynamics of the VFs, as well as the production of voiced sound. The current paper describes the key elements of the modeling approach, presents computational results that demonstrate the utility of the approach and also describes some of the limitations and challenges. PMID:21556320

  4. Computational Thermodynamics and Kinetics-Based ICME Framework for High-Temperature Shape Memory Alloys

    NASA Astrophysics Data System (ADS)

    Arróyave, Raymundo; Talapatra, Anjana; Johnson, Luke; Singh, Navdeep; Ma, Ji; Karaman, Ibrahim

    2015-11-01

    Over the last decade, interest in the development of High-Temperature Shape Memory Alloys (HTSMAs) for solid-state actuation has increased dramatically as key applications in the aerospace and automotive industry demand actuation temperatures well above those of conventional SMAs. Most of the research to date has focused on establishing the (forward) connections between chemistry, processing, (micro)structure, properties, and performance. Much less work has been dedicated to the development of frameworks capable of addressing the inverse problem of establishing necessary chemistry and processing schedules to achieve specific performance goals. Integrated Computational Materials Engineering (ICME) has emerged as a powerful framework to address this problem, although it has yet to be applied to the development of HTSMAs. In this paper, the contributions of computational thermodynamics and kinetics to ICME of HTSMAs are described. Some representative examples of the use of computational thermodynamics and kinetics to understand the phase stability and microstructural evolution in HTSMAs are discussed. Some very recent efforts at combining both to assist in the design of HTSMAs and limitations to the full implementation of ICME frameworks for HTSMA development are presented.

  5. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    NASA Astrophysics Data System (ADS)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.

    2017-02-01

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show that up to an 8.5x improvement at the selected kernel level with the first approach, and up to a 50% improvement in total simulated time with the latter, were achieved for the demonstration cases and target HPC systems employed.

  6. High-performance implementation of Chebyshev filter diagonalization for interior eigenvalue computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pieper, Andreas; Kreutzer, Moritz; Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de

    2016-11-15

    We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion, it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.
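
    The following NumPy sketch illustrates the basic Chebyshev filter diagonalization idea described above: a filter polynomial is built from the Jackson-damped Chebyshev expansion of a window function over the target interval, applied to a block of random vectors through the three-term recurrence, and followed by a Rayleigh-Ritz step. It is a small dense-matrix illustration under assumed parameters, not the paper's high-performance implementation.

        import numpy as np

        def chebfd_interior_eigs(A, window, n_vecs=24, degree=200, seed=0):
            # Hedged sketch on a small dense symmetric matrix A; a production code
            # would use a sparse matvec and Lanczos-based spectral bounds.
            rng = np.random.default_rng(seed)
            n = A.shape[0]

            # Spectral bounds and linear map of the spectrum onto [-1, 1]
            evals = np.linalg.eigvalsh(A)
            lmin, lmax = evals[0], evals[-1]
            c, e = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)
            a, b = (window[0] - c) / e, (window[1] - c) / e

            # Chebyshev moments of the window (indicator) function, Jackson-damped
            theta_a, theta_b = np.arccos(a), np.arccos(b)   # theta_a > theta_b
            k = np.arange(1, degree + 1)
            coef = np.empty(degree + 1)
            coef[0] = (theta_a - theta_b) / np.pi
            coef[1:] = 2.0 * (np.sin(k * theta_a) - np.sin(k * theta_b)) / (np.pi * k)
            jackson = ((degree - k + 1) * np.cos(np.pi * k / (degree + 1))
                       + np.sin(np.pi * k / (degree + 1)) / np.tan(np.pi / (degree + 1)))
            coef[1:] *= jackson / (degree + 1)

            # Apply the filter polynomial p(A) to a block of random vectors using
            # the recurrence T_0 = V, T_1 = As V, T_k = 2 As T_{k-1} - T_{k-2}
            As = (A - c * np.eye(n)) / e
            V = rng.standard_normal((n, n_vecs))
            T_prev, T_curr = V, As @ V
            W = coef[0] * T_prev + coef[1] * T_curr
            for j in range(2, degree + 1):
                T_prev, T_curr = T_curr, 2.0 * As @ T_curr - T_prev
                W += coef[j] * T_curr

            # Rayleigh-Ritz in the filtered subspace; keep pairs inside the window
            Q, _ = np.linalg.qr(W)
            ritz_vals, Y = np.linalg.eigh(Q.T @ A @ Q)
            inside = (ritz_vals >= window[0]) & (ritz_vals <= window[1])
            return ritz_vals[inside], (Q @ Y)[:, inside]

        # Example: interior eigenpairs of a random symmetric test matrix
        rng = np.random.default_rng(1)
        M = rng.standard_normal((400, 400))
        M = 0.5 * (M + M.T)
        vals, vecs = chebfd_interior_eigs(M, window=(-0.5, 0.5))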

  7. Early BHs: simulations and observations

    NASA Astrophysics Data System (ADS)

    Cappelluti, Nico; di-Matteo, Tiziana; Schawinski, Kevin; Fragos, Tassos

    We report recent investigations in the field of Early Black Holes. We summarize recent theoretical and observational efforts to understand how Black Holes formed and eventually evolved into Super Massive Black Holes at high-z. This paper makes use of state-of-the-art computer simulations and multiwavelength surveys. Although not conclusive, we present results and hypotheses that pose exciting challenges to modern astrophysics and to future facilities.

  8. The Lag Model, a Turbulence Model for Wall Bounded Flows Including Separation

    NASA Technical Reports Server (NTRS)

    Olsen, Michael E.; Coakley, Thomas J.; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A new class of turbulence model is described for wall bounded, high Reynolds number flows. A specific turbulence model is demonstrated, with results for favorable and adverse pressure gradient flowfields. Separation predictions are as good as or better than those of either the Spalart-Allmaras or SST models, do not require specification of wall distance, and have similar or reduced computational effort compared with these models.

  9. Advanced Concepts Theory Annual Report 1983.

    DTIC Science & Technology

    1984-05-18

    ...variety of theoretical models, tools, and computational strategies to understand, guide, and predict the behavior of high brightness, laboratory x-ray... theoretical models must treat hard and soft x-ray emission from different electron configurations with K, L, and M shells, and they must include... theoretical effort has been devoted to elucidating the effects of opacity on the numerical results... basis for comprehending the trends which appear in the...

  10. Multiphysics Thrust Chamber Modeling for Nuclear Thermal Propulsion

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Cheng, Gary; Chen, Yen-Sen

    2006-01-01

    The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation. A two-pronged approach is employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of heat transfer on thrust performance. Preliminary results on both aspects are presented.

  11. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever-larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  12. Resource Balancing Control Allocation

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Bodson, Marc

    2010-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l(infinity) norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l(infinity) algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.
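
    A minimal sketch of the min-max allocation idea described above is given below: minimizing the largest actuator deflection as a fraction of its range, subject to producing the commanded moments, can be posed as a linear program and handed to an off-the-shelf LP solver. The matrix and vector names are illustrative, and the paper's own formulation also includes an l1 tracking-error term.

        import numpy as np
        from scipy.optimize import linprog

        def min_max_allocation(B, d, u_max):
            # Decision vector x = [u_1 .. u_n, t]; minimize t with |u_i| <= t * u_max_i
            # and exact moment tracking B u = d.  (Illustrative names throughout.)
            m, n = B.shape
            c = np.zeros(n + 1)
            c[-1] = 1.0
            A_ub = np.zeros((2 * n, n + 1))
            A_ub[:n, :n] = np.eye(n)
            A_ub[:n, -1] = -u_max       #  u_i - u_max_i * t <= 0
            A_ub[n:, :n] = -np.eye(n)
            A_ub[n:, -1] = -u_max       # -u_i - u_max_i * t <= 0
            A_eq = np.hstack([B, np.zeros((m, 1))])
            bounds = [(-um, um) for um in u_max] + [(0.0, None)]
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
                          A_eq=A_eq, b_eq=d, bounds=bounds, method="highs")
            return res.x[:n], res.x[-1]   # commands, worst-case fraction of range

        # Example: five actuators producing three moments
        rng = np.random.default_rng(0)
        B = rng.standard_normal((3, 5))
        u, frac = min_max_allocation(B, d=np.array([0.4, -0.2, 0.1]), u_max=np.ones(5))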

  13. A CFD Database for Airfoils and Wings at Post-Stall Angles of Attack

    NASA Technical Reports Server (NTRS)

    Petrilli, Justin; Paul, Ryan; Gopalarathnam, Ashok; Frink, Neal T.

    2013-01-01

    This paper presents selected results from an ongoing effort to develop an aerodynamic database from Reynolds-Averaged Navier-Stokes (RANS) computational analysis of airfoils and wings at stall and post-stall angles of attack. The data obtained from this effort will be used for validation and refinement of a low-order post-stall prediction method developed at NCSU, and to fill existing gaps in high angle of attack data in the literature. Such data could have potential applications in post-stall flight dynamics, helicopter aerodynamics and wind turbine aerodynamics. An overview of the NASA TetrUSS CFD package used for the RANS computational approach is presented. Detailed results for three airfoils are presented to compare their stall and post-stall behavior. The results for finite wings at stall and post-stall conditions focus on the effects of taper-ratio and sweep angle, with particular attention to whether the sectional flows can be approximated using two-dimensional flow over a stalled airfoil. While this approximation seems reasonable for unswept wings even at post-stall conditions, significant spanwise flow on stalled swept wings precludes the use of two-dimensional data to model sectional flows on swept wings. Thus, further effort is needed in low-order aerodynamic modeling of swept wings at stalled conditions.

  14. Tabletop computed lighting for practical digital photography.

    PubMed

    Mohan, Ankit; Bailey, Reynold; Waite, Jonathan; Tumblin, Jack; Grimm, Cindy; Bodenheimer, Bobby

    2007-01-01

    We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. We place the object and a computer-steered moving-head spotlight inside a simple foam-core enclosure and use a camera to record photos as the light scans the box interior. Optimization, guided by interactive user sketching, selects a small set of these photos whose weighted sum best matches the user-defined target sketch. Unlike previous image-based relighting efforts, our method requires only a single area light source, yet it can achieve high-resolution light positioning to avoid multiple sharp shadows. A reduced version uses only a handheld light and may be suitable for battery-powered field photography equipment that fits into a backpack.
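
    The photo-selection step described above can be illustrated with a simple non-negative least-squares fit: weights are computed so that a weighted sum of the scanned-light photos approximates the user's target sketch, and only the few strongest lights are kept. This is a hedged sketch with illustrative variable names, not the authors' implementation.

        import numpy as np
        from scipy.optimize import nnls

        def select_lights(photos, target, k=4):
            # photos: (n_pixels, n_photos) column-stacked images recorded while the
            # spotlight scanned the enclosure; target: (n_pixels,) user sketch.
            w, _ = nnls(photos, target)               # non-negative blending weights
            keep = np.argsort(w)[::-1][:k]            # indices of the k strongest lights
            w_k, _ = nnls(photos[:, keep], target)    # re-fit using only the kept photos
            return keep, w_k, photos[:, keep] @ w_k   # selection, weights, composite

        # Example with synthetic data: 64x64 images under 50 scanned light positions
        rng = np.random.default_rng(0)
        basis = rng.random((64 * 64, 50))
        sketch = basis[:, [3, 17, 41]] @ np.array([0.6, 0.3, 0.8])
        picked, weights, composite = select_lights(basis, sketch, k=3)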

  15. Understanding Slat Noise Sources

    NASA Technical Reports Server (NTRS)

    Khorrami, Medhi R.

    2003-01-01

    Model-scale aeroacoustic tests of large civil transports point to the leading-edge slat as a dominant high-lift noise source in the low- to mid-frequencies during aircraft approach and landing. Using generic multi-element high-lift models, complementary experimental and numerical tests were carefully planned and executed at NASA in order to isolate slat noise sources and the underlying noise generation mechanisms. In this paper, a brief overview of the supporting computational effort undertaken at NASA Langley Research Center is provided. Both tonal and broadband aspects of slat noise are discussed. Recent gains in predicting a slat's far-field acoustic noise, current shortcomings of numerical simulations, and other remaining open issues are presented. Finally, an example of the ever-expanding role of computational simulations in noise reduction studies is also given.

  16. Center for Computational Structures Technology

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Perry, Ferman W.

    1995-01-01

    The Center for Computational Structures Technology (CST) is intended to serve as a focal point for the diverse CST research activities. The CST activities include the use of numerical simulation and artificial intelligence methods in modeling, analysis, sensitivity studies, and optimization of flight-vehicle structures. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The key elements of the Center are: (1) conducting innovative research on advanced topics of CST; (2) acting as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); (3) strong collaboration with NASA scientists and researchers from universities and other government laboratories; and (4) rapid dissemination of CST to industry, through integration of industrial personnel into the ongoing research efforts.

  17. Hypersonic Shock Wave Computations Using the Generalized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Agarwal, Ramesh; Chen, Rui; Cheremisin, Felix G.

    2006-11-01

    Hypersonic shock structure in diatomic gases is computed by solving the Generalized Boltzmann Equation (GBE), where the internal and translational degrees of freedom are considered in the framework of quantum and classical mechanics respectively [1]. The computational framework available for the standard Boltzmann equation [2] is extended by including both the rotational and vibrational degrees of freedom in the GBE. There are two main difficulties encountered in computation of high Mach number flows of diatomic gases with internal degrees of freedom: (1) a large velocity domain is needed for accurate numerical description of the distribution function resulting in enormous computational effort in calculation of the collision integral, and (2) about 50 energy levels are needed for accurate representation of the rotational spectrum of the gas. Our methodology addresses these problems, and as a result the efficiency of calculations has increased by several orders of magnitude. The code has been validated by computing the shock structure in Nitrogen for Mach numbers up to 25 including the translational and rotational degrees of freedom. [1] Beylich, A., ``An Interlaced System for Nitrogen Gas,'' Proc. of CECAM Workshop, ENS de Lyon, France, 2000. [2] Cheremisin, F., ``Solution of the Boltzmann Kinetic Equation for High Speed Flows of a Rarefied Gas,'' Proc. of the 24th Int. Symp. on Rarefied Gas Dynamics, Bari, Italy, 2004.

  18. 34 CFR 403.185 - How does the Secretary compute maintenance of effort in the event of a waiver?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    34 CFR 403.185 (Title 34, Education, Vol. 3, revised as of 2010-07-01), Regulations of the Offices of the Department of Education, Vocational and Applied Technology Education Program, "What Financial Conditions Must Be Met by a State?": How does the Secretary compute maintenance of effort in the event of a waiver?

  19. Large-Scale NASA Science Applications on the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Brooks, Walter

    2005-01-01

    Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.

  20. Message Passing and Shared Address Space Parallelism on an SMP Cluster

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Singh, Jaswinder P.; Oliker, Leonid; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Currently, message passing (MP) and shared address space (SAS) are the two leading parallel programming paradigms. MP has been standardized with MPI, and is the more common and mature approach; however, code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of and the programming effort required for six applications under both programming models on a 32-processor PC-SMP cluster, a platform that is becoming increasingly attractive for high-end scientific computing. Our application suite consists of codes that typically do not exhibit scalable performance under shared-memory programming due to their high communication-to-computation ratios and/or complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications, while being competitive for the others. A hybrid MPI+SAS strategy shows only a small performance advantage over pure MPI in some cases. Finally, improved implementations of two MPI collective operations on PC-SMP clusters are presented.

  1. A community effort to protect genomic data sharing, collaboration and outsourcing.

    PubMed

    Wang, Shuang; Jiang, Xiaoqian; Tang, Haixu; Wang, Xiaofeng; Bu, Diyue; Carey, Knox; Dyke, Stephanie Om; Fox, Dov; Jiang, Chao; Lauter, Kristin; Malin, Bradley; Sofia, Heidi; Telenti, Amalio; Wang, Lei; Wang, Wenhao; Ohno-Machado, Lucila

    2017-01-01

    The human genome can reveal sensitive information and is potentially re-identifiable, which raises privacy and security concerns about sharing such data on wide scales. In 2016, we organized the third Critical Assessment of Data Privacy and Protection competition as a community effort to bring together biomedical informaticists, computer privacy and security researchers, and scholars in ethical, legal, and social implications (ELSI) to assess the latest advances on privacy-preserving techniques for protecting human genomic data. Teams were asked to develop novel protection methods for emerging genome privacy challenges in three scenarios: Track (1) data sharing through the Beacon service of the Global Alliance for Genomics and Health. Track (2) collaborative discovery of similar genomes between two institutions; and Track (3) data outsourcing to public cloud services. The latter two tracks represent continuing themes from our 2015 competition, while the former was new and a response to a recently established vulnerability. The winning strategy for Track 1 mitigated the privacy risk by hiding approximately 11% of the variation in the database while permitting around 160,000 queries, a significant improvement over the baseline. The winning strategies in Tracks 2 and 3 showed significant progress over the previous competition by achieving multiple orders of magnitude performance improvement in terms of computational runtime and memory requirements. The outcomes suggest that applying highly optimized privacy-preserving and secure computation techniques to safeguard genomic data sharing and analysis is useful. However, the results also indicate that further efforts are needed to refine these techniques into practical solutions.

  2. Computational Fluid Dynamics Technology for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2003-01-01

    Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state of the art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.

  3. Multidisciplinary Design Technology Development: A Comparative Investigation of Integrated Aerospace Vehicle Design Tools

    NASA Technical Reports Server (NTRS)

    Renaud, John E.; Batill, Stephen M.; Brockman, Jay B.

    1999-01-01

    This research effort is a joint program between the Departments of Aerospace and Mechanical Engineering and the Computer Science and Engineering Department at the University of Notre Dame. The purpose of the project was to develop a framework and systematic methodology to facilitate the application of Multidisciplinary Design Optimization (MDO) to a diverse class of system design problems. For all practical aerospace systems, the design of a system is a complex sequence of events which integrates the activities of a variety of discipline "experts" and their associated "tools". The development, archiving and exchange of information between these individual experts is central to the design task and it is this information which provides the basis for these experts to make coordinated design decisions (i.e., compromises and trade-offs) - resulting in the final product design. Grant efforts focused on developing and evaluating frameworks for effective design coordination within an MDO environment. Central to these research efforts was the concept that the individual discipline "expert", using the most appropriate "tools" available and the most complete description of the system should be empowered to have the greatest impact on the design decisions and final design. This means that the overall process must be highly interactive and efficiently conducted if the resulting design is to be developed in a manner consistent with cost and time requirements. The methods developed as part of this research effort include: extensions to a sensitivity-based Concurrent Subspace Optimization (CSSO) MDO algorithm; the development of a neural network response surface based CSSO-MDO algorithm; and the integration of distributed computing and process scheduling into the MDO environment. This report overviews research efforts in each of these focus areas. A complete bibliography of research produced with support of this grant is attached.

  4. Recombination of open-f-shell tungsten ions

    NASA Astrophysics Data System (ADS)

    Krantz, C.; Badnell, N. R.; Müller, A.; Schippers, S.; Wolf, A.

    2017-03-01

    We review experimental and theoretical efforts aimed at a detailed understanding of the recombination of electrons with highly charged tungsten ions characterised by an open 4f sub-shell. Highly charged tungsten occurs as a plasma contaminant in ITER-like tokamak experiments, where it acts as an unwanted cooling agent. Modelling of the charge state populations in a plasma requires reliable thermal rate coefficients for charge-changing electron collisions. The electron recombination of medium-charged tungsten species with open 4f sub-shells is especially challenging to compute reliably. Storage-ring experiments have been conducted that yielded recombination rate coefficients at high energy resolution and well-understood systematics. Significant deviations compared to simplified, but prevalent, computational models have been found. A new class of ab initio numerical calculations has been developed that provides reliable predictions of the total plasma recombination rate coefficients for these ions.

  5. Portable color multimedia training systems based on monochrome laptop computers (CBT-in-a-briefcase), with spinoff implications for video uplink and downlink in spaceflight operations

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1994-01-01

    This report describes efforts to use digital motion video compression technology to develop a highly portable device that would convert 1990-91 era IBM-compatible and/or Macintosh notebook computers into full-color, motion-video capable multimedia training systems. An architecture was conceived that would permit direct conversion of existing laser-disk-based multimedia courses with little or no reauthoring. The project did not physically demonstrate certain critical video keying techniques, but their implementation should be feasible. This investigation of digital motion video has spawned two significant spaceflight projects at MSFC: one to downlink multiple high-quality video signals from Spacelab, and the other to uplink videoconference-quality video in real time and high-quality video off-line, plus investigate interactive, multimedia-based techniques for enhancing onboard science operations. Other airborne or spaceborne spinoffs are possible.

  6. Modeling Methodologies for Design and Control of Solid Oxide Fuel Cell APUs

    NASA Astrophysics Data System (ADS)

    Pianese, C.; Sorrentino, M.

    2009-08-01

    Among the existing fuel cell technologies, Solid Oxide Fuel Cells (SOFC) are particularly suitable for both stationary and mobile applications, due to their high energy conversion efficiencies, modularity, high fuel flexibility, low emissions and noise. Moreover, the high working temperatures enable their use for efficient cogeneration applications. SOFCs are entering a pre-industrial era, and interest in design tools has grown strongly in recent years. Optimal system configuration, component sizing, control and diagnostic system design require computational tools that meet the conflicting needs of accuracy, affordable computational time, limited experimental efforts and flexibility. The paper gives an overview of control-oriented modeling of SOFCs at both the single-cell and stack level. Such an approach provides useful simulation tools for designing and controlling SOFC-APUs intended for a wide application area, ranging from automotive to marine and airplane APUs.

  7. Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    NASA Astrophysics Data System (ADS)

    Beck, A.; Frederiksen, J. T.; Dérouillat, J.

    2016-09-01

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
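
    As one common way to attack the load-imbalance problem identified above (not necessarily the specific algorithm proposed in the paper), the sketch below places domain-decomposition cuts so that each rank owns roughly the same number of macro-particles along one axis of the grid.

        import numpy as np

        def balanced_cuts(particles_per_cell, n_ranks):
            # Place decomposition boundaries so the cumulative particle count is
            # split into n_ranks nearly equal shares along one axis.
            load = np.cumsum(particles_per_cell, dtype=float)
            targets = load[-1] * np.arange(1, n_ranks) / n_ranks
            cuts = np.searchsorted(load, targets)
            return np.concatenate(([0], cuts, [len(particles_per_cell)]))

        # Example: a strongly peaked particle distribution (e.g. an injected bunch)
        cells = np.arange(1000)
        counts = 1 + (5000 * np.exp(-((cells - 600) / 50.0) ** 2)).astype(int)
        print(balanced_cuts(counts, n_ranks=8))   # cell ranges owned by each rank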

  8. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer-level results at individual workstations. As data sets grow, the effort required to process them grows at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.
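
    The kind of data-parallel, per-pixel computation described above maps naturally onto a GPU kernel. The sketch below (Python with Numba's CUDA support, requiring a CUDA-capable GPU) finds, independently for every pixel, the frame at which a thermographic sequence peaks; it is an illustrative kernel, not the application developed in this effort.

        import numpy as np
        from numba import cuda

        @cuda.jit
        def per_pixel_peak(frames, out):
            # One GPU thread per pixel: find the frame index at which this pixel
            # reaches its maximum value over the sequence (independent of all
            # other pixels, hence embarrassingly data parallel).
            i, j = cuda.grid(2)
            n_frames, ny, nx = frames.shape
            if i < ny and j < nx:
                best = frames[0, i, j]
                best_t = 0
                for t in range(1, n_frames):
                    if frames[t, i, j] > best:
                        best = frames[t, i, j]
                        best_t = t
                out[i, j] = best_t

        # Host side: copy the (frames, ny, nx) stack to the device, launch, copy back
        stack = np.random.rand(200, 512, 512).astype(np.float32)
        d_stack = cuda.to_device(stack)
        d_out = cuda.device_array((512, 512), dtype=np.int32)
        threads = (16, 16)
        blocks = ((512 + 15) // 16, (512 + 15) // 16)
        per_pixel_peak[blocks, threads](d_stack, d_out)
        peak_frame = d_out.copy_to_host()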

  9. Assessment of Chair-side Computer-Aided Design and Computer-Aided Manufacturing Restorations: A Review of the Literature

    PubMed Central

    Baroudi, Kusai; Ibraheem, Shukran Nasser

    2015-01-01

    Background: This paper aimed to evaluate the application of computer-aided design and computer-aided manufacturing (CAD-CAM) technology and the factors that affect the survival of restorations. Materials and Methods: A thorough literature search using PubMed, Medline, Embase, Science Direct, Wiley Online Library and Grey literature was performed from the year 2004 up to June 2014. Only relevant research was considered. Results: The use of chair-side CAD/CAM systems is promising in all dental branches in terms of minimizing the time and effort required of dentists, technicians and patients for restoring and maintaining patient oral function and aesthetics, while providing a high-quality outcome. Conclusion: Restorations produced and placed with the chair-side CAD/CAM devices (CEREC and E4D) are better than restorations made by conventional laboratory procedures. PMID:25954082

  10. Organizational management practices for achieving software process improvement

    NASA Technical Reports Server (NTRS)

    Kandt, Ronald Kirk

    2004-01-01

    The crisis in developing software has been known for over thirty years. Problems that existed in developing software in the early days of computing still exist today. These problems include the delivery of low-quality products, actual development costs that exceed expected development costs, and actual development time that exceeds expected development time. Several solutions have been offered to overcome our inability to deliver high-quality software on time and within budget. One of these solutions involves software process improvement. However, such efforts often fail because of organizational management issues. This paper discusses business practices that organizations should follow to improve their chances of initiating and sustaining successful software process improvement efforts.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gharibyan, N.

    In order to fully characterize the NIF neutron spectrum, SAND-II-SNL software was requested/received from the Radiation Safety Information Computational Center. The software is designed to determine the neutron energy spectrum through analysis of experimental activation data. However, given that the source code was developed on a Sparcstation 10, it is not compatible with current versions of FORTRAN. Accounts have been established through the Lawrence Livermore National Laboratory’s High Performance Computing in order to access different compilers for FORTRAN (e.g. pgf77, pgf90). Additionally, several of the subroutines included in the SAND-II-SNL package have required debugging efforts to allow for proper compiling of the code.

  12. An open-source computational and data resource to analyze digital maps of immunopeptidomes

    DOE PAGES

    Caron, Etienne; Espona, Lucia; Kowalewski, Daniel J.; ...

    2015-07-08

    We present a novel mass spectrometry-based high-throughput workflow and an open-source computational and data resource to reproducibly identify and quantify HLA-associated peptides. Collectively, the resources support the generation of HLA allele-specific peptide assay libraries consisting of consensus fragment ion spectra, and the analysis of quantitative digital maps of HLA peptidomes generated from a range of biological sources by SWATH mass spectrometry (MS). This study represents the first community-based effort to develop a robust platform for the reproducible and quantitative measurement of the entire repertoire of peptides presented by HLA molecules, an essential step towards the design of efficient immunotherapies.

  13. Introduction to Computational Methods for Stability and Control (COMSAC)

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Fremaux, C. Michael; Chambers, Joseph R.

    2004-01-01

    This Symposium is intended to bring together the often distinct cultures of the Stability and Control (S&C) community and the Computational Fluid Dynamics (CFD) community. The COMSAC program is itself a new effort by NASA Langley to accelerate the application of high end CFD methodologies to the demanding job of predicting stability and control characteristics of aircraft. This talk is intended to set the stage for needing a program like COMSAC. It is not intended to give details of the program itself. The topics include: 1) S&C Challenges; 2) Aero prediction methodology; 3) CFD applications; 4) NASA COMSAC planning; 5) Objectives of symposium; and 6) Closing remarks.

  14. Combining Computational and Social Effort for Collaborative Problem Solving

    PubMed Central

    Wagy, Mark D.; Bongard, Josh C.

    2015-01-01

    Rather than replacing human labor, there is growing evidence that networked computers create opportunities for collaborations of people and algorithms to solve problems beyond either of them. In this study, we demonstrate the conditions under which such synergy can arise. We show that, for a design task, three elements are sufficient: humans apply intuitions to the problem, algorithms automatically determine and report back on the quality of designs, and humans observe and innovate on others’ designs to focus creative and computational effort on good designs. This study suggests how such collaborations should be composed for other domains, as well as how social and computational dynamics mutually influence one another during collaborative problem solving. PMID:26544199

  15. System-Level Virtualization Research at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J

    2010-01-01

    System-level virtualization is today enjoying a rebirth as a technique to effectively share what were then considered large computing resources to subsequently fade from the spotlight as individual workstations gained in popularity with a one machine - one user approach. One reason for this resurgence is that the simple workstation has grown in capability to rival that of anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing that single computing box via server consolidation. However, industry is only concentrating on the benefits of using virtualization for server consolidation (enterprise computing) whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear to be orthogonal, one consolidating multiple applications and users on a single machine while the other requires all the power from many machines to be dedicated solely to its purpose, we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC interests. This does raise the two fundamental questions of: is the concept of virtualization (a machine sharing technology) really suitable for HPC and if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC. To address these questions, this document presents ongoing studies on the usage of system-level virtualization in a HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.

  16. Application of infrared thermography in computer aided diagnosis

    NASA Astrophysics Data System (ADS)

    Faust, Oliver; Rajendra Acharya, U.; Ng, E. Y. K.; Hong, Tan Jen; Yu, Wenwei

    2014-09-01

    The invention of thermography, in the 1950s, posed a formidable problem to the research community: What is the relationship between disease and heat radiation captured with Infrared (IR) cameras? The research community responded with a continuous effort to find this crucial relationship. This effort was aided by advances in processing techniques, improved sensitivity and spatial resolution of thermal sensors. However, despite this progress fundamental issues with this imaging modality still remain. The main problem is that the link between disease and heat radiation is complex and in many cases even non-linear. Furthermore, the change in heat radiation as well as the change in radiation pattern, which indicate disease, is minute. On a technical level, this poses high requirements on image capturing and processing. On a more abstract level, these problems lead to inter-observer variability and on an even more abstract level they lead to a lack of trust in this imaging modality. In this review, we adopt the position that these problems can only be solved through a strict application of scientific principles and objective performance assessment. Computing machinery is inherently objective; this helps us to apply scientific principles in a transparent way and to assess the performance results. As a consequence, we aim to promote thermography based Computer-Aided Diagnosis (CAD) systems. Another benefit of CAD systems comes from the fact that the diagnostic accuracy is linked to the capability of the computing machinery and, in general, computers become ever more potent. We predict that a pervasive application of computers and networking technology in medicine will help us to overcome the shortcomings of any single imaging modality and this will pave the way for integrated health care systems which maximize the quality of patient care.

  17. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-05-04

    This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  18. Final Report. Institute for Ultrascale Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu; Galli, Giulia; Gygi, Francois

    The SciDAC Institute for Ultrascale Visualization brought together leading experts from visualization, high-performance computing, and science application areas to make advanced visualization solutions for SciDAC scientists and the broader community. Over the five-year project, the Institute introduced many new enabling visualization techniques, which have significantly enhanced scientists’ ability to validate their simulations, interpret their data, and communicate with others about their work and findings. This Institute project involved a large number of junior and student researchers, who received the opportunities to work on some of the most challenging science applications and gain access to the most powerful high-performance computing facilities in the world. They were readily trained and prepared for facing the greater challenges presented by extreme-scale computing. The Institute’s outreach efforts, through publications, workshops and tutorials, successfully disseminated the new knowledge and technologies to the SciDAC and the broader scientific communities. The scientific findings and experience of the Institute team helped plan the SciDAC3 program.

  19. Computational Modeling in Structural Materials Processing

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    High temperature materials such as silicon carbide, a variety of nitrides, and ceramic matrix composites find use in aerospace, automotive, machine tool industries and in high speed civil transport applications. Chemical vapor deposition (CVD) is widely used in processing such structural materials. Variations of CVD include deposition on substrates, coating of fibers, inside cavities and on complex objects, and infiltration within preforms called chemical vapor infiltration (CVI). Our current knowledge of the process mechanisms, ability to optimize processes, and scale-up for large scale manufacturing is limited. In this regard, computational modeling of the processes is valuable since a validated model can be used as a design tool. The effort is similar to traditional chemically reacting flow modeling with emphasis on multicomponent diffusion, thermal diffusion, large sets of homogeneous reactions, and surface chemistry. In the case of CVI, models for pore infiltration are needed. In the present talk, examples of SiC, nitride, and boron deposition from the author's past work will be used to illustrate the utility of computational process modeling.

  20. Power API Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-12-04

    The software serves two purposes. The first purpose of the software is to prototype the Sandia High Performance Computing Power Application Programming Interface Specification effort. The specification can be found at http://powerapi.sandia.gov. Prototypes of the specification were developed in parallel with the development of the specification. Release of the prototype will be instructive to anyone who intends to implement the specification. More specifically, our vendor collaborators will benefit from the availability of the prototype. The second is in direct support of the PowerInsight power measurement device, which was co-developed with Penguin Computing. The software provides a cluster-wide measurement capability enabled by the PowerInsight device. The software can be used by anyone who purchases a PowerInsight device. The software will allow the user to easily collect power and energy information of a node that is instrumented with PowerInsight. The software can also be used as an example prototype implementation of the High Performance Computing Power Application Programming Interface Specification.

  1. Validation of High-Fidelity CFD/CAA Framework for Launch Vehicle Acoustic Environment Simulation against Scale Model Test Data

    NASA Technical Reports Server (NTRS)

    Liever, Peter A.; West, Jeffrey S.; Harris, Robert E.

    2016-01-01

    A hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) modeling framework has been developed for launch vehicle liftoff acoustic environment predictions. The framework couples the existing highly-scalable NASA production CFD code, Loci/CHEM, with a high-order accurate Discontinuous Galerkin solver developed in the same production framework, Loci/THRUST, to accurately resolve and propagate acoustic physics across the entire launch environment. Time-accurate, Hybrid RANS/LES CFD modeling is applied for predicting the acoustic generation physics at the plume source, and a high-order accurate unstructured mesh Discontinuous Galerkin (DG) method is employed to propagate acoustic waves away from the source across large distances using high-order accurate schemes. The DG solver is capable of solving 2nd, 3rd, and 4th order Euler solutions for non-linear, conservative acoustic field propagation. Initial application testing and validation has been carried out against high resolution acoustic data from the Ares Scale Model Acoustic Test (ASMAT) series to evaluate the capabilities and production readiness of the CFD/CAA system to resolve the observed spectrum of acoustic frequency content. This paper presents results from this validation and outlines efforts to mature and improve the computational simulation framework.

  2. Eric Bonnema | NREL

    Science.gov Websites

    Contributes to the research efforts for commercial buildings, including commercial-sector whole-building energy simulation, scientific computing, and software configuration.

  3. High pressure water jet cutting and stripping

    NASA Technical Reports Server (NTRS)

    Hoppe, David T.; Babai, Majid K.

    1991-01-01

    High pressure water cutting techniques have a wide range of applications to the American space effort. Hydroblasting techniques are commonly used during the refurbishment of the reusable solid rocket motors. By varying the process parameters, the process can be controlled to strip a thermal protective ablator without incurring any damage to the painted surface underneath. Hydroblasting is a technique which is easily automated. Automation removes personnel from the hostile environment of the high pressure water. Computer controlled robots can perform the same task in a fraction of the time that would be required by manual operation.

  4. Simulation of High-Beta Plasma Confinement

    NASA Astrophysics Data System (ADS)

    Font, Gabriel; Welch, Dale; Mitchell, Robert; McGuire, Thomas

    2017-10-01

    The Lockheed Martin Compact Fusion Reactor concept utilizes magnetic cusps to confine the plasma. In order to minimize losses through the axial and ring cusps, the plasma is pushed to a high-beta state. Simulations were made of the plasma and magnetic field system in an effort to quantify particle confinement times and plasma behavior characteristics. Computations are carried out with LSP using implicit PIC methods. Simulations of different sub-scale geometries at high-Beta fusion conditions are used to determine particle loss scaling with reactor size, plasma conditions, and gyro radii. ©2017 Lockheed Martin Corporation. All Rights Reserved.

  5. Direct Energy Conversion for Nuclear Propulsion at Low Specific Mass

    NASA Technical Reports Server (NTRS)

    Scott, John H.

    2014-01-01

    The project will continue the FY13 JSC IR&D (October-2012 to September-2013) effort in Travelling Wave Direct Energy Conversion (TWDEC) in order to demonstrate its potential as the core of a high potential, game-changing, in-space propulsion technology. The TWDEC concept converts particle beam energy into radio frequency (RF) alternating current electrical power, such as can be used to heat the propellant in a plasma thruster. In a more advanced concept (explored in the Phase 1 NIAC project), the TWDEC could also be utilized to condition the particle beam such that it may transfer directed kinetic energy to a target propellant plasma for the purpose of increasing thrust and optimizing the specific impulse. The overall scope of the FY13 first-year effort was to build on both the 2012 Phase 1 NIAC research and the analysis and test results produced by Japanese researchers over the past twenty years to assess the potential for spacecraft propulsion applications. The primary objective of the FY13 effort was to create particle-in-cell computer simulations of a TWDEC. Other objectives included construction of a breadboard TWDEC test article, preliminary test calibration of the simulations, and construction of first order power system models to feed into mission architecture analyses with COPERNICUS tools. Due to funding cuts resulting from the FY13 sequestration, only the computer simulations and assembly of the breadboard test article were completed. The simulations, however, are of unprecedented flexibility and precision and were presented at the 2013 AIAA Joint Propulsion Conference. Also, the assembled test article will provide an ion current density two orders of magnitude above that available in previous Japanese experiments, thus enabling the first direct measurements of power generation from a TWDEC for FY14. The proposed FY14 effort will use the test article for experimental validation of the computer simulations and thus complete to a greater fidelity the mission analysis products originally conceived for FY13.

  6. A Big Data Platform for Storing, Accessing, Mining and Learning Geospatial Data

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Bambacus, M.; Duffy, D.; Little, M. M.

    2017-12-01

    Big Data is becoming a norm in geoscience domains. A platform that is capable of efficiently managing, accessing, analyzing, mining, and learning from big data for new information and knowledge is desired. This paper introduces our latest effort on developing such a platform based on our past years' experience with cloud and high-performance computing, analyzing big data, comparing big data containers, and mining big geospatial data for new information. The platform includes four layers: a) the bottom layer includes a computing infrastructure with proper network, computer, and storage systems; b) the 2nd layer is a cloud computing layer based on virtualization to provide on-demand computing services for upper layers; c) the 3rd layer is big data containers that are customized for dealing with different types of data and functionalities; d) the 4th layer is a big data presentation layer that supports the efficient management, access, analysis, mining and learning of big geospatial data.

  7. High-Resolution X-Ray and Neutron Computed Tomography of an Engine Combustion Network Spray G Gasoline Injector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duke, Daniel J.; Finney, Charles E. A.; Kastengren, Alan

    Given the importance of the fuel-injection process on the combustion and emissions performance of gasoline direct injected engines, there has been significant recent interest in understanding the fluid dynamics within the injector, particularly around the needle and through the nozzles. Furthermore, the pressure losses and transients that occur in the flow passages above the needle are also of interest. Simulations of these injectors typically use the nominal design geometry, which does not always match the production geometry. Computed tomography (CT) using x-ray and neutron sources can be used to obtain the real geometry from production injectors, but there are trade-offs in using these techniques. X-ray CT provides high resolution, but cannot penetrate through the thicker parts of the injector. Neutron CT has excellent penetrating power but lower resolution. Here, we present results from a joint effort to characterize a gasoline direct injector representative of the Spray G injector as defined by the Engine Combustion Network. High-resolution (1.2 to 3 µm) x-ray CT measurements from the Advanced Photon Source at Argonne National Laboratory were combined with moderate-resolution (40 µm) neutron CT measurements from the High Flux Isotope Reactor at Oak Ridge National Laboratory to generate a complete internal geometry for the injector. This effort combined the strengths of both facilities' capabilities, with extremely fine spatially resolved features in the nozzles and injector tips and fine resolution of internal features of the needle along the length of the injector. Analysis of the resulting surface model of the internal fluid flow volumes of the injector reveals how the internal cross-sectional area and nozzle hole geometry differ slightly from the design dimensions. A simplified numerical simulation of the internal flow shows how deviations from the design geometry can alter the flow inside the sac and holes. The results of this study will provide computational modelers with very accurate solid and surface models for use in computational fluid dynamics studies, and experimentalists with increased insight into the operating characteristics of their injectors.

  8. High-Resolution X-Ray and Neutron Computed Tomography of an Engine Combustion Network Spray G Gasoline Injector

    DOE PAGES

    Duke, Daniel J.; Finney, Charles E. A.; Kastengren, Alan; ...

    2017-03-14

    Given the importance of the fuel-injection process on the combustion and emissions performance of gasoline direct injected engines, there has been significant recent interest in understanding the fluid dynamics within the injector, particularly around the needle and through the nozzles. Furthermore, the pressure losses and transients that occur in the flow passages above the needle are also of interest. Simulations of these injectors typically use the nominal design geometry, which does not always match the production geometry. Computed tomography (CT) using x-ray and neutron sources can be used to obtain the real geometry from production injectors, but there are trade-offs in using these techniques. X-ray CT provides high resolution, but cannot penetrate through the thicker parts of the injector. Neutron CT has excellent penetrating power but lower resolution. Here, we present results from a joint effort to characterize a gasoline direct injector representative of the Spray G injector as defined by the Engine Combustion Network. High-resolution (1.2 to 3 µm) x-ray CT measurements from the Advanced Photon Source at Argonne National Laboratory were combined with moderate-resolution (40 µm) neutron CT measurements from the High Flux Isotope Reactor at Oak Ridge National Laboratory to generate a complete internal geometry for the injector. This effort combined the strengths of both facilities' capabilities, with extremely fine spatially resolved features in the nozzles and injector tips and fine resolution of internal features of the needle along the length of the injector. Analysis of the resulting surface model of the internal fluid flow volumes of the injector reveals how the internal cross-sectional area and nozzle hole geometry differ slightly from the design dimensions. A simplified numerical simulation of the internal flow shows how deviations from the design geometry can alter the flow inside the sac and holes. The results of this study will provide computational modelers with very accurate solid and surface models for use in computational fluid dynamics studies, and experimentalists with increased insight into the operating characteristics of their injectors.

  9. Study of the Use of Time-Mean Vortices to Generate Lift for MAV Applications

    DTIC Science & Technology

    2011-05-31

    A suspended microplate was fabricated via MEMS technology and driven to in-plane resonance via Lorentz force. Computational effort centers around optimization of a range of parameters (geometry, frequency, amplitude of oscillation, etc.) governing the use of this force to drive the suspended MEMS-based microplate to in-plane resonance.

  10. The Argonne Leadership Computing Facility 2010 annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drugan, C.

    Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. Ultimately, we envision Mira as a stepping-stone to exascale-class computers that will be faster than petascale-class computers by a factor of a thousand. Pete Beckman, who served as the ALCF's Director for the past few years, has been named director of the newly created Exascale Technology and Computing Institute (ETCi). The institute will focus on developing exascale computing to extend scientific discovery and solve critical science and engineering problems. Just as Pete's leadership propelled the ALCF to great success, we know that ETCi will benefit immensely from his expertise and experience. Without question, the future of supercomputing is certainly in good hands. I would like to thank Pete for all his effort over the past two years, during which he oversaw the establishment of ALCF2, the deployment of the Magellan project, and increases in utilization, availability, and the number of projects using ALCF1. He managed the rapid growth of ALCF staff and made the facility what it is today. All the staff and users are better for Pete's efforts.

  11. A General Approach to Measuring Test-Taking Effort on Computer-Based Tests

    ERIC Educational Resources Information Center

    Wise, Steven L.; Gao, Lingyun

    2017-01-01

    There has been an increased interest in the impact of unmotivated test taking on test performance and score validity. This has led to the development of new ways of measuring test-taking effort based on item response time. In particular, Response Time Effort (RTE) has been shown to provide an assessment of effort down to the level of individual…
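
    The abstract above is truncated, but the flavor of the response-time-based index it refers to can be shown. Response Time Effort style measures are commonly computed as the proportion of a test taker's items on which the response time meets or exceeds an item-specific threshold (treated as solution behavior rather than rapid guessing). The function below is a minimal sketch under that assumption; the threshold values and data are invented for illustration.

        # Illustrative sketch of a Response Time Effort (RTE) style index: the fraction of
        # items whose response time meets an item-specific threshold (assumed convention).
        def response_time_effort(response_times, thresholds):
            """response_times, thresholds: same-length sequences of seconds per item."""
            if len(response_times) != len(thresholds):
                raise ValueError("one response time and one threshold per item")
            solution_behavior = [rt >= th for rt, th in zip(response_times, thresholds)]
            return sum(solution_behavior) / len(solution_behavior)

        # Example: an 8-item test with a 3-second threshold on every item.
        print(response_time_effort([12.1, 1.4, 9.8, 0.9, 15.2, 7.7, 2.1, 11.0], [3.0] * 8))  # 0.625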

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, C. V.; Mendez, A. J.

    This was a collaborative effort between Lawrence Livermore National Security, LLC (formerly The Regents of the University of California)/Lawrence Livermore National Laboratory (LLNL) and Mendez R & D Associates (MRDA) to develop and demonstrate a reconfigurable and cost effective design for optical code division multiplexing (O-CDM) with high spectral efficiency and throughput, as applied to the field of distributed computing, including multiple accessing (sharing of communication resources) and bidirectional data distribution in fiber-to-the-premise (FTTx) networks.

  13. Integrated structure/control design - Present methodology and future opportunities

    NASA Technical Reports Server (NTRS)

    Weisshaar, T. A.; Newsom, J. R.; Zeiler, T. A.; Gilbert, M. G.

    1986-01-01

    Attention is given to current methodology applied to the integration of the optimal design process for structures and controls. Multilevel linear decomposition techniques proved to be most effective in organizing the computational efforts necessary for ISCD (integrated structures and control design) tasks. With the development of large orbiting space structures and actively controlled, high performance aircraft, there will be more situations in which this concept can be applied.

  14. Accelerating Adverse Outcome Pathway Development Using ...

    EPA Pesticide Factsheets

    The adverse outcome pathway (AOP) concept links molecular perturbations with organism- and population-level outcomes to support high-throughput toxicity testing. International efforts are underway to define AOPs and store the information supporting these AOPs in a central knowledgebase; however, this process is currently labor-intensive and time-consuming. Publicly available data sources provide a wealth of information that could be used to define computationally predicted AOPs (cpAOPs), which could serve as a basis for creating expert-derived AOPs in a much more efficient way. Computational tools for mining large datasets provide the means for extracting and organizing the information captured in these public data sources. Using cpAOPs as a starting point for expert-derived AOPs should accelerate AOP development. Coupling this with tools to coordinate and facilitate the expert development efforts will increase the number and quality of AOPs produced, which should play a key role in advancing the adoption of twenty-first-century toxicity testing strategies. This review article describes how effective knowledge management and automated approaches to AOP development can enhance and accelerate the development and use of AOPs. As the principles documented in this review are put into practice, we anticipate that the quality and quantity of AOPs available will increase substantially. This, in turn, will aid in the interpretation of ToxCast and other high-throughput tox

  15. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching [Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems

    DOE PAGES

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil; ...

    2017-01-24

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.
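
    The cost argument in this abstract is easiest to see in the continuous-time analogue of adjoint sensitivity analysis (a standard derivation shown here for orientation; it is not the paper's discrete formulation and omits the switching treatment). For a state equation and a scalar output

        \dot{x} = f(x, p), \qquad x(0) = x_0(p), \qquad \psi = \psi(x(T)),

    forward sensitivity analysis propagates one sensitivity column per parameter,

        \dot{S} = f_x\, S + f_p, \qquad S(0) = \frac{\partial x_0}{\partial p}, \qquad S \in \mathbb{R}^{n \times n_p},

    so its cost grows with the number of parameters n_p, whereas the adjoint approach integrates a single backward system of the size of the state,

        \dot{\lambda} = -f_x^{\top} \lambda, \qquad \lambda(T) = \Big(\frac{\partial \psi}{\partial x(T)}\Big)^{\top}, \qquad
        \frac{d\psi}{dp} = \lambda(0)^{\top} \frac{\partial x_0}{\partial p} + \int_0^T \lambda^{\top} f_p \, dt,

    so its cost is essentially independent of n_p.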

  16. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching [Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.

  17. Group implicit concurrent algorithms in nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Ortiz, M.; Sotelino, E. D.

    1989-01-01

    During the 1970s and 1980s, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low frequency content of the response without necessitating the resolution of the high frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithm development area in recent years has been the advent of parallel computers with multiprocessing capabilities. This work is therefore mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to an efficient implementation on concurrent machines. Some features of the new computer architecture are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit algorithms, is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application in coarse grain as well as medium grain parallel computers.

  18. Flow Control Research at NASA Langley in Support of High-Lift Augmentation

    NASA Technical Reports Server (NTRS)

    Sellers, William L., III; Jones, Gregory S.; Moore, Mark D.

    2002-01-01

    The paper describes the efforts at NASA Langley to apply active and passive flow control techniques for improved high-lift systems, and advanced vehicle concepts utilizing powered high-lift techniques. The development of simplified high-lift systems utilizing active flow control is shown to provide significant weight and drag reduction benefits based on system studies. Active flow control focused on separation, and the development of advanced circulation control wings (CCW) utilizing unsteady excitation techniques, are discussed. The advanced CCW airfoils can provide multifunctional controls throughout the flight envelope. Computational and experimental data are shown to illustrate the benefits and issues with implementation of the technology.

  19. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. The effort addresses strategies for implementing distributed computing in a device-independent fashion and for load balancing. A flow solver called TEAM, presently in use at Lockheed Aeronautical Systems Company, was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms, including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha workstations in the graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented; specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated; specifically, static load balancing, task queue balancing, and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
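
    The manager-worker strategy mentioned in task (2) can be sketched generically: a manager process hands grid blocks to whichever worker is idle and gathers the per-block results, which also gives a simple form of dynamic load balancing. The sketch below uses Python's multiprocessing pool as a stand-in for PVM message passing and a dummy per-block computation; it is illustrative only and is not the TEAM solver.

        # Minimal manager-worker sketch (multiprocessing used here as a stand-in for PVM):
        # a manager hands out blocks of a partitioned grid and collects per-block results.
        from multiprocessing import Pool

        def solve_block(block_id):
            # placeholder for one flow-solver sweep over a grid block
            return block_id, sum(i * i for i in range(10_000))  # dummy "residual"

        if __name__ == "__main__":
            blocks = list(range(16))            # grid partitioned into 16 blocks
            with Pool(processes=4) as pool:     # 4 "worker" processes
                # imap_unordered behaves like a task queue: idle workers pull the next
                # block, which is one simple form of dynamic load balancing
                for block_id, residual in pool.imap_unordered(solve_block, blocks):
                    print(f"block {block_id}: residual proxy {residual}")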

  20. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Astrophysics Data System (ADS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. The effort addresses strategies for implementing distributed computing in a device-independent fashion and for load balancing. A flow solver called TEAM, presently in use at Lockheed Aeronautical Systems Company, was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms, including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha workstations in the graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented; specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated; specifically, static load balancing, task queue balancing, and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  1. Increasing the impact of medical image computing using community-based open-access hackathons: The NA-MIC and 3D Slicer experience.

    PubMed

    Kapur, Tina; Pieper, Steve; Fedorov, Andriy; Fillion-Robin, J-C; Halle, Michael; O'Donnell, Lauren; Lasso, Andras; Ungi, Tamas; Pinter, Csaba; Finet, Julien; Pujol, Sonia; Jagadeesan, Jayender; Tokuda, Junichi; Norton, Isaiah; Estepar, Raul San Jose; Gering, David; Aerts, Hugo J W L; Jakab, Marianna; Hata, Nobuhiko; Ibanez, Luiz; Blezek, Daniel; Miller, Jim; Aylward, Stephen; Grimson, W Eric L; Fichtinger, Gabor; Wells, William M; Lorensen, William E; Schroeder, Will; Kikinis, Ron

    2016-10-01

    The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort, which was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with an international leadership and developer and user communities. Critical improvements to the widely used underlying open source libraries and tools (VTK, ITK, CMake, CDash, DCMTK) were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and international funded efforts expanding the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision.

  2. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Madnia, C. K.; Steinberger, C. J.; Frankel, S. H.

    1992-01-01

    The basic objective of this research is to extend the capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. In the efforts related to LES, we were primarily involved with assessing the performance of various modern Probability Density Function (PDF) methods for providing closures for treating the subgrid fluctuation correlations of scalar quantities in reacting turbulent flows. In the work on DNS, we concentrated on understanding some of the relevant physics of compressible reacting flows by means of statistical analysis of the data generated by DNS of such flows. In the research conducted in the second year of this program, our efforts focused on the modeling of homogeneous compressible turbulent flows by PDF methods, and on DNS of non-equilibrium reacting high speed mixing layers. Some preliminary work is also in progress on PDF modeling of shear flows, and also on LES of such flows.

  3. Session on High Speed Civil Transport Design Capability Using MDO and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Rehder, Joe

    2000-01-01

    Since the inception of CAS in 1992, NASA Langley has been conducting research into applying multidisciplinary optimization (MDO) and high performance computing toward reducing aircraft design cycle time. The focus of this research has been the development of a series of computational frameworks and associated applications that increased in capability, complexity, and performance over time. The culmination of this effort is an automated high-fidelity analysis capability for a high speed civil transport (HSCT) vehicle installed on a network of heterogeneous computers with a computational framework built using Common Object Request Broker Architecture (CORBA) and Java. The main focus of the research in the early years was the development of the Framework for Interdisciplinary Design Optimization (FIDO) and associated HSCT applications. While the FIDO effort was eventually halted, work continued on HSCT applications of ever increasing complexity. The current application, HSCT4.0, employs high fidelity CFD and FEM analysis codes. For each analysis cycle, the vehicle geometry and computational grids are updated using new values for design variables. Processes for aeroelastic trim, loads convergence, displacement transfer, stress and buckling, and performance have been developed. In all, a total of 70 processes are integrated in the analysis framework. Many of the key processes include automatic differentiation capabilities to provide sensitivity information that can be used in optimization. A software engineering process was developed to manage this large project. Defining the interactions among 70 processes turned out to be an enormous, but essential, task. A formal requirements document was prepared that defined data flow among processes and subprocesses. A design document was then developed that translated the requirements into actual software design. A validation program was defined and implemented to ensure that codes integrated into the framework produced the same results as their standalone counterparts. Finally, a Commercial Off the Shelf (COTS) configuration management system was used to organize the software development. A computational environment, CJOpt, based on the Common Object Request Broker Architecture (CORBA) and the Java programming language, has been developed as a framework for multidisciplinary analysis and optimization. The environment exploits the parallelism inherent in the application and distributes the constituent disciplines on machines best suited to their needs. In CJOpt, a discipline code is "wrapped" as an object. An interface to the object identifies the functionality (services) provided by the discipline, defined in Interface Definition Language (IDL) and implemented using Java. The results of using the HSCT4.0 capability are described. A summary of lessons learned is also presented. The use of some of the processes, codes, and techniques by industry is highlighted. The application of the methodology developed in this research to other aircraft is described. Finally, we show how the experience gained is being applied to entirely new vehicles, such as the Reusable Space Transportation System. Additional information is contained in the original.

  4. Electron tubes for industrial applications

    NASA Astrophysics Data System (ADS)

    Gellert, Bernd

    1994-05-01

    This report reviews research and development efforts in recent years for vacuum electron tubes, in particular power grid tubes for industrial applications. Physical and chemical effects are discussed that determine the performance of today's devices. Due to the progress made in the fundamental understanding of materials and newly developed processes, the reliability and reproducibility of power grid tubes could be improved considerably. Modern computer controlled manufacturing methods ensure a high reproducibility of production, and continuous quality certification according to ISO 9001 guarantees future high quality standards. Some typical applications of these tubes are given as examples.

  5. Aggregating Data for Computational Toxicology Applications ...

    EPA Pesticide Factsheets

    Computational toxicology combines data from high-throughput test methods, chemical structure analyses and other biological domains (e.g., genes, proteins, cells, tissues) with the goals of predicting and understanding the underlying mechanistic causes of chemical toxicity and of predicting the toxicity of new chemicals and products. A key feature of such approaches is their reliance on knowledge extracted from large collections of data and data sets in computable formats. The U.S. Environmental Protection Agency (EPA) has developed a large data resource called ACToR (Aggregated Computational Toxicology Resource) to support these data-intensive efforts. ACToR comprises four main repositories: core ACToR (chemical identifiers and structures, and summary data on hazard, exposure, use, and other domains), ToxRefDB (Toxicity Reference Database, a compilation of detailed in vivo toxicity data from guideline studies), ExpoCastDB (detailed human exposure data from observational studies of selected chemicals), and ToxCastDB (data from high-throughput screening programs, including links to underlying biological information related to genes and pathways). The EPA DSSTox (Distributed Structure-Searchable Toxicity) program provides expert-reviewed chemical structures and associated information for these and other high-interest public inventories. Overall, the ACToR system contains information on about 400,000 chemicals from 1100 different sources. The entire system is built usi

  6. Global information infrastructure.

    PubMed

    Lindberg, D A

    1994-01-01

    The High Performance Computing and Communications Program (HPCC) is a multiagency federal initiative under the leadership of the White House Office of Science and Technology Policy, established by the High Performance Computing Act of 1991. It has been assigned a critical role in supporting the international collaboration essential to science and to health care. Goals of the HPCC are to extend USA leadership in high performance computing and networking technologies; to improve technology transfer for economic competitiveness, education, and national security; and to provide a key part of the foundation for the National Information Infrastructure. The first component of the National Institutes of Health to participate in the HPCC, the National Library of Medicine (NLM), recently issued a solicitation for proposals to address a range of issues, from privacy to 'testbed' networks, 'virtual reality,' and more. These efforts will build upon the NLM's extensive outreach program and other initiatives, including the Unified Medical Language System (UMLS), MEDLARS, and Grateful Med. New Internet search tools are emerging, such as Gopher and 'Knowbots'. Medicine will succeed in developing future intelligent agents to assist in utilizing computer networks. Our ability to serve patients is so often restricted by lack of information and knowledge at the time and place of medical decision-making. The new technologies, properly employed, will also greatly enhance our ability to serve the patient.

  7. High resolution ultrasonic spectroscopy system for nondestructive evaluation

    NASA Technical Reports Server (NTRS)

    Chen, C. H.

    1991-01-01

    With increased demand for high resolution ultrasonic evaluation, computer-based systems or workstations have become essential. The ultrasonic spectroscopy method of nondestructive evaluation (NDE) was used to develop a high resolution ultrasonic inspection system supported by modern signal processing, pattern recognition, and neural network technologies. The basic system, which was completed, consists of a 386/20 MHz PC (IBM AT compatible), a pulser/receiver, a digital oscilloscope with serial and parallel communications to the computer, an immersion tank with motor control of X-Y axis movement, and the supporting software package, IUNDE, for interactive ultrasonic evaluation. Although the hardware components are commercially available, the software development is entirely original. By integrating signal processing, pattern recognition, maximum entropy spectral analysis, and artificial neural network functions into the system, many NDE tasks can be performed. The high resolution graphics capability provides visualization of complex NDE problems. The Phase 3 efforts involve intensive marketing of the software package and collaborative work with industrial sectors.

  8. Computational design of high efficiency release targets for use at ISOL facilities

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Alton, G. D.; Middleton, J. W.

    1999-06-01

    This report describes efforts made at the Oak Ridge National Laboratory to design high-efficiency-release targets that simultaneously incorporate the short diffusion lengths, high permeabilities, controllable temperatures, and heat removal properties required for the generation of useful radioactive ion beam (RIB) intensities for nuclear physics and astrophysics research using the isotope separation on-line (ISOL) technique. Short diffusion lengths are achieved either by using thin fibrous target materials or by coating thin layers of selected target material onto low-density carbon fibers such as reticulated vitreous carbon fiber (RVCF) or carbon-bonded-carbon-fiber (CBCF) to form highly permeable composite target matrices. Computational studies which simulate the generation and removal of primary beam deposited heat from target materials have been conducted to optimize the design of target/heat-sink systems for generating RIBs. The results derived from diffusion release-rate simulation studies for selected targets and thermal analyses of temperature distributions within a prototype target/heat-sink system subjected to primary ion beam irradiation will be presented in this report.

  9. High-efficiency-release targets for use at ISOL facilities: computational design

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Alton, G. D.

    1999-12-01

    This report describes efforts made at the Oak Ridge National Laboratory to design high-efficiency-release targets that simultaneously incorporate the short diffusion lengths, high permeabilities, controllable temperatures, and heat-removal properties required for the generation of useful radioactive ion beam (RIB) intensities for nuclear physics and astrophysics research using the isotope separation on-line (ISOL) technique. Short diffusion lengths are achieved either by using thin fibrous target materials or by coating thin layers of selected target material onto low-density carbon fibers such as reticulated-vitreous-carbon fiber (RVCF) or carbon-bonded-carbon fiber (CBCF) to form highly permeable composite target matrices. Computational studies that simulate the generation and removal of primary beam deposited heat from target materials have been conducted to optimize the design of target/heat-sink systems for generating RIBs. The results derived from diffusion release-rate simulation studies for selected targets and thermal analyses of temperature distributions within a prototype target/heat-sink system subjected to primary ion beam irradiation are presented in this report.

  10. Flow prediction over a transport multi-element high-lift system and comparison with flight measurements

    NASA Technical Reports Server (NTRS)

    Vijgen, P. M. H. W.; Hardin, J. D.; Yip, L. P.

    1992-01-01

    Accurate prediction of surface-pressure distributions, merging boundary-layers, and separated-flow regions over multi-element high-lift airfoils is required to design advanced high-lift systems for efficient subsonic transport aircraft. The availability of detailed measurements of pressure distributions and both averaged and time-dependent boundary-layer flow parameters at flight Reynolds numbers is critical to evaluate computational methods and to model the turbulence structure for closure of the flow equations. Several detailed wind-tunnel measurements at subscale Reynolds numbers were conducted to obtain detailed flow information including the Reynolds-stress component. As part of a subsonic-transport high-lift research program, flight experiments are conducted using the NASA-Langley B737-100 research aircraft to obtain detailed flow characteristics for support of computational and wind-tunnel efforts. Planned flight measurements include pressure distributions at several spanwise locations, boundary-layer transition and separation locations, surface skin friction, as well as boundary-layer profiles and Reynolds stresses in adverse pressure-gradient flow.

  11. Does Cloud Computing in the Atmospheric Sciences Make Sense? A case study of hybrid cloud computing at NASA Langley Research Center

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Chee, T.; Minnis, P.; Spangenberg, D.; Ayers, J. K.; Palikonda, R.; Vakhnin, A.; Dubois, R.; Murphy, P. R.

    2014-12-01

    The processing, storage and dissemination of satellite cloud and radiation products produced at NASA Langley Research Center are key activities for the Climate Science Branch. A constellation of systems operates in sync to accomplish these goals. Because of the complexity involved with operating such intricate systems, there are both high failure rates and high costs for hardware and system maintenance. Cloud computing has the potential to ameliorate cost and complexity issues. Over time, the cloud computing model has evolved and hybrid systems comprising off-site as well as on-site resources are now common. Towards our mission of providing the highest quality research products to the widest audience, we have explored the use of the Amazon Web Services (AWS) Cloud and Storage and present a case study of our results and efforts. This project builds upon NASA Langley Cloud and Radiation Group's experience with operating large and complex computing infrastructures in a reliable and cost effective manner to explore novel ways to leverage cloud computing resources in the atmospheric science environment. Our case study presents the project requirements and then examines the fit of AWS with the LaRC computing model. We also discuss the evaluation metrics, feasibility, and outcomes and close the case study with the lessons we learned that would apply to others interested in exploring the implementation of the AWS system in their own atmospheric science computing environments.

  12. The 6th International Conference on Computer Science and Computational Mathematics (ICCSCM 2017)

    NASA Astrophysics Data System (ADS)

    2017-09-01

    The ICCSCM 2017 (The 6th International Conference on Computer Science and Computational Mathematics) aimed to provide a platform to discuss computer science and mathematics related issues, including Algebraic Geometry, Algebraic Topology, Approximation Theory, Calculus of Variations, Category Theory; Homological Algebra, Coding Theory, Combinatorics, Control Theory, Cryptology, Geometry, Difference and Functional Equations, Discrete Mathematics, Dynamical Systems and Ergodic Theory, Field Theory and Polynomials, Fluid Mechanics and Solid Mechanics, Fourier Analysis, Functional Analysis, Functions of a Complex Variable, Fuzzy Mathematics, Game Theory, General Algebraic Systems, Graph Theory, Group Theory and Generalizations, Image Processing, Signal Processing and Tomography, Information Fusion, Integral Equations, Lattices, Algebraic Structures, Linear and Multilinear Algebra; Matrix Theory, Mathematical Biology and Other Natural Sciences, Mathematical Economics and Financial Mathematics, Mathematical Physics, Measure Theory and Integration, Neutrosophic Mathematics, Number Theory, Numerical Analysis, Operations Research, Optimization, Operator Theory, Ordinary and Partial Differential Equations, Potential Theory, Real Functions, Rings and Algebras, Statistical Mechanics, Structure of Matter, Topological Groups, Wavelets and Wavelet Transforms, 3G/4G Network Evolutions, Ad-Hoc, Mobile, Wireless Networks and Mobile Computing, Agent Computing & Multi-Agent Systems, all topics related to Image/Signal Processing, any topics related to Computer Networks, any topics related to ISO SC-27 and SC-17 standards, any topics related to PKI (Public Key Infrastructures), Artificial Intelligence (A.I.) & Pattern/Image Recognition, Authentication/Authorization Issues, Biometric Authentication and Algorithms, CDMA/GSM Communication Protocols, Combinatorics, Graph Theory, and Analysis of Algorithms, Cryptography and Foundations of Computer Security, Database (D.B.) Management & Information Retrieval, Data Mining, Web Image Mining, & Applications, Defining Spectrum Rights and Open Spectrum Solutions, E-Commerce, Ubiquitous, RFID, Applications, Fingerprint/Hand/Biometrics Recognition and Technologies, Foundations of High-Performance Computing, IC-card Security, OTP, and Key Management Issues, IDS/Firewall, Anti-Spam Mail, Anti-Virus Issues, Mobile Computing for E-Commerce, Network Security Applications, Neural Networks and Biomedical Simulations, Quality of Service and Communication Protocols, Quantum Computing, Coding, and Error Controls, Satellite and Optical Communication Systems, Theory of Parallel Processing and Distributed Computing, Virtual Visions, 3-D Object Retrieval, & Virtual Simulations, Wireless Access Security, etc. The success of ICCSCM 2017 is reflected in the papers received from authors in many countries around the world, which allowed a highly multinational and multicultural exchange of ideas and experience. The accepted papers of ICCSCM 2017 are published in this book. Please check http://www.iccscm.com for further news. A conference such as ICCSCM 2017 can only become successful through a team effort, so we want to thank the International Technical Committee and the Reviewers for their efforts in the review process as well as their valuable advice. We are thankful to all those who contributed to the success of ICCSCM 2017. The Secretary

  13. Rational design of an enzyme mutant for anti-cocaine therapeutics

    NASA Astrophysics Data System (ADS)

    Zheng, Fang; Zhan, Chang-Guo

    2008-09-01

    (-)-Cocaine is a widely abused drug and there is no available anti-cocaine therapeutic. The disastrous medical and social consequences of cocaine addiction have made the development of an effective pharmacological treatment a high priority. An ideal anti-cocaine medication would accelerate (-)-cocaine metabolism, producing biologically inactive metabolites. The main metabolic pathway of cocaine in the body is hydrolysis at its benzoyl ester group. Reviewed in this article is the state-of-the-art computational design of high-activity mutants of human butyrylcholinesterase (BChE) against (-)-cocaine. The computational design of BChE mutants has been based not only on the structure of the enzyme, but also on the detailed catalytic mechanisms for BChE-catalyzed hydrolysis of (-)-cocaine and (+)-cocaine. Computational studies of the detailed catalytic mechanisms and the structure-and-mechanism-based computational design have been carried out through the combined use of a variety of state-of-the-art techniques of molecular modeling. By using the computational insights into the catalytic mechanisms, a recently developed unique computational design strategy based on the simulation of the rate-determining transition state has been employed to design high-activity mutants of human BChE for hydrolysis of (-)-cocaine, leading to the exciting discovery of BChE mutants with a considerably improved catalytic efficiency against (-)-cocaine. One of the discovered BChE mutants (i.e., A199S/S287G/A328W/Y332G) has a ~456-fold improved catalytic efficiency against (-)-cocaine. The encouraging outcome of the computational design and discovery effort demonstrates that the unique computational design approach based on transition-state simulation is promising for rational enzyme redesign and drug discovery.

  14. Computerizing the Accounting Curriculum.

    ERIC Educational Resources Information Center

    Nash, John F.; England, Thomas G.

    1986-01-01

    Discusses the use of computers in college accounting courses. Argues that the success of new efforts in using computers in teaching accounting is dependent upon increasing instructors' computer skills, and choosing appropriate hardware and software, including commercially available business software packages. (TW)

  15. Computers in Schools: White Boys Only?

    ERIC Educational Resources Information Center

    Hammett, Roberta F.

    1997-01-01

    Discusses the role of computers in today's world and the construction of computer use attitudes, such as gender gaps. Suggests how schools might close the gaps. Includes a brief explanation about how facility with computers is important for women in their efforts to gain equitable treatment in all aspects of their lives. (PA)

  16. Computers and Instruction: Implications of the Rising Tide of Criticism for Reading Education.

    ERIC Educational Resources Information Center

    Balajthy, Ernest

    1988-01-01

    Examines two major reasons that schools have adopted computers without careful prior examination and planning. Surveys a variety of criticisms targeted toward some aspects of computer-based instruction in reading in an effort to direct attention to the beneficial implications of computers in the classroom. (MS)

  17. Preparing Future Secondary Computer Science Educators

    ERIC Educational Resources Information Center

    Ajwa, Iyad

    2007-01-01

    Although nearly every college offers a major in computer science, many computer science teachers at the secondary level have received little formal training. This paper presents details of a project that could make a significant contribution to national efforts to improve computer science education by combining teacher education and professional…

  18. Documentation of probabilistic fracture mechanics codes used for reactor pressure vessels subjected to pressurized thermal shock loading: Parts 1 and 2. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balkey, K.; Witt, F.J.; Bishop, B.A.

    1995-06-01

    Significant attention has been focused on the issue of reactor vessel pressurized thermal shock (PTS) for many years. Pressurized thermal shock transient events are characterized by a rapid cooldown at potentially high pressure levels that could lead to a reactor vessel integrity concern for some pressurized water reactors. As a result of regulatory and industry efforts in the early 1980s, a probabilistic risk assessment methodology has been established to address this concern. Probabilistic fracture mechanics analyses are performed as part of this methodology to determine the conditional probability of significant flaw extension for given pressurized thermal shock events. While recent industry efforts are underway to benchmark probabilistic fracture mechanics computer codes that are currently used by the nuclear industry, Part I of this report describes the comparison of two independent computer codes used at the time of the development of the original U.S. Nuclear Regulatory Commission (NRC) pressurized thermal shock rule. The work that was originally performed in 1982 and 1983 to compare the U.S. NRC - VISA and Westinghouse (W) - PFM computer codes has been documented and is provided in Part I of this report. Part II of this report describes the results of more recent industry efforts to benchmark PFM computer codes used by the nuclear industry. This study was conducted as part of the USNRC-EPRI Coordinated Research Program for reviewing the technical basis for pressurized thermal shock (PTS) analyses of the reactor pressure vessel. The work focused on the probabilistic fracture mechanics (PFM) analysis codes and methods used to perform the PTS calculations. An in-depth review of the methodologies was performed to verify the accuracy and adequacy of the various different codes. The review was structured around a series of benchmark sample problems to provide a specific context for discussion and examination of the fracture mechanics methodology.
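
    As background to the benchmarking discussion above, the core computation in a probabilistic fracture mechanics code can be caricatured as a Monte Carlo estimate of the conditional probability that a sampled flaw, under a given transient, is driven past the sampled fracture toughness. The sketch below is a generic illustration only (it is not VISA or the Westinghouse PFM code), and every distribution and the simplified stress-intensity relation in it are hypothetical placeholders.

        # Generic Monte Carlo illustration of a conditional probability of flaw extension:
        # the fraction of sampled flaws whose applied stress intensity factor exceeds the
        # sampled fracture toughness. Distributions and the K_I model are hypothetical.
        import math
        import random

        def applied_K(depth_mm, stress_mpa):
            # placeholder stress-intensity model: K_I ~ 1.12 * sigma * sqrt(pi * a)
            return 1.12 * stress_mpa * math.sqrt(math.pi * depth_mm / 1000.0)

        random.seed(1)
        trials, failures = 100_000, 0
        for _ in range(trials):
            depth = random.lognormvariate(1.0, 0.5)    # flaw depth, mm (hypothetical)
            toughness = random.gauss(120.0, 20.0)      # K_Ic, MPa*sqrt(m) (hypothetical)
            if applied_K(depth, stress_mpa=400.0) > toughness:
                failures += 1
        print("conditional probability of flaw extension:", failures / trials)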

  19. Multimedia systems overview: the big picture

    NASA Astrophysics Data System (ADS)

    Riccomi, Alfred

    1993-01-01

    The golden opportunities represented by multimedia systems have been recognized by many. The risk and cost involved in developing the products and the markets has led to a bonanza of unlikely consortia of strange bedfellows. The premier promoter of personal computing systems, Apple Computer, has joined forces with the dominant supplier of corporate computing, IBM, to form a multimedia technology joint venture called Kaleida. The consumer electronics world's leading promoter of free trade, Sony, has joined forces with the leader of Europe's protectionist companies, Philips, to create a consumer multimedia standard called CD-I. While still paying lip service to CD-I, Sony and Philips now appear to be going their separate ways. The software world's most profitable/fastest growing firm, Microsoft, has entered into alliances with each and every multimedia competitor to create a mishmash of product classes and de facto standards. The battle for multimedia standards is being fought on all fronts: on standards committees, in corporate strategic marketing meetings, within industry associations, in computer retail stores, and on the streets. Early attempts to set proprietary de facto standards were fought back, but the proprietary efforts continue with renewed vigor. Standards committees were, as always, slow to define specifications, but the official standards are now known and being implemented; ... but the proprietary efforts continue with renewed vigor. Ultimately, the buyers will decide -- like it or not. Success by the efforts to establish proprietary de facto standards could prove to be a boon to the highly creative and inventive U.S. firms, but at the cost of higher prices for consumers and slower market growth. Success by the official standards could bring lower prices for consumers and fast market growth, but force the higher-wage/higher-overhead U.S. firms to compete on a level playing field. As is always the case, you can't have your cake and eat it too.

  20. Computational Modeling of Tissue Self-Assembly

    NASA Astrophysics Data System (ADS)

    Neagu, Adrian; Kosztin, Ioan; Jakab, Karoly; Barz, Bogdan; Neagu, Monica; Jamison, Richard; Forgacs, Gabor

    As a theoretical framework for understanding the self-assembly of living cells into tissues, Steinberg proposed the differential adhesion hypothesis (DAH), according to which a specific cell type possesses a specific adhesion apparatus that, combined with cell motility, leads to cell assemblies of various cell types in the lowest adhesive energy state. Experimental and theoretical efforts over four decades turned the DAH into a fundamental principle of developmental biology that has been validated both in vitro and in vivo. Based on computational models of cell sorting, we have developed a DAH-based lattice model for tissues in interaction with their environment and simulated biological self-assembly using the Monte Carlo method. The present brief review highlights results on specific morphogenetic processes with relevance to tissue engineering applications. Our own work is presented on the background of several decades of theoretical efforts aimed at modeling morphogenesis in living tissues. Simulations of systems involving about 10^5 cells have been performed on high-end personal computers with CPU times of the order of days. Studied processes include cell sorting, cell sheet formation, and the development of endothelialized tubes from rings made of spheroids of two randomly intermixed cell types, when the medium in the interior of the tube was different from the external one. We conclude by noting that computer simulations based on mathematical models of living tissues yield useful guidelines for laboratory work and can catalyze the emergence of innovative technologies in tissue engineering.
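
    The lattice Monte Carlo approach mentioned above can be illustrated with a deliberately stripped-down sketch: two cell types and medium occupy a 2D lattice, neighboring sites contribute type-dependent contact energies, and random neighbor swaps are accepted with the Metropolis rule so that the configuration drifts toward low interfacial energy (sorting). The contact energies, lattice size, and step count below are illustrative placeholders, not parameters from the reviewed work.

        # Minimal Metropolis Monte Carlo sketch of differential-adhesion cell sorting on a
        # 2D lattice: two cell types (1, 2) and medium (0) swap positions with neighbors,
        # and a swap is accepted with the Boltzmann probability for the change in total
        # interfacial energy. Interaction energies are illustrative, not fitted values.
        import math
        import random

        N = 30                                          # lattice size
        J = {(0, 0): 0.0, (1, 1): -3.0, (2, 2): -2.0,   # like-like contact energies
             (0, 1): -0.5, (0, 2): -0.5, (1, 2): -1.0}  # unlike contact energies

        def bond(a, b):
            return J[(a, b)] if (a, b) in J else J[(b, a)]

        def site_energy(grid, i, j):
            s = grid[i][j]
            return sum(bond(s, grid[(i + di) % N][(j + dj) % N])
                       for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

        random.seed(0)
        grid = [[random.choice([0, 1, 2]) for _ in range(N)] for _ in range(N)]
        T = 1.0                                         # "fluctuation" temperature

        for step in range(200_000):
            i1, j1 = random.randrange(N), random.randrange(N)
            if random.random() < 0.5:                   # pick a random lattice neighbor
                i2, j2 = (i1 + random.choice((1, -1))) % N, j1
            else:
                i2, j2 = i1, (j1 + random.choice((1, -1))) % N
            before = site_energy(grid, i1, j1) + site_energy(grid, i2, j2)
            grid[i1][j1], grid[i2][j2] = grid[i2][j2], grid[i1][j1]
            after = site_energy(grid, i1, j1) + site_energy(grid, i2, j2)
            if after - before > 0 and random.random() >= math.exp(-(after - before) / T):
                grid[i1][j1], grid[i2][j2] = grid[i2][j2], grid[i1][j1]   # reject the swap

        like = total = 0                                # crude sorting diagnostic
        for i in range(N):
            for j in range(N):
                a, b = grid[i][j], grid[i][(j + 1) % N]
                if a and b:
                    total += 1
                    like += (a == b)
        print("like-like fraction of horizontal cell-cell contacts:", like / total)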

  1. Magnetic Skyrmion as a Nonlinear Resistive Element: A Potential Building Block for Reservoir Computing

    NASA Astrophysics Data System (ADS)

    Prychynenko, Diana; Sitte, Matthias; Litzius, Kai; Krüger, Benjamin; Bourianoff, George; Kläui, Mathias; Sinova, Jairo; Everschor-Sitte, Karin

    2018-01-01

    Inspired by the human brain, researchers are making a strong effort to find alternative models of information processing capable of imitating the high energy efficiency of neuromorphic information processing. One possible realization of cognitive computing involves reservoir computing networks. These networks are built out of nonlinear resistive elements which are recursively connected. We propose that a Skyrmion network embedded in magnetic films may provide a suitable physical implementation for reservoir computing applications. The key ingredient of such a network is a two-terminal device with nonlinear voltage characteristics originating from magnetoresistive effects, such as the anisotropic magnetoresistance or the recently discovered noncollinear magnetoresistance. The most basic element for a reservoir computing network built from "Skyrmion fabrics" is a single Skyrmion embedded in a ferromagnetic ribbon. In order to pave the way towards reservoir computing systems based on Skyrmion fabrics, we simulate and analyze (i) the current flow through a single magnetic Skyrmion due to the anisotropic magnetoresistive effect and (ii) the combined physics of local pinning and the anisotropic magnetoresistive effect.
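
    For readers unfamiliar with the computing scheme being targeted, a reservoir computer can be sketched in a few lines: a fixed, recursively connected network of nonlinear nodes transforms an input stream, and only a linear readout is trained. The sketch below is the generic echo-state form of this idea with random weights; it illustrates the computational scheme only and does not model the skyrmion device physics discussed in the paper.

        # Generic reservoir-computing sketch: a fixed random recurrent "reservoir" of
        # nonlinear nodes plus a trained linear readout (ridge regression). Task: recover
        # the square of the previous input, which needs both nonlinearity and memory.
        import numpy as np

        rng = np.random.default_rng(0)
        n_res, T = 200, 2000

        W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
        W = rng.normal(0.0, 1.0, (n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

        u = rng.uniform(-0.8, 0.8, (T, 1))                # random input sequence
        y_target = np.roll(u[:, 0] ** 2, 1)               # square of the previous input

        x = np.zeros(n_res)
        states = np.zeros((T, n_res))
        for t in range(T):
            x = np.tanh(W @ x + W_in @ u[t])              # nonlinear, recursively connected nodes
            states[t] = x

        washout = 100                                     # discard the initial transient
        X, Y = states[washout:], y_target[washout:]
        W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ Y)   # linear readout
        pred = X @ W_out
        print("readout NRMSE:", float(np.sqrt(np.mean((pred - Y) ** 2)) / np.std(Y)))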

  2. Computers for the Faculty: How on a Limited Budget.

    ERIC Educational Resources Information Center

    Arman, Hal; Kostoff, John

    An informal investigation of the use of computers at Delta College (DC) in Michigan revealed reasonable use of computers by faculty in disciplines such as mathematics, business, and technology, but very limited use in the humanities and social sciences. In an effort to increase faculty computer usage, DC decided to make computers available to any…

  3. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Treesearch

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  4. Biomechanics of Head, Neck, and Chest Injury Prevention for Soldiers: Phase 2 and 3

    DTIC Science & Technology

    2016-08-01

    understanding of the biomechanics of the head and brain. Task 2.3 details the computational modeling efforts conducted to evaluate the response of the cervical spine and the effects of cervical arthrodesis and arthroplasty. The section also details the progress made on the development of a testing apparatus to evaluate cervical spine implants in survivable loading scenarios.

  5. Limits on fundamental limits to computation.

    PubMed

    Markov, Igor L

    2014-08-14

    An indispensable part of our personal and working lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the past fifty years. Such Moore scaling now requires ever-increasing efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and increase our understanding of integrated-circuit scaling, here I review fundamental limits to computation in the areas of manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, I recapitulate how some limits were circumvented, and compare loose and tight limits. Engineering difficulties encountered by emerging technologies may indicate yet unknown limits.
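
    One concrete energy-related limit of the kind such a review surveys is Landauer's bound on irreversible bit erasure; the worked number below is added here for orientation and is not a figure quoted from the article.

        E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\,\mathrm{J\,K^{-1}}) \times (300\,\mathrm{K}) \times 0.693 \approx 2.9 \times 10^{-21}\,\mathrm{J}

    per bit erased at room temperature, several orders of magnitude below the switching energy of present-day CMOS gates, which is one reason practical hardware remains far from this particular bound.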

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilat-Lohinger, E.; Bazsó, A.; Funk, B.

    Gravitational perturbations in multi-planet systems caused by an accompanying star are the subject of this investigation. Our dynamical model is based on the binary star HD 41004 AB where a giant planet orbits HD 41004 A. We modify the orbital parameters of this system and analyze the motion of a hypothetical test planet surrounding HD 41004 A on an interior orbit to the detected giant planet. Our numerical computations indicate perturbations due to mean motion and secular resonances (SRs). The locations of these resonances are usually connected to high eccentricity and highly inclined motion depending strongly on the binary-planet architecture. As the positions of mean motion resonances can easily be determined, the main purpose of this study is to present a new semi-analytical method to determine the location of an SR without huge computational effort.

  7. Basic Modeling of the Solar Atmosphere and Spectrum

    NASA Technical Reports Server (NTRS)

    Avrett, Eugene H.; Wagner, William J. (Technical Monitor)

    2000-01-01

    During the last three years we have continued the development of extensive computer programs for constructing realistic models of the solar atmosphere and for calculating detailed spectra to use in the interpretation of solar observations. This research involves two major interrelated efforts: work by Avrett and Loeser on the Pandora computer program for optically thick non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed high-resolution synthesis of the solar spectrum using data for over 58 million atomic and molecular lines. Our objective is to construct atmospheric models from which the calculated spectra agree as well as possible with high- and low-resolution observations over a wide wavelength range. Such modeling leads to an improved understanding of the physical processes responsible for the structure and behavior of the atmosphere.

  8. Design considerations for a 10-kW integrated hydrogen-oxygen regenerative fuel cell system

    NASA Technical Reports Server (NTRS)

    Hoberecht, M. A.; Miller, T. B.; Rieker, L. L.; Gonzalez-Sanabria, O. D.

    1984-01-01

    Integration of an alkaline fuel cell subsystem with an alkaline electrolysis subsystem to form a regenerative fuel cell (RFC) system for low earth orbit (LEO) applications characterized by relatively high overall round trip electrical efficiency, long life, and high reliability is possible with present state-of-the-art technology. A hypothetical 10 kW system was computer modeled and studied based on data from ongoing contractual efforts in both the alkaline fuel cell and alkaline water electrolysis areas. The alkaline fuel cell technology is under development utilizing advanced cell components and standard Shuttle Orbiter system hardware. The alkaline electrolysis technology uses a static water vapor feed technique, and scaled-up cell hardware is being developed. The computer-aided study of the performance, operating, and design parameters of the hypothetical system is addressed.

  9. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, only need a small amount of input data, and only output a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed, cross-platform computing environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
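
    As a rough illustration of the "embarrassingly parallel" calibration pattern described above, the sketch below farms independent model runs out to worker processes and keeps only a summary objective per run; the toy run_model function and parameter grid are hypothetical stand-ins, not the authors' natural-resources models.

```python
import itertools
from concurrent.futures import ProcessPoolExecutor

def run_model(params):
    """Stand-in for one model run: takes a parameter set, returns an objective."""
    k, s = params
    simulated_peak = 10.0 * k / (1.0 + s)        # toy response surface
    observed_peak = 4.0
    return params, (simulated_peak - observed_peak) ** 2

def main():
    k_values = [0.2 * i for i in range(1, 21)]
    s_values = [0.1 * j for j in range(1, 11)]
    grid = list(itertools.product(k_values, s_values))   # 200 candidate runs

    # No inter-process communication is needed: each run is independent.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_model, grid))

    best_params, best_error = min(results, key=lambda r: r[1])
    print("best parameters:", best_params, "objective:", round(best_error, 4))

if __name__ == "__main__":
    main()
```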

  10. Numerical simulation of long-duration blast wave evolution in confined facilities

    NASA Astrophysics Data System (ADS)

    Togashi, F.; Baum, J. D.; Mestreau, E.; Löhner, R.; Sunshine, D.

    2010-10-01

    The objective of this research effort was to investigate the quasi-steady flow field produced by explosives in confined facilities. In this effort we modeled tests in which a high explosive (HE) cylindrical charge was hung in the center of a room and detonated. The HEs used for the tests were C-4 and AFX 757. While C-4 is just slightly under-oxidized and is typically modeled as an ideal explosive, AFX 757 includes a significant percentage of aluminum particles, so long-time afterburning and energy release must be considered. The Lawrence Livermore National Laboratory (LLNL)-produced thermo-chemical equilibrium algorithm, “Cheetah”, was used to estimate the remaining burnable detonation products. From these remaining species, the afterburning energy was computed and added to the flow field. Computations of the detonation and afterburn of two HEs in the confined multi-room facility were performed. The results demonstrate excellent agreement with available experimental data in terms of blast wave time of arrival, peak shock amplitude, reverberation, and total impulse (and hence total energy release, via either the detonation or afterburn processes).

  11. Integrated modeling of advanced optical systems

    NASA Astrophysics Data System (ADS)

    Briggs, Hugh C.; Needels, Laura; Levine, B. Martin

    1993-02-01

    This poster session paper describes an integrated modeling and analysis capability being developed at JPL under funding provided by the JPL Director's Discretionary Fund and the JPL Control/Structure Interaction Program (CSI). The posters briefly summarize the program capabilities and illustrate them with an example problem. The computer programs developed under this effort will provide an unprecedented capability for integrated modeling and design of high performance optical spacecraft. The engineering disciplines supported include structural dynamics, controls, optics and thermodynamics. Such tools are needed in order to evaluate the end-to-end system performance of spacecraft such as OSI, POINTS, and SMMM. This paper illustrates the proof-of-concept tools that have been developed to establish the technology requirements and demonstrate the new features of integrated modeling and design. The current program also includes implementation of a prototype tool based upon the CAESY environment being developed under the NASA Guidance and Control Research and Technology Computational Controls Program. This prototype will be available late in FY-92. The development plan proposes a major software production effort to fabricate, deliver, support and maintain a national-class tool from FY-93 through FY-95.

  12. A Data-Driven Framework for Incorporating New Tools for ...

    EPA Pesticide Factsheets

    This talk was given during the “Exposure-Based Toxicity Testing” session at the annual meeting of the International Society for Exposure Science. It provided an update on the state of the science and tools that may be employed in risk-based prioritization efforts. It outlined knowledge gained from the data provided using these high-throughput tools to assess chemical bioactivity and to predict chemical exposures and also identified future needs. It provided an opportunity to showcase ongoing research efforts within the National Exposure Research Laboratory and the National Center for Computational Toxicology within the Office of Research and Development to an international audience. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.

  13. Computational Methods for HSCT-Inlet Controls/CFD Interdisciplinary Research

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Melcher, Kevin J.; Chicatelli, Amy K.; Hartley, Tom T.; Chung, Joongkee

    1994-01-01

    A program aimed at facilitating the use of computational fluid dynamics (CFD) simulations by the controls discipline is presented. The objective is to reduce the development time and cost for propulsion system controls by using CFD simulations to obtain high-fidelity system models for control design and as numerical test beds for control system testing and validation. An interdisciplinary team has been formed to develop analytical and computational tools in three discipline areas: controls, CFD, and computational technology. The controls effort has focused on specifying requirements for an interface between the controls specialist and CFD simulations and a new method for extracting linear, reduced-order control models from CFD simulations. Existing CFD codes are being modified to permit time accurate execution and provide realistic boundary conditions for controls studies. Parallel processing and distributed computing techniques, along with existing system integration software, are being used to reduce CFD execution times and to support the development of an integrated analysis/design system. This paper describes: the initial application for the technology being developed, the high speed civil transport (HSCT) inlet control problem; activities being pursued in each discipline area; and a prototype analysis/design system in place for interactive operation and visualization of a time-accurate HSCT-inlet simulation.

  14. Highly parameterized model calibration with cloud computing: an example of regional flow model calibration in northeast Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.

    2014-05-01

    Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865 km² domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized (with spatially variable hydraulic conductivity fields), as was the areal recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.

  15. Methane Adsorption in Zr-Based MOFs: Comparison and Critical Evaluation of Force Fields

    PubMed Central

    2017-01-01

    The search for nanoporous materials that are highly performing for gas storage and separation is one of the contemporary challenges in material design. The computational tools to aid these experimental efforts are widely available, and adsorption isotherms are routinely computed for huge sets of (hypothetical) frameworks. Clearly the computational results depend on the interactions between the adsorbed species and the adsorbent, which are commonly described using force fields. In this paper, an extensive comparison and in-depth investigation of several force fields from literature is reported for the case of methane adsorption in the Zr-based Metal–Organic Frameworks UiO-66, UiO-67, DUT-52, NU-1000, and MOF-808. Significant quantitative differences in the computed uptake are observed when comparing different force fields, but most qualitative features are common which suggests some predictive power of the simulations when it comes to these properties. More insight into the host–guest interactions is obtained by benchmarking the force fields with an extensive number of ab initio computed single molecule interaction energies. This analysis at the molecular level reveals that especially ab initio derived force fields perform well in reproducing the ab initio interaction energies. Finally, the high sensitivity of uptake predictions on the underlying potential energy surface is explored. PMID:29170687
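
    For readers unfamiliar with how such force fields assign host–guest energies, the following minimal sketch evaluates a 12-6 Lennard-Jones pair interaction, the common functional form underlying many of the compared force fields; the epsilon and sigma values are illustrative placeholders, not parameters from the benchmarked force fields.

```python
import numpy as np

def lj_energy(r, epsilon, sigma):
    """12-6 Lennard-Jones pair energy (kJ/mol) at separation r (Angstrom)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# Hypothetical parameters for a united-atom CH4 / framework-oxygen pair
# (illustrative values only, not taken from any of the compared force fields).
epsilon, sigma = 0.96, 3.2   # kJ/mol, Angstrom

for r in np.linspace(3.0, 8.0, 6):
    print(f"r = {r:4.1f} A   E = {lj_energy(r, epsilon, sigma):7.3f} kJ/mol")
```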

  16. Information Security: Governmentwide Guidance Needed to Assist Agencies in Implementing Cloud Computing

    DTIC Science & Technology

    2010-07-01

    Cloud computing, an emerging form of computing in which users have access to scalable, on-demand capabilities that are provided through Internet... cloud computing, (2) the information security implications of using cloud computing services in the Federal Government, and (3) federal guidance and...efforts to address information security when using cloud computing. The complete report is titled Information Security: Federal Guidance Needed to

  17. Supersonic reacting internal flowfields

    NASA Astrophysics Data System (ADS)

    Drummond, J. P.

    The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flowfields, and the application of these techniques to an increasingly difficult set of combustion problems are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion are then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.

  18. Computational Modeling of Liquid and Gaseous Control Valves

    NASA Technical Reports Server (NTRS)

    Daines, Russell; Ahuja, Vineet; Hosangadi, Ashvin; Shipman, Jeremy; Moore, Arden; Sulyma, Peter

    2005-01-01

    In this paper, computational modeling efforts undertaken at NASA Stennis Space Center in support of rocket engine component testing are discussed. Such analyses include structurally complex cryogenic liquid valves and gas valves operating at high pressures and flow rates. Basic modeling and initial successes are documented, and other issues that make valve modeling at SSC somewhat unique are also addressed. These include transient behavior, valve stall, and the determination of flow patterns in LOX valves. Hexahedral structured grids are used for valves that can be simplified through the use of an axisymmetric approximation. Hybrid unstructured methodology is used for structurally complex valves that have disparate length scales and complex flow paths that include strong swirl, local recirculation zones, and secondary flow effects. Hexahedral (structured), unstructured, and hybrid meshes are compared for accuracy and computational efficiency. Accuracy is determined using verification and validation techniques.

  19. Supersonic reacting internal flow fields

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip

    1989-01-01

    The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flow fields, and the application of these techniques to an increasingly difficult set of combustion problems are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion are then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.

  20. Cloud-based Jupyter Notebooks for Water Data Analysis

    NASA Astrophysics Data System (ADS)

    Castronova, A. M.; Brazil, L.; Seul, M.

    2017-12-01

    The development and adoption of technologies by the water science community to improve our ability to openly collaborate and share workflows will have a transformative impact on how we address the challenges associated with collaborative and reproducible scientific research. Jupyter notebooks offer one solution by providing an open-source platform for creating metadata-rich toolchains for modeling and data analysis applications. Adoption of this technology within the water sciences, coupled with publicly available datasets from agencies such as USGS, NASA, and EPA enables researchers to easily prototype and execute data intensive toolchains. Moreover, implementing this software stack in a cloud-based environment extends its native functionality to provide researchers a mechanism to build and execute toolchains that are too large or computationally demanding for typical desktop computers. Additionally, this cloud-based solution enables scientists to disseminate data processing routines alongside journal publications in an effort to support reproducibility. For example, these data collection and analysis toolchains can be shared, archived, and published using the HydroShare platform or downloaded and executed locally to reproduce scientific analysis. This work presents the design and implementation of a cloud-based Jupyter environment and its application for collecting, aggregating, and munging various datasets in a transparent, sharable, and self-documented manner. The goals of this work are to establish a free and open source platform for domain scientists to (1) conduct data intensive and computationally intensive collaborative research, (2) utilize high performance libraries, models, and routines within a pre-configured cloud environment, and (3) enable dissemination of research products. This presentation will discuss recent efforts towards achieving these goals, and describe the architectural design of the notebook server in an effort to support collaborative and reproducible science.

  1. Computational fluid dynamics modeling of laminar, transitional, and turbulent flows with sensitivity to streamline curvature and rotational effects

    NASA Astrophysics Data System (ADS)

    Chitta, Varun

    Modeling of complex flows involving the combined effects of flow transition and streamline curvature using two advanced turbulence models, one in the Reynolds-averaged Navier-Stokes (RANS) category and the other in the hybrid RANS-Large eddy simulation (LES) category, is considered in this research effort. In the first part of the research, a new scalar eddy-viscosity model (EVM) is proposed, designed to exhibit physically correct responses to flow transition, streamline curvature, and system rotation effects. The four-equation model developed herein is a curvature-sensitized version of a commercially available three-equation transition-sensitive model. The physical effects of rotation and curvature (RC) enter the model through the added transport equation, analogous to a transverse turbulent velocity scale. The eddy-viscosity has been redefined such that the proposed model is constrained to reduce to the original transition-sensitive model definition in nonrotating flows or in regions with negligible RC effects. In the second part of the research, the developed four-equation model is combined with a LES technique using a new hybrid modeling framework, dynamic hybrid RANS-LES (DHRL). The new framework is highly generalized, allowing coupling of any desired LES model with any given RANS model, and addresses several deficiencies inherent in most current hybrid models. In the present research effort, the DHRL model comprises the proposed four-equation model for the RANS component and the MILES scheme for the LES component. Both models were implemented in a commercial computational fluid dynamics (CFD) solver and tested on a number of engineering and generic flow problems. Results from both the RANS and hybrid models show successful resolution of the combined effects of transition and curvature with reasonable engineering accuracy, and for only a small increase in computational cost. In addition, results from the hybrid model show significant levels of turbulent fluctuations in the flowfield and improved accuracy compared to RANS model predictions, and are obtained at a significantly reduced computational cost compared to full LES models. The results suggest that the advanced turbulence modeling techniques presented in this research effort have potential as practical tools for solving low/high Re flows over blunt/curved bodies for the prediction of transition and RC effects.

  2. Quantitative Assessment of Optical Coherence Tomography Imaging Performance with Phantom-Based Test Methods And Computational Modeling

    NASA Astrophysics Data System (ADS)

    Agrawal, Anant

    Optical coherence tomography (OCT) is a powerful medical imaging modality that uniquely produces high-resolution cross-sectional images of tissue using low energy light. Its clinical applications and technological capabilities have grown substantially since its invention about twenty years ago, but efforts have been limited to develop tools to assess performance of OCT devices with respect to the quality and content of acquired images. Such tools are important to ensure information derived from OCT signals and images is accurate and consistent, in order to support further technology development, promote standardization, and benefit public health. The research in this dissertation investigates new physical and computational models which can provide unique insights into specific performance characteristics of OCT devices. Physical models, known as phantoms, are fabricated and evaluated in the interest of establishing standardized test methods to measure several important quantities relevant to image quality. (1) Spatial resolution is measured with a nanoparticle-embedded phantom and model eye which together yield the point spread function under conditions where OCT is commonly used. (2) A multi-layered phantom is constructed to measure the contrast transfer function along the axis of light propagation, relevant for cross-sectional imaging capabilities. (3) Existing and new methods to determine device sensitivity are examined and compared, to better understand the detection limits of OCT. A novel computational model based on the finite-difference time-domain (FDTD) method, which simulates the physics of light behavior at the sub-microscopic level within complex, heterogeneous media, is developed to probe device and tissue characteristics influencing the information content of an OCT image. This model is first tested in simple geometric configurations to understand its accuracy and limitations, then a highly realistic representation of a biological cell, the retinal cone photoreceptor, is created and its resulting OCT signals studied. The phantoms and their associated test methods have successfully yielded novel types of data on the specific performance parameters of interest, which can feed standardization efforts within the OCT community. The level of signal detail provided by the computational model is unprecedented and gives significant insights into the effects of subcellular structures on OCT signals. Together, the outputs of this research effort serve as new tools in the toolkit to examine the intricate details of how and how well OCT devices produce information-rich images of biological tissue.
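
    A minimal one-dimensional sketch of the finite-difference time-domain (FDTD) scheme mentioned above is given below; the grid size, source, and material properties are arbitrary assumptions for illustration and bear no relation to the dissertation's retinal-cell model.

```python
import numpy as np

# Minimal 1-D FDTD (Yee scheme) sketch in normalized units.
nx, nt = 400, 800            # grid cells, time steps
ez = np.zeros(nx)            # electric field on integer grid points
hy = np.zeros(nx - 1)        # magnetic field on the staggered half-grid
eps_r = np.ones(nx)          # relative permittivity
eps_r[250:] = 2.25           # a dielectric half-space (a crude "tissue-like" slab)

for t in range(nt):
    hy += np.diff(ez) * 0.5                        # update H from the curl of E
    ez[1:-1] += np.diff(hy) * 0.5 / eps_r[1:-1]    # update E from the curl of H
    ez[50] += np.exp(-((t - 60) / 20.0) ** 2)      # soft Gaussian pulse source

print("peak field in the dielectric region:", float(ez[250:].max()))
```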

  3. Web-client based distributed generalization and geoprocessing

    USGS Publications Warehouse

    Wolf, E.B.; Howe, K.

    2009-01-01

    Generalization and geoprocessing operations on geospatial information were once the domain of complex software running on high-performance workstations. Currently, these computationally intensive processes are the domain of desktop applications. Recent efforts have been made to move geoprocessing operations server-side in a distributed, web accessible environment. This paper initiates research into portable client-side generalization and geoprocessing operations as part of a larger effort in user-centered design for the US Geological Survey's The National Map. An implementation of the Ramer-Douglas-Peucker (RDP) line simplification algorithm was created in the open source OpenLayers geoweb client. This algorithm implementation was benchmarked using differing data structures and browser platforms. The implementation and results of the benchmarks are discussed in the general context of client-side geoprocessing.
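
    For reference, a compact Python version of the Ramer-Douglas-Peucker simplification is sketched below (the paper's implementation was in JavaScript within OpenLayers); the sample polyline and tolerance are illustrative.

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker line simplification.
    points: list of (x, y); epsilon: max allowed perpendicular deviation."""
    if len(points) < 3:
        return points[:]
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1e-12
    # Find the vertex farthest from the chord between the two endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        left = rdp(points[: index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right          # avoid duplicating the split point
    return [points[0], points[-1]]

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(rdp(line, epsilon=1.0))
```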

  4. First principles materials design of novel functional oxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, Valentino R.; Voas, Brian K.; Bridges, Craig A.

    2016-05-31

    We review our efforts to develop and implement robust computational approaches for exploring phase stability to facilitate the prediction-to-synthesis process of novel functional oxides. These efforts focus on a synergy between (i) electronic structure calculations for properties predictions, (ii) phenomenological/empirical methods for examining phase stability as related to both phase segregation and temperature-dependent transitions and (iii) experimental validation through synthesis and characterization. We illustrate this philosophy by examining an inaugural study that seeks to discover novel functional oxides with high piezoelectric responses. Lastly, our results show progress towards developing a framework through which solid solutions can be studied to predict materials with enhanced properties that can be synthesized and remain active under device relevant conditions.

  5. On the torque and wear behavior of selected thin film MOS2 lubricated gimbal bearings

    NASA Technical Reports Server (NTRS)

    Bohner, John J.; Conley, Peter L.

    1988-01-01

    During the thermal vacuum test phase of the GOES 7 spacecraft, the primary scan mirror system exhibited unacceptably high drive friction. The observed friction was found to correlate with small misalignments in the mirror structure and unavoidable loads induced by the vehicle spin. An intensive effort to understand and document the performance of the scan mirror bearing system under these loads is described. This effort involved calculation of the bearing loads and expected friction torque, comparison of the computed values to test data, and verification of the lubrication system performance and limitations under external loads. The study culminated in a successful system launch in February of 1987. The system has operated as predicted since that time.

  6. Artificial intelligence support for scientific model-building

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1992-01-01

    Scientific model-building can be a time-intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientific development team to understand. We believe that artificial intelligence techniques can facilitate both the model-building and model-sharing process. In this paper, we overview our effort to build a scientific modeling software tool that aids the scientist in developing and using models. This tool includes an interactive intelligent graphical interface, a high-level domain specific modeling language, a library of physics equations and experimental datasets, and a suite of data display facilities.

  7. Quantum computation in the analysis of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Gomez, Richard B.; Ghoshal, Debabrata; Jayanna, Anil

    2004-08-01

    Recent research on quantum computation provides quantum algorithms with higher efficiency and speedup compared to their classical counterparts. In this paper, we present the results of our investigation of several applications of such quantum algorithms, especially Grover's search algorithm, in the analysis of hyperspectral data. We found many parallels with Grover's method in existing data processing work that makes use of classical spectral matching algorithms. Our efforts also included the study of several methods dealing with hyperspectral image analysis in which classical computation methods involving large data sets could be replaced with quantum computation methods. The crux of the problem in computation involving a hyperspectral image data cube is to convert the large amount of data in high-dimensional space into real information. Currently, using the classical model, several time-consuming methods and steps are necessary to analyze these data, including animation, the Minimum Noise Fraction transform, the Pixel Purity Index algorithm, N-dimensional scatter plots, and identification of endmember spectra. If a quantum model of computation involving hyperspectral image data can be developed and formalized, it is highly likely that information retrieval from hyperspectral image data cubes would be a much easier process and the final information content would be much more meaningful and timely. In this case, dimensionality would not be a curse, but a blessing.
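
    The flavor of Grover's search can be conveyed with a small classical statevector simulation, sketched below; the register size and the "marked" index (standing in for a matching reference spectrum) are illustrative assumptions, not part of the paper's analysis.

```python
import numpy as np

# Toy statevector simulation of Grover's search over N = 2**n items.
n, marked = 4, 11
N = 2 ** n
state = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition

oracle = np.ones(N)
oracle[marked] = -1.0                         # phase flip on the marked item

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state *= oracle                           # oracle step
    state = 2 * state.mean() - state          # inversion about the mean (diffusion)

probs = state ** 2
print("most probable index:", int(probs.argmax()),
      "probability:", round(float(probs.max()), 3))
```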

  8. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE PAGES

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...

    2017-03-20

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, ‘bottom-up’ and ‘top-down’, are illustrated. Here, preliminary results show that up to 8.5x improvement at the selected kernel level was achieved with the first approach, and up to 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  9. Here and now: the intersection of computational science, quantum-mechanical simulations, and materials science

    NASA Astrophysics Data System (ADS)

    Marzari, Nicola

    The last 30 years have seen the steady and exhilarating development of powerful quantum-simulation engines for extended systems, dedicated to the solution of the Kohn-Sham equations of density-functional theory, often augmented by density-functional perturbation theory, many-body perturbation theory, time-dependent density-functional theory, dynamical mean-field theory, and quantum Monte Carlo. Their implementation on massively parallel architectures, now leveraging also GPUs and accelerators, has started a massive effort in the prediction from first principles of many or of complex materials properties, leading the way to the exascale through the combination of HPC (high-performance computing) and HTC (high-throughput computing). Challenges and opportunities abound: complementing hardware and software investments and design; developing the materials' informatics infrastructure needed to encode knowledge into complex protocols and workflows of calculations; managing and curating data; resisting the complacency that we have already reached the predictive accuracy needed for materials design, or a robust level of verification of the different quantum engines. In this talk I will provide an overview of these challenges, with the ultimate prize being the computational understanding, prediction, and design of properties and performance for novel or complex materials and devices.

  10. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, ‘bottom-up’ and ‘top-down’, are illustrated. Here, preliminary results show that up to 8.5x improvement at the selected kernel level was achieved with the first approach, and up to 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.

  11. Linear and passive silicon diodes, isolators, and logic gates

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Yuan

    2013-12-01

    Silicon photonic integrated devices and circuits offer a promising means to revolutionize information processing and computing technologies. One important reason is that these devices are compatible with the conventional complementary metal oxide semiconductor (CMOS) processing technology that dominates the current microelectronics industry. Yet the dream of building optical computers cannot be realized without breakthroughs in several key elements, including optical diodes, isolators, and logic gates with low power, high signal contrast, and large bandwidth. Photonic crystals have great power to mold the flow of light at the micrometer/nanometer scale and are a promising platform for optical integration. In this paper we present our recent efforts in the design, fabrication, and characterization of ultracompact, linear, passive on-chip optical diodes, isolators, and logic gates based on silicon two-dimensional photonic crystal slabs. Both simulation and experimental results show high performance of these newly designed devices. These linear and passive silicon devices have the unique properties of small footprint, low power requirements, large bandwidth, fast response speed, ease of fabrication, and compatibility with CMOS technology. Further improving their performance would open a road towards photonic logic and optical computing and help to construct nanophotonic on-chip processor architectures for future optical computers.

  12. Blast Load Simulator Experiments for Computational Model Validation: Report 1

    DTIC Science & Technology

    2016-08-01

    involving the inclusion of non-responding box-type structures in a BLS simulated blast environment. The BLS is a highly tunable compressed-gas-driven...Blast Load Simulator (BLS) to evaluate its suitability for a future effort involving the inclusion of non-responding box-type structures located in...Recommendations Preliminary testing indicated that inclusion of the grill and diaphragm striker resulted in a decrease in peak pressure of about 12

  13. Goals of Government-Funded Public Domain Software Efforts

    PubMed Central

    Rishel, Wesley J.

    1980-01-01

    The development of public domain software under Federal aegis and support has made possible a broadly competitive field of computer-oriented management information system consulting organizations with high technical competence and the potential for strong user orientation and loyalty. The impact of this assumption of major “front-end costs” by the Federal government has additional spin-off effects in terms of standardization and transportability features as well as reduced capital costs to the user.

  14. Pharmacophore modeling, docking, and principal component analysis based clustering: combined computer-assisted approaches to identify new inhibitors of the human rhinovirus coat protein.

    PubMed

    Steindl, Theodora M; Crump, Carolyn E; Hayden, Frederick G; Langer, Thierry

    2005-10-06

    The development and application of a sophisticated virtual screening and selection protocol to identify potential, novel inhibitors of the human rhinovirus coat protein employing various computer-assisted strategies are described. A large commercially available database of compounds was screened using a highly selective, structure-based pharmacophore model generated with the program Catalyst. A docking study and a principal component analysis were carried out within the software package Cerius and served to validate and further refine the obtained results. These combined efforts led to the selection of six candidate structures, for which in vitro anti-rhinoviral activity could be shown in a biological assay.
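
    A generic sketch of the principal-component-analysis-based clustering step in such a selection protocol is shown below using scikit-learn; the descriptor matrix, cluster count, and representative-picking rule are hypothetical and are not taken from the Catalyst/Cerius workflow described above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical descriptor matrix: one row per candidate compound,
# one column per molecular descriptor (random placeholders, not real data).
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(120, 30))

# Project onto the first few principal components, then cluster in that space.
scores = PCA(n_components=3).fit_transform(descriptors)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)

# Pick one representative compound (closest to the cluster centre) per cluster.
for k in range(6):
    members = np.where(labels == k)[0]
    centre = scores[members].mean(axis=0)
    rep = members[np.argmin(np.linalg.norm(scores[members] - centre, axis=1))]
    print(f"cluster {k}: {len(members)} compounds, representative index {rep}")
```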

  15. An Open Source Rapid Computer Aided Control System Design Toolchain Using Scilab, Scicos and RTAI Linux

    NASA Astrophysics Data System (ADS)

    Bouchpan-Lerust-Juéry, L.

    2007-08-01

    Current and next-generation on-board computer systems tend to implement real-time embedded control applications (e.g. Attitude and Orbit Control Subsystem (AOCS), Packet Utilization Standard (PUS), spacecraft autonomy...) which must meet high standards of reliability and predictability as well as safety. All these requirements impose a considerable amount of effort and cost on the space software industry. The first part of this paper presents a free, open-source, integrated solution to develop RTAI applications from analysis, design, and simulation through direct implementation using code generation based on open-source tools; the second part summarises this suggested approach, its results, and the conclusions for further work.

  16. Identifying relationships between unrelated pharmaceutical target proteins on the basis of shared active compounds.

    PubMed

    Miljković, Filip; Kunimoto, Ryo; Bajorath, Jürgen

    2017-08-01

    Computational exploration of small-molecule-based relationships between target proteins from different families. Target annotations of drugs and other bioactive compounds were systematically analyzed on the basis of high-confidence activity data. A total of 286 novel chemical links were established between distantly related or unrelated target proteins. These relationships involved a total of 1859 bioactive compounds including 147 drugs and 141 targets. Computational analysis of large amounts of compounds and activity data has revealed unexpected relationships between diverse target proteins on the basis of compounds they share. These relationships are relevant for drug discovery efforts. Target pairs that we have identified and associated compound information are made freely available.
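
    The core idea of linking targets through shared active compounds can be sketched in a few lines; the activity map below is a made-up illustration, not the high-confidence data set analyzed in the study.

```python
from itertools import combinations

# Hypothetical target-to-active-compound map (illustrative only).
activity = {
    "TARGET_A": {"cmpd1", "cmpd2", "cmpd3"},
    "TARGET_B": {"cmpd3", "cmpd4"},
    "TARGET_C": {"cmpd5"},
}

links = []
for t1, t2 in combinations(activity, 2):
    shared = activity[t1] & activity[t2]
    if shared:                       # a chemical link exists if any compound is shared
        links.append((t1, t2, sorted(shared)))

for t1, t2, shared in links:
    print(f"{t1} <-> {t2}: shared compounds {shared}")
```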

  17. Neutron tomography of particulate filters: A non-destructive investigation tool for applied and industrial research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toops, Todd J.; Bilheux, Hassina Z.; Voisin, Sophie

    2013-08-19

    This research describes the development and implementation of high-fidelity neutron imaging and the associated analysis of the images. This advanced capability allows the non-destructive, non-invasive imaging of particulate filters (PFs) and of how the deposition of particulate and catalytic washcoat occurs within the filter. The majority of the efforts described here were performed at the High Flux Isotope Reactor (HFIR) CG-1D neutron imaging beamline at Oak Ridge National Laboratory; the current spatial resolution is approximately 50 μm. The sample holder is equipped with a high-precision rotation stage that allows 3D imaging (i.e., computed tomography) of the sample when combined with computerized reconstruction tools. What enables the neutron-based image is the ability of some elements to absorb or scatter neutrons while other elements allow neutrons to pass through them with negligible interaction. Of particular interest in this study is the scattering of neutrons by hydrogen-containing molecules, such as hydrocarbons (HCs) and/or water, which are adsorbed to the surface of soot, ash and catalytic washcoat. Even so, the interactions with this adsorbed water/HC are weak, and computational techniques were required to enhance the contrast, primarily a modified simultaneous iterative reconstruction technique (SIRT). Lastly, this effort describes the following systems: particulate randomly distributed in a PF, ash deposition in PFs, a catalyzed washcoat layer in a PF, and three particulate loadings in a SiC PF.
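
    For orientation, a minimal NumPy sketch of the simultaneous iterative reconstruction technique (SIRT) is given below; the toy 2x2 image and ray geometry are illustrative and unrelated to the CG-1D beamline data, and the study's modified SIRT differs in detail.

```python
import numpy as np

def sirt(A, b, n_iter=100):
    """Minimal SIRT: x <- x + C A^T R (b - A x), where R and C are the inverse
    row and column sums of the system matrix A (the projection geometry)."""
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    R = np.where(row_sums > 0, 1.0 / row_sums, 0.0)
    C = np.where(col_sums > 0, 1.0 / col_sums, 0.0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += C * (A.T @ (R * (b - A @ x)))
        x = np.clip(x, 0, None)          # enforce non-negative attenuation
    return x

# Tiny toy problem: a 2x2 "image" observed through four ray sums.
A = np.array([[1, 1, 0, 0],    # top row
              [0, 0, 1, 1],    # bottom row
              [1, 0, 1, 0],    # left column
              [0, 1, 0, 1.]])  # right column
x_true = np.array([1.0, 2.0, 3.0, 4.0])
print(sirt(A, A @ x_true).round(2))
```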

  18. Future Computer Requirements for Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.

  19. Overview 1993: Computational applications

    NASA Technical Reports Server (NTRS)

    Benek, John A.

    1993-01-01

    Computational applications include projects that apply or develop computationally intensive computer programs. Such programs typically require supercomputers to obtain solutions in a timely fashion. This report describes two CSTAR projects involving Computational Fluid Dynamics (CFD) technology. The first, the Parallel Processing Initiative, is a joint development effort and the second, the Chimera Technology Development, is a transfer of government developed technology to American industry.

  20. Computing at the speed limit (supercomputers)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhard, R.

    1982-07-01

    The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers, about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.

  1. Computer Augmented Video Education.

    ERIC Educational Resources Information Center

    Sousa, M. B.

    1979-01-01

    Describes project CAVE (Computer Augmented Video Education), an ongoing effort at the U.S. Naval Academy to present lecture material on videocassette tape, reinforced by drill and practice through an interactive computer system supported by a 12 channel closed circuit television distribution and production facility. (RAO)

  2. Computer Guided Instructional Design.

    ERIC Educational Resources Information Center

    Merrill, M. David; Wood, Larry E.

    1984-01-01

    Describes preliminary efforts to create the Lesson Design System, a computer-guided instructional design system written in Pascal for Apple microcomputers. Its content outline, strategy, display, and online lesson editors correspond roughly to instructional design phases of content and strategy analysis, display creation, and computer programing…

  3. CAROLINA CENTER FOR COMPUTATIONAL TOXICOLOGY

    EPA Science Inventory

    The Center will advance the field of computational toxicology through the development of new methods and tools, as well as through collaborative efforts. In each Project, new computer-based models will be developed and published that represent the state-of-the-art. The tools p...

  4. LOCATE: a mouse protein subcellular localization database

    PubMed Central

    Fink, J. Lynn; Aturaliya, Rajith N.; Davis, Melissa J.; Zhang, Fasheng; Hanson, Kelly; Teasdale, Melvena S.; Kai, Chikatoshi; Kawai, Jun; Carninci, Piero; Hayashizaki, Yoshihide; Teasdale, Rohan D.

    2006-01-01

    We present here LOCATE, a curated, web-accessible database that houses data describing the membrane organization and subcellular localization of proteins from the FANTOM3 Isoform Protein Sequence set. Membrane organization is predicted by the high-throughput, computational pipeline MemO. The subcellular locations of selected proteins from this set were determined by a high-throughput, immunofluorescence-based assay and by manually reviewing >1700 peer-reviewed publications. LOCATE represents the first effort to catalogue the experimentally verified subcellular location and membrane organization of mammalian proteins using a high-throughput approach and provides localization data for ∼40% of the mouse proteome. It is available at . PMID:16381849

  5. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diachin, L F; Garaizar, F X; Henson, V E

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  6. Mantle convection on modern supercomputers

    NASA Astrophysics Data System (ADS)

    Weismüller, Jens; Gmeiner, Björn; Mohr, Marcus; Waluga, Christian; Wohlmuth, Barbara; Rüde, Ulrich; Bunge, Hans-Peter

    2015-04-01

    Mantle convection is the cause for plate tectonics, the formation of mountains and oceans, and the main driving mechanism behind earthquakes. The convection process is modeled by a system of partial differential equations describing the conservation of mass, momentum and energy. Characteristic to mantle flow is the vast disparity of length scales from global to microscopic, turning mantle convection simulations into a challenging application for high-performance computing. As system size and technical complexity of the simulations continue to increase, design and implementation of simulation models for next generation large-scale architectures demand an interdisciplinary co-design. Here we report about recent advances of the TERRA-NEO project, which is part of the high visibility SPPEXA program, and a joint effort of four research groups in computer sciences, mathematics and geophysical application under the leadership of FAU Erlangen. TERRA-NEO develops algorithms for future HPC infrastructures, focusing on high computational efficiency and resilience in next generation mantle convection models. We present software that can resolve the Earth's mantle with up to 10¹² grid points and scales efficiently to massively parallel hardware with more than 50,000 processors. We use our simulations to explore the dynamic regime of mantle convection assessing the impact of small scale processes on global mantle flow.

  7. Overview of NASA/OAST efforts related to manufacturing technology

    NASA Technical Reports Server (NTRS)

    Saunders, N. T.

    1976-01-01

    An overview of some of NASA's current efforts related to manufacturing technology and some possible directions for the future are presented. The topics discussed are: computer-aided design, composite structures, and turbine engine components.

  8. Data Network Weather Service Reporting - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael Frey

    2012-08-30

    A final report is made of a three-year effort to develop a new forecasting paradigm for computer network performance. This effort was made in co-ordination with Fermi Lab's construction of e-Weather Center.

  9. Aggregating Data for Computational Toxicology Applications: The U.S. Environmental Protection Agency (EPA) Aggregated Computational Toxicology Resource (ACToR) System

    PubMed Central

    Judson, Richard S.; Martin, Matthew T.; Egeghy, Peter; Gangwal, Sumit; Reif, David M.; Kothiya, Parth; Wolf, Maritja; Cathey, Tommy; Transue, Thomas; Smith, Doris; Vail, James; Frame, Alicia; Mosher, Shad; Cohen Hubal, Elaine A.; Richard, Ann M.

    2012-01-01

    Computational toxicology combines data from high-throughput test methods, chemical structure analyses and other biological domains (e.g., genes, proteins, cells, tissues) with the goals of predicting and understanding the underlying mechanistic causes of chemical toxicity and for predicting toxicity of new chemicals and products. A key feature of such approaches is their reliance on knowledge extracted from large collections of data and data sets in computable formats. The U.S. Environmental Protection Agency (EPA) has developed a large data resource called ACToR (Aggregated Computational Toxicology Resource) to support these data-intensive efforts. ACToR comprises four main repositories: core ACToR (chemical identifiers and structures, and summary data on hazard, exposure, use, and other domains), ToxRefDB (Toxicity Reference Database, a compilation of detailed in vivo toxicity data from guideline studies), ExpoCastDB (detailed human exposure data from observational studies of selected chemicals), and ToxCastDB (data from high-throughput screening programs, including links to underlying biological information related to genes and pathways). The EPA DSSTox (Distributed Structure-Searchable Toxicity) program provides expert-reviewed chemical structures and associated information for these and other high-interest public inventories. Overall, the ACToR system contains information on about 400,000 chemicals from 1100 different sources. The entire system is built using open source tools and is freely available to download. This review describes the organization of the data repository and provides selected examples of use cases. PMID:22408426

  10. Aggregating data for computational toxicology applications: The U.S. Environmental Protection Agency (EPA) Aggregated Computational Toxicology Resource (ACToR) System.

    PubMed

    Judson, Richard S; Martin, Matthew T; Egeghy, Peter; Gangwal, Sumit; Reif, David M; Kothiya, Parth; Wolf, Maritja; Cathey, Tommy; Transue, Thomas; Smith, Doris; Vail, James; Frame, Alicia; Mosher, Shad; Cohen Hubal, Elaine A; Richard, Ann M

    2012-01-01

    Computational toxicology combines data from high-throughput test methods, chemical structure analyses and other biological domains (e.g., genes, proteins, cells, tissues) with the goals of predicting and understanding the underlying mechanistic causes of chemical toxicity and for predicting toxicity of new chemicals and products. A key feature of such approaches is their reliance on knowledge extracted from large collections of data and data sets in computable formats. The U.S. Environmental Protection Agency (EPA) has developed a large data resource called ACToR (Aggregated Computational Toxicology Resource) to support these data-intensive efforts. ACToR comprises four main repositories: core ACToR (chemical identifiers and structures, and summary data on hazard, exposure, use, and other domains), ToxRefDB (Toxicity Reference Database, a compilation of detailed in vivo toxicity data from guideline studies), ExpoCastDB (detailed human exposure data from observational studies of selected chemicals), and ToxCastDB (data from high-throughput screening programs, including links to underlying biological information related to genes and pathways). The EPA DSSTox (Distributed Structure-Searchable Toxicity) program provides expert-reviewed chemical structures and associated information for these and other high-interest public inventories. Overall, the ACToR system contains information on about 400,000 chemicals from 1100 different sources. The entire system is built using open source tools and is freely available to download. This review describes the organization of the data repository and provides selected examples of use cases.

  11. Implementing Equal Access Computer Labs.

    ERIC Educational Resources Information Center

    Clinton, Janeen; And Others

    This paper discusses the philosophy followed in Palm Beach County to adapt computer literacy curriculum, hardware, and software to meet the needs of all children. The Department of Exceptional Student Education and the Department of Instructional Computing Services cooperated in planning strategies and coordinating efforts to implement equal…

  12. The Effort Paradox: Effort Is Both Costly and Valued.

    PubMed

    Inzlicht, Michael; Shenhav, Amitai; Olivola, Christopher Y

    2018-04-01

    According to prominent models in cognitive psychology, neuroscience, and economics, effort (be it physical or mental) is costly: when given a choice, humans and non-human animals alike tend to avoid effort. Here, we suggest that the opposite is also true and review extensive evidence that effort can also add value. Not only can the same outcomes be more rewarding if we apply more (not less) effort, sometimes we select options precisely because they require effort. Given the increasing recognition of effort's role in motivation, cognitive control, and value-based decision-making, considering this neglected side of effort will not only improve formal computational models, but also provide clues about how to promote sustained mental effort across time. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. 76 FR 28443 - President's National Security Telecommunications Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-17

    ... Government's use of cloud computing; the Federal Emergency Management Agency's NS/EP communications... Commercial Satellite Mission Assurance; and the way forward for the committee's cloud computing effort. The...

  14. Home

    Science.gov Websites

    System Award for developing a tool that has had a lasting influence on computing. Project Jupyter evolved from IPython, an effort pioneered by Fernando Pérez.

  15. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2012-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.
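
    The following is a minimal, illustrative sketch (not the authors' application) of the kind of data-parallel, per-pixel computation described above: a peak-contrast map computed independently for every pixel of a thermographic frame stack. The function name, array shapes, and synthetic data are assumptions made for this example; the same structure maps one thread per pixel on a GPU.

    ```python
    # Illustrative sketch of a data-parallel thermographic reduction; the
    # function name and synthetic data are hypothetical.
    import numpy as np

    def peak_contrast_map(frames: np.ndarray) -> np.ndarray:
        """frames: (n_frames, height, width) surface-temperature stack.

        Each output pixel depends only on its own time history, so the work
        is embarrassingly parallel: one GPU thread per pixel, or, as here,
        NumPy's vectorized CPU kernels."""
        baseline = frames[0]              # pre-flash reference frame
        contrast = frames - baseline      # per-pixel temperature rise
        return contrast.max(axis=0)       # peak contrast, pixel by pixel

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        stack = rng.normal(20.0, 0.1, size=(200, 256, 320))  # synthetic data
        stack[50:] += 1.5                                     # simulated heating
        print(peak_contrast_map(stack).shape)                 # (256, 320)
    ```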

  16. NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Lee-Rausch, E. M.

    2012-01-01

    Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.

  17. Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.

    PubMed

    Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene

    2016-11-01

    Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns; however, heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation achieves a significant speedup over its sequential counterparts. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases. Copyright © 2016 Elsevier Ltd. All rights reserved.
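
    As a rough illustration of the approach described above (not the authors' adaptive decomposition), the sketch below evaluates a space-time kernel density with Gaussian kernels on a regular grid and decomposes the grid's x-axis across worker processes; the bandwidths, grid, and synthetic event data are assumed values.

    ```python
    # Sketch of a domain-decomposed space-time kernel density estimate;
    # kernels, bandwidths, and data are illustrative assumptions.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor
    from functools import partial

    def stkde_block(x_block, ys, ts, events, hs, ht):
        """Density on a block of grid x-values; events is an (n, 3) array of (x, y, t)."""
        ex, ey, et = events[:, 0], events[:, 1], events[:, 2]
        out = np.empty((len(x_block), len(ys), len(ts)))
        for i, gx in enumerate(x_block):
            dx2 = (gx - ex) ** 2
            for j, gy in enumerate(ys):
                ks = np.exp(-0.5 * (dx2 + (gy - ey) ** 2) / hs ** 2)   # spatial kernel
                for k, gt in enumerate(ts):
                    kt = np.exp(-0.5 * ((gt - et) / ht) ** 2)          # temporal kernel
                    out[i, j, k] = np.sum(ks * kt)
        return out / (len(events) * 2.0 * np.pi * hs ** 2 * np.sqrt(2.0 * np.pi) * ht)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        events = rng.uniform(0.0, 10.0, size=(500, 3))     # synthetic (x, y, t) cases
        xs = ys = ts = np.linspace(0.0, 10.0, 24)
        work = partial(stkde_block, ys=ys, ts=ts, events=events, hs=1.0, ht=1.0)
        with ProcessPoolExecutor(max_workers=4) as pool:
            blocks = pool.map(work, np.array_split(xs, 4))  # spatial domain decomposition
            density = np.concatenate(list(blocks), axis=0)
        print(density.shape)                                # (24, 24, 24)
    ```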

  18. Computer-Aided Nodule Assessment and Risk Yield Risk Management of Adenocarcinoma: The Future of Imaging?

    PubMed

    Foley, Finbar; Rajagopalan, Srinivasan; Raghunath, Sushravya M; Boland, Jennifer M; Karwoski, Ronald A; Maldonado, Fabien; Bartholmai, Brian J; Peikert, Tobias

    2016-01-01

    Increased clinical use of chest high-resolution computed tomography results in increased identification of lung adenocarcinomas and persistent subsolid opacities. However, these lesions range from very indolent to extremely aggressive tumors. Clinically relevant diagnostic tools to noninvasively risk stratify and guide individualized management of these lesions are lacking. Research efforts investigating semiquantitative measures to decrease interrater and intrarater variability are emerging, and in some cases steps have been taken to automate this process. However, many such methods currently are still suboptimal, require validation and are not yet clinically applicable. The computer-aided nodule assessment and risk yield software application represents a validated, automated, quantitative, and noninvasive tool for risk stratification of adenocarcinoma lung nodules. Computer-aided nodule assessment and risk yield correlates well with consensus histology and postsurgical patient outcomes, and therefore may help to guide individualized patient management, for example, in identification of nodules amenable to radiological surveillance, or in need of adjunctive therapy. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Automated Generation of Message-Passing Programs: An Evaluation Using CAPTools

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Jin, Haoqiang; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Scientists at NASA Ames Research Center have been developing computational aeroscience applications on highly parallel architectures over the past ten years. During that same time period, a steady transition of hardware and system software also occurred, forcing us to expend great effort migrating and re-coding our applications. As applications and machine architectures become increasingly complex, the cost and time required for this process will become prohibitive. In this paper, we present the first set of results in our evaluation of interactive parallelization tools. In particular, we evaluate CAPTools' ability to parallelize computational aeroscience applications. CAPTools was tested on serial versions of the NAS Parallel Benchmarks and ARC3D, a computational fluid dynamics application, on two platforms: the SGI Origin 2000 and the Cray T3E. This evaluation includes performance, amount of user interaction required, limitations, and portability. Based on these results, a discussion on the feasibility of computer-aided parallelization of aerospace applications is presented along with suggestions for future work.

  20. Parallel Navier-Stokes computations on shared and distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Jayasimha, D. N.; Pillay, Sasi Kumar

    1995-01-01

    We study a high order finite difference scheme to solve the time accurate flow field of a jet using the compressible Navier-Stokes equations. As part of our ongoing efforts, we have implemented our numerical model on three parallel computing platforms to study the computational, communication, and scalability characteristics. The platforms chosen for this study are a cluster of workstations connected through fast networks (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), and a distributed memory multiprocessor (the IBM SP1). Our focus in this study is on the LACE testbed. We present some results for the Cray YMP and the IBM SP1 mainly for comparison purposes. On the LACE testbed, we study: (1) the communication characteristics of Ethernet, FDDI, and the ALLNODE networks and (2) the overheads induced by the PVM message passing library used for parallelizing the application. We demonstrate that clustering of workstations is effective and has the potential to be computationally competitive with supercomputers at a fraction of the cost.

  1. Predicting Pilot Error in Nextgen: Pilot Performance Modeling and Validation Efforts

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky

    2012-01-01

    We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and the relevance of the modeling and validation efforts to Next Gen technology and procedures.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyd, J.; Herner, K.; Jayatilaka, B.

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.

  3. High pressure jet flame numerical analysis of CO emissions by means of the flamelet generated manifolds technique

    NASA Astrophysics Data System (ADS)

    Donini, A.; Martin, S. M.; Bastiaans, R. J. M.; van Oijen, J. A.; de Goey, L. P. H.

    2013-10-01

    In the present paper a computational analysis of high-pressure confined premixed turbulent methane/air jet flames is presented. To this end, chemistry is reduced by the use of the Flamelet Generated Manifold method [1] and the fluid flow is modeled in an LES and RANS context. The reaction evolution is described by the reaction progress variable, the heat loss is described by the enthalpy, and the turbulence effect on the reaction is represented by the progress variable variance. The interaction between chemistry and turbulence is considered through a presumed probability density function (PDF) approach. The use of FGM as a combustion model shows that combustion features at gas turbine conditions can be satisfactorily reproduced with a reasonable computational effort. Furthermore, the present analysis indicates that the physical and chemical processes controlling carbon monoxide (CO) emissions can be captured only by means of unsteady simulations.
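
    To make the presumed-PDF closure concrete, the toy sketch below (not the authors' FGM implementation) averages a tabulated flamelet quantity over an assumed beta distribution of the reaction progress variable, parameterized by its mean and variance; the table and moment values are invented for illustration.

    ```python
    # Toy presumed beta-PDF averaging of a flamelet-table quantity; the
    # tabulated function and moments are illustrative assumptions.
    import numpy as np
    from scipy.stats import beta

    def presumed_pdf_mean(c_grid, omega_table, c_mean, c_var):
        """Mean of omega(c) under a beta PDF with the given mean and variance of c."""
        # Shape parameters from the first two moments; requires c_var < c_mean*(1 - c_mean).
        factor = c_mean * (1.0 - c_mean) / c_var - 1.0
        a, b = c_mean * factor, (1.0 - c_mean) * factor
        weights = beta.pdf(c_grid, a, b)
        weights /= weights.sum()          # discrete quadrature on the uniform grid
        return np.sum(weights * omega_table)

    if __name__ == "__main__":
        c = np.linspace(1e-4, 1.0 - 1e-4, 401)    # progress-variable grid
        omega = c ** 2 * (1.0 - c)                # stand-in for a tabulated source term
        for var in (0.01, 0.05, 0.15):            # increasing turbulent fluctuation
            print(var, presumed_pdf_mean(c, omega, c_mean=0.6, c_var=var))
    ```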

  4. Fully-Coupled Thermo-Electrical Modeling and Simulation of Transition Metal Oxide Memristors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamaluy, Denis; Gao, Xujiao; Tierney, Brian David

    2016-11-01

    Transition metal oxide (TMO) memristors have recently attracted special attention from the semiconductor industry and academia. Memristors are one of the strongest candidates to replace flash memory, and possibly DRAM and SRAM in the near future. Moreover, memristors have a high potential to enable beyond-CMOS technology advances in novel architectures for high performance computing (HPC). The utility of memristors has been demonstrated in reprogrammable logic (cross-bar switches), brain-inspired computing and in non-CMOS complementary logic. Indeed, the potential use of memristors as logic devices is especially important considering the inevitable end of CMOS technology scaling that is anticipated by 2025. In order to aid the on-going Sandia memristor fabrication effort with a memristor design tool and establish a clear physical picture of resistance switching in TMO memristors, we have created and validated with experimental data a simulation tool we name the Memristor Charge Transport (MCT) Simulator.

  5. Integrity management of offshore structures and its implication on computation of structural action effects and resistance

    NASA Astrophysics Data System (ADS)

    Moan, T.

    2017-12-01

    An overview of integrity management of offshore structures, with emphasis on the oil and gas energy sector, is given. Based on relevant accident experiences and means to control the associated risks, accidents are categorized from a technical-physical as well as human and organizational point of view. Structural risk relates to extreme actions as well as structural degradation. Risk mitigation measures, including adequate design criteria, inspection, repair and maintenance as well as quality assurance and control of engineering processes, are briefly outlined. The current status of risk and reliability methodology to aid decisions in the integrity management is briefly reviewed. Finally, the need to balance the uncertainties in data, methods, and computational effort is emphasized, together with the need for cautious use, quality assurance, and control when applying high-fidelity methods to avoid human errors, and a plea is made to develop both high-fidelity and efficient, simplified methods for design.

  6. Data preservation at the Fermilab Tevatron

    DOE PAGES

    Boyd, J.; Herner, K.; Jayatilaka, B.; ...

    2015-12-23

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.

  7. Progress in high-lift aerodynamic calculations

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1993-01-01

    The current work presents progress in the effort to numerically simulate the flow over high-lift aerodynamic components, namely, multi-element airfoils and wings in either a take-off or a landing configuration. The computational approach utilizes an incompressible flow solver and an overlaid chimera grid approach. A detailed grid resolution study is presented for flow over a three-element airfoil. Two turbulence models, a one-equation Baldwin-Barth model and a two-equation k-omega model, are compared. Excellent agreement with experiment is obtained for the lift coefficient at all angles of attack, including the prediction of maximum lift when using the two-equation model. Results for two other flap riggings are shown. Three-dimensional results are presented for a wing with a square wing-tip as a validation case. Grid generation and topology are discussed for computing the flow over a T-39 Sabreliner wing with flap deployed, and the initial calculations for this geometry are presented.

  8. A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.

    PubMed

    Catarinucci, Luca; Tarricone, Luciano

    2009-01-01

    The finite difference time domain method (FDTD) is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution is adopted, so that strong memory and central processing unit power requirements have to be satisfied. To better handle the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.
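
    As a minimal illustration of the spatial ingredient of such a graded-mesh scheme (not the authors' algorithm), the sketch below builds a one-dimensional axis with fine, uniform cells inside a region of interest and geometrically stretched cells toward the domain boundaries; all dimensions and the stretching ratio are assumed values.

    ```python
    # Sketch of a 1-D graded (geometrically stretched) mesh axis; the region
    # of interest, cell sizes, and stretching ratio are hypothetical.
    import numpy as np

    def graded_axis(fine_lo, fine_hi, d_fine, domain_lo, domain_hi, ratio=1.2):
        """Cell edges: uniform spacing d_fine in [fine_lo, fine_hi], cells growing
        by 'ratio' toward the outer boundaries."""
        edges = list(np.arange(fine_lo, fine_hi + d_fine / 2, d_fine))
        d = d_fine
        while edges[-1] < domain_hi:                 # stretch to the right
            d *= ratio
            edges.append(min(edges[-1] + d, domain_hi))
        d = d_fine
        while edges[0] > domain_lo:                  # stretch to the left
            d *= ratio
            edges.insert(0, max(edges[0] - d, domain_lo))
        return np.array(edges)

    if __name__ == "__main__":
        z = graded_axis(0.4, 0.6, 0.002, 0.0, 2.0)   # metres, illustrative values
        print(len(z), np.diff(z).min(), np.diff(z).max())
    ```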

  9. Data preservation at the Fermilab Tevatron

    NASA Astrophysics Data System (ADS)

    Boyd, J.; Herner, K.; Jayatilaka, B.; Roser, R.; Sakumoto, W.

    2015-12-01

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. These efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.

  10. Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, J.H.; Michelotti, M.D.; Riemer, N.

    2016-10-01

    Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition in atmospherically relevant conditions, we demonstrate about a 50-fold increase in algorithm efficiency.
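
    The sketch below illustrates the general idea of binned stochastic removal (it is not the published Binned Algorithm): particles are grouped into diameter bins, each bin is assigned one representative removal rate, and the number removed per bin per time step is drawn from a binomial distribution instead of testing every particle. The rate function, bin count, and particle population are assumptions made for the example.

    ```python
    # Illustrative binned stochastic particle removal; the removal-rate
    # parameterization and all numbers are hypothetical.
    import numpy as np

    def removal_rate(diam_m):
        """Stand-in size-dependent removal rate [1/s]; a real model would use a
        dry-deposition parameterization."""
        return 1e-5 * (diam_m / 1e-7) ** 0.5

    def binned_removal_step(diams, dt, rng, n_bins=30):
        """Return the diameters of particles surviving one time step."""
        edges = np.logspace(np.log10(diams.min()), np.log10(diams.max()), n_bins + 1)
        which = np.digitize(diams, edges[1:-1])       # bin index for every particle
        keep = np.ones(diams.size, dtype=bool)
        for b in range(n_bins):
            idx = np.flatnonzero(which == b)
            if idx.size == 0:
                continue
            p_remove = 1.0 - np.exp(-removal_rate(edges[b]) * dt)
            n_remove = rng.binomial(idx.size, p_remove)   # one draw per bin
            keep[rng.choice(idx, size=n_remove, replace=False)] = False
        return diams[keep]

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        d = rng.lognormal(np.log(1e-7), 0.7, size=100_000)   # particle diameters [m]
        print(d.size, "->", binned_removal_step(d, dt=600.0, rng=rng).size)
    ```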

  11. Computationally efficient target classification in multispectral image data with Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Cavigelli, Lukas; Bernath, Dominic; Magno, Michele; Benini, Luca

    2016-10-01

    Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks and are also performing exceptionally well on other computer vision tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly with errors occurring only around the border of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.

  12. Validation of High-Fidelity CFD Simulations for Rocket Injector Design

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Menon, Suresh; Merkle, Charles L.; Oefelein, Joseph C.; Yang, Vigor

    2008-01-01

    Computational fluid dynamics (CFD) has the potential to improve the historical rocket injector design process by evaluating the sensitivity of performance and injector-driven thermal environments to the details of the injector geometry and key operational parameters. Methodical verification and validation efforts on a range of coaxial injector elements have shown the current production CFD capability must be improved in order to quantitatively impact the injector design process. This paper documents the status of a focused effort to compare and understand the predictive capabilities and computational requirements of a range of CFD methodologies on a set of single element injector model problems. The steady Reynolds-Average Navier-Stokes (RANS), unsteady Reynolds-Average Navier-Stokes (URANS) and three different approaches using the Large Eddy Simulation (LES) technique were used to simulate the initial model problem, a single element coaxial injector using gaseous oxygen and gaseous hydrogen propellants. While one high-fidelity LES result matches the experimental combustion chamber wall heat flux very well, there is no monotonic convergence to the data with increasing computational tool fidelity. Systematic evaluation of key flow field regions such as the flame zone, the head end recirculation zone and the downstream near wall zone has shed significant, though as of yet incomplete, light on the complex, underlying causes for the performance level of each technique.

  13. A neuronal model of a global workspace in effortful cognitive tasks.

    PubMed

    Dehaene, S; Kerszberg, M; Changeux, J P

    1998-11-24

    A minimal hypothesis is proposed concerning the brain processes underlying effortful tasks. It distinguishes two main computational spaces: a unique global workspace composed of distributed and heavily interconnected neurons with long-range axons, and a set of specialized and modular perceptual, motor, memory, evaluative, and attentional processors. Workspace neurons are mobilized in effortful tasks for which the specialized processors do not suffice. They selectively mobilize or suppress, through descending connections, the contribution of specific processor neurons. In the course of task performance, workspace neurons become spontaneously coactivated, forming discrete though variable spatio-temporal patterns subject to modulation by vigilance signals and to selection by reward signals. A computer simulation of the Stroop task shows workspace activation to increase during acquisition of a novel task, effortful execution, and after errors. We outline predictions for spatio-temporal activation patterns during brain imaging, particularly about the contribution of dorsolateral prefrontal cortex and anterior cingulate to the workspace.

  14. A specific role for serotonin in overcoming effort cost.

    PubMed

    Meyniel, Florent; Goodwin, Guy M; Deakin, Jf William; Klinge, Corinna; MacFadyen, Christine; Milligan, Holly; Mullings, Emma; Pessiglione, Mathias; Gaillard, Raphaël

    2016-11-08

    Serotonin is implicated in many aspects of behavioral regulation. Theoretical attempts to unify the multiple roles assigned to serotonin proposed that it regulates the impact of costs, such as delay or punishment, on action selection. Here, we show that serotonin also regulates other types of action costs such as effort. We compared behavioral performance in 58 healthy humans treated during 8 weeks with either placebo or the selective serotonin reuptake inhibitor escitalopram. The task involved trading handgrip force production against monetary benefits. Participants in the escitalopram group produced more effort and thereby achieved a higher payoff. Crucially, our computational analysis showed that this effect was underpinned by a specific reduction of effort cost, and not by any change in the weight of monetary incentives. This specific computational effect sheds new light on the physiological role of serotonin in behavioral regulation and on the clinical effect of drugs for depression. ISRCTN75872983.
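
    A toy formalization of the kind of cost-benefit computation described above (not the study's fitted model) is sketched below: the agent chooses the grip force that maximizes incentive times force minus an effort cost k times force squared, and lowering k alone, leaving the incentive weight untouched, raises the chosen force. The cost weights and incentives are invented for illustration.

    ```python
    # Toy effort-discounting model; parameter values are hypothetical.
    import numpy as np

    def chosen_force(incentive, k, forces=np.linspace(0.0, 1.0, 1001)):
        """Force (fraction of maximum grip) maximizing incentive*f - k*f**2."""
        utility = incentive * forces - k * forces ** 2
        return forces[np.argmax(utility)]

    if __name__ == "__main__":
        for incentive in (0.2, 0.5, 1.0):                  # arbitrary monetary units
            f_high_cost = chosen_force(incentive, k=1.0)   # baseline effort-cost weight
            f_low_cost = chosen_force(incentive, k=0.6)    # reduced effort cost
            print(incentive, f_high_cost, f_low_cost)      # lower k -> more force
    ```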

  15. DARPA-funded efforts in the development of novel brain-computer interface technologies.

    PubMed

    Miranda, Robbin A; Casebeer, William D; Hein, Amy M; Judy, Jack W; Krotkov, Eric P; Laabs, Tracy L; Manzo, Justin E; Pankratz, Kent G; Pratt, Gill A; Sanchez, Justin C; Weber, Douglas J; Wheeler, Tracey L; Ling, Geoffrey S F

    2015-04-15

    The Defense Advanced Research Projects Agency (DARPA) has funded innovative scientific research and technology developments in the field of brain-computer interfaces (BCI) since the 1970s. This review highlights some of DARPA's major advances in the field of BCI, particularly those made in recent years. Two broad categories of DARPA programs are presented with respect to the ultimate goals of supporting the nation's warfighters: (1) BCI efforts aimed at restoring neural and/or behavioral function, and (2) BCI efforts aimed at improving human training and performance. The programs discussed are synergistic and complementary to one another, and, moreover, promote interdisciplinary collaborations among researchers, engineers, and clinicians. Finally, this review includes a summary of some of the remaining challenges for the field of BCI, as well as the goals of new DARPA efforts in this domain. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  16. GPU based 3D feature profile simulation of high-aspect ratio contact hole etch process under fluorocarbon plasmas

    NASA Astrophysics Data System (ADS)

    Chun, Poo-Reum; Lee, Se-Ah; Yook, Yeong-Geun; Choi, Kwang-Sung; Cho, Deog-Geun; Yu, Dong-Hun; Chang, Won-Seok; Kwon, Deuk-Chul; Im, Yeon-Ho

    2013-09-01

    Although plasma etch profile simulation has attracted much interest for developing reliable plasma etching, large gaps remain between the current state of research and predictive modeling due to the inherent complexity of plasma processes. In an effort to address this issue, we present a 3D feature profile simulation coupled with a well-defined plasma-surface kinetic model for the silicon dioxide etching process under fluorocarbon plasmas. To capture realistic plasma-surface reaction behavior, a polymer-layer-based surface kinetic model was proposed that accounts for simultaneous polymer deposition and oxide etching. The resulting surface model was then used to calculate the speed function for the 3D topology simulation, which consists of a multiple-level-set moving algorithm and a ballistic transport module. In addition, the time-consuming computations in the ballistic transport calculation were accelerated drastically by GPU-based numerical computation, enabling near-real-time computation. Finally, we demonstrate that the surface kinetic model can be coupled successfully to 3D etch profile simulations of high-aspect-ratio contact hole plasma etching.
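
    The moving-front ingredient of such a profile simulator can be illustrated with a bare-bones level set update (this is only a sketch, not the authors' multi-level-set GPU code): the front satisfies phi_t + F |grad phi| = 0 and is advanced with first-order Godunov upwind differencing for a non-negative speed F. The grid, speed, and time step are assumed values.

    ```python
    # Minimal 2-D level-set front advance with Godunov upwind differencing,
    # assuming a non-negative speed field F; all values are illustrative.
    import numpy as np

    def advance(phi, F, dx, dt):
        """One explicit step of phi_t + F |grad phi| = 0 (F >= 0 assumed)."""
        dmx = (phi - np.roll(phi, 1, axis=0)) / dx    # backward differences
        dpx = (np.roll(phi, -1, axis=0) - phi) / dx   # forward differences
        dmy = (phi - np.roll(phi, 1, axis=1)) / dx
        dpy = (np.roll(phi, -1, axis=1) - phi) / dx
        grad = np.sqrt(np.maximum(dmx, 0.0) ** 2 + np.minimum(dpx, 0.0) ** 2 +
                       np.maximum(dmy, 0.0) ** 2 + np.minimum(dpy, 0.0) ** 2)
        return phi - dt * F * grad

    if __name__ == "__main__":
        n, dx = 200, 1.0
        y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        phi = np.sqrt((x - 100.0) ** 2 + (y - 100.0) ** 2) - 30.0   # circular front
        F = np.full((n, n), 0.5)                                    # uniform etch speed
        for _ in range(40):                                         # obeys F*dt/dx <= 1
            phi = advance(phi, F, dx, dt=1.0)
        print(int((phi < 0).sum()))     # etched region (phi < 0) has expanded
    ```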

  17. Inlets, ducts, and nozzles

    NASA Technical Reports Server (NTRS)

    Abbott, John M.; Anderson, Bernhard H.; Rice, Edward J.

    1990-01-01

    The internal fluid mechanics research program in inlets, ducts, and nozzles consists of a balanced effort between the development of computational tools (both parabolized Navier-Stokes and full Navier-Stokes) and the conduct of experimental research. The experiments are designed to better understand the fluid flow physics, to develop new or improved flow models, and to provide benchmark quality data sets for validation of the computational methods. The inlet, duct, and nozzle research program is described according to three major classifications of flow phenomena: (1) highly 3-D flow fields; (2) shock-boundary-layer interactions; and (3) shear layer control. Specific examples of current and future elements of the research program are described for each of these phenomena. In particular, the highly 3-D flow field phenomenon is highlighted by describing the computational and experimental research program in transition ducts having a round-to-rectangular area variation. In the case of shock-boundary-layer interactions, the specific details of research for normal shock-boundary-layer interactions are described. For shear layer control, research in vortex generators and the use of aerodynamic excitation for enhancement of the jet mixing process are described.

  18. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; et al.

    An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
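
    A minimal example of what such an automated consistency check can look like (the actual GeantV test suite is more elaborate) is a two-sample Kolmogorov-Smirnov comparison between quantities sampled from the vectorized model and from a reference implementation; the distributions and sample sizes below are invented stand-ins.

    ```python
    # Illustrative statistical consistency check between two sampled models;
    # the distributions stand in for sampled physics quantities.
    import numpy as np
    from scipy.stats import ks_2samp

    def consistent(sample_a, sample_b, alpha=0.01):
        """Accept the samples as consistent unless the KS test rejects at level alpha."""
        statistic, p_value = ks_2samp(sample_a, sample_b)
        return p_value > alpha, statistic, p_value

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        reference = rng.exponential(1.0, size=50_000)    # reference model sample
        candidate = rng.exponential(1.0, size=50_000)    # vectorized model, unbiased
        biased = rng.exponential(1.05, size=50_000)      # deliberately shifted model
        print(consistent(reference, candidate))          # expected: consistent
        print(consistent(reference, biased))             # expected: rejected
    ```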

  19. 2013 R&D 100 Award: ‘Miniapps’ Bolster High Performance Computing

    ScienceCinema

    Belak, Jim; Richards, David

    2018-06-12

    Two Livermore computer scientists served on a Sandia National Laboratories-led team that developed Mantevo Suite 1.0, the first integrated suite of small software programs, also called "miniapps," to be made available to the high performance computing (HPC) community. These miniapps facilitate the development of new HPC systems and the applications that run on them. Miniapps (miniature applications) serve as stripped down surrogates for complex, full-scale applications that can require a great deal of time and effort to port to a new HPC system because they often consist of hundreds of thousands of lines of code. The miniapps are a prototype that contains some or all of the essentials of the real application but with many fewer lines of code, making the miniapp more versatile for experimentation. This allows researchers to more rapidly explore options and optimize system design, greatly improving the chances the full-scale application will perform successfully. These miniapps have become essential tools for exploring complex design spaces because they can reliably predict the performance of full applications.

  20. Design considerations for computationally constrained two-way real-time video communication

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar M.; Saunders, Steven E.; Ralston, John D.

    2009-08-01

    Today's video codecs have evolved primarily to meet the requirements of the motion picture and broadcast industries, where high-complexity studio encoding can be utilized to create highly-compressed master copies that are then broadcast one-way for playback using less-expensive, lower-complexity consumer devices for decoding and playback. Related standards activities have largely ignored the computational complexity and bandwidth constraints of wireless or Internet based real-time video communications using devices such as cell phones or webcams. Telecommunications industry efforts to develop and standardize video codecs for applications such as video telephony and video conferencing have not yielded image size, quality, and frame-rate performance that match today's consumer expectations and market requirements for Internet and mobile video services. This paper reviews the constraints and the corresponding video codec requirements imposed by real-time, 2-way mobile video applications. Several promising elements of a new mobile video codec architecture are identified, and more comprehensive computational complexity metrics and video quality metrics are proposed in order to support the design, testing, and standardization of these new mobile video codecs.

  1. Simulations of Bluff Body Flow Interaction for Noise Source Modeling

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Lockard, David P.; Choudhari, Meelan M.; Jenkins, Luther N.; Neuhart, Dan H.; McGinley, Catherine B.

    2006-01-01

    The current study is a continuation of our effort to characterize the details of flow interaction between two cylinders in a tandem configuration. This configuration is viewed to possess many of the pertinent flow features of the highly interactive unsteady flow field associated with the main landing gear of large civil transports. The present effort extends our previous two-dimensional, unsteady, Reynolds Averaged Navier-Stokes computations to three dimensions using a quasilaminar, zonal approach, in conjunction with a two-equation turbulence model. Two distinct separation length-to-diameter ratios of L/D = 3.7 and 1.435, representing intermediate and short separation distances between the two cylinders, are simulated. The Mach 0.166 simulations are performed at a Reynolds number of Re = 1.66 × 10^5 to match the companion experiments at NASA Langley Research Center. Extensive comparisons with the measured steady and unsteady surface pressure and off-surface particle image velocimetry data show encouraging agreement. Both prominent and some of the more subtle trends in the mean and fluctuating flow fields are correctly predicted. Both computations and the measured data reveal a more robust and energetic shedding process at L/D = 3.7 in comparison with the weaker shedding in the shorter separation case of L/D = 1.435. The vortex shedding frequency based on the computed surface pressure spectra is in reasonable agreement with the measured Strouhal frequency.

  2. Quadratic String Method for Locating Instantons in Tunneling Splitting Calculations.

    PubMed

    Cvitaš, Marko T

    2018-03-13

    The ring-polymer instanton (RPI) method is an efficient technique for calculating approximate tunneling splittings in high-dimensional molecular systems. In the RPI method, tunneling splitting is evaluated from the properties of the minimum action path (MAP) connecting the symmetric wells, whereby the extensive sampling of the full potential energy surface of the exact quantum-dynamics methods is avoided. Nevertheless, the search for the MAP is usually the most time-consuming step in the standard numerical procedures. Recently, nudged elastic band (NEB) and string methods, originally developed for locating minimum energy paths (MEPs), were adapted for the purpose of MAP finding with great efficiency gains [J. Chem. Theory Comput. 2016, 12, 787]. In this work, we develop a new quadratic string method for locating instantons. The Euclidean action is minimized by propagating the initial guess (a path connecting two wells) over the quadratic potential energy surface approximated by means of updated Hessians. This allows the algorithm to take many minimization steps between the potential/gradient calls with further reductions in the computational effort, exploiting the smoothness of the potential energy surface. The approach is general, as it uses Cartesian coordinates, and widely applicable, with the computational effort of finding the instanton usually lower than that of determining the MEP. It can be combined with expensive potential energy surfaces or on-the-fly electronic-structure methods to explore a wide variety of molecular systems.
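
    For orientation, the sketch below implements the simplified string method for a minimum-energy path on a toy two-dimensional double-well potential, i.e., the MEP-finding ancestor mentioned above; it is not the paper's quadratic, Hessian-updated action minimization, and the potential and step sizes are invented.

    ```python
    # Simplified string method on a toy 2-D double-well potential
    # V(x, y) = (x**2 - 1)**2 + 2*y**2 with minima at (-1, 0) and (1, 0).
    import numpy as np

    def potential_grad(path):
        x, y = path[:, 0], path[:, 1]
        return np.column_stack((4.0 * x * (x ** 2 - 1.0), 4.0 * y))

    def reparametrize(path):
        """Redistribute the images equidistantly along the current arc length."""
        seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
        s = np.concatenate(([0.0], np.cumsum(seg)))
        s_new = np.linspace(0.0, s[-1], len(path))
        return np.column_stack([np.interp(s_new, s, path[:, k]) for k in range(2)])

    def string_method(n_images=25, steps=2000, dt=5e-3):
        path = np.column_stack((np.linspace(-1.0, 1.0, n_images),
                                np.full(n_images, 0.5)))   # initial guess over the barrier
        for _ in range(steps):
            path = path - dt * potential_grad(path)        # descent step on every image
            path[0], path[-1] = (-1.0, 0.0), (1.0, 0.0)    # pin the end points at the minima
            path = reparametrize(path)                     # keep images evenly spaced
        return path

    if __name__ == "__main__":
        mep = string_method()
        print(mep[len(mep) // 2])   # middle image approaches the saddle point (0, 0)
    ```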

  3. Reduction of community alcohol problems: computer simulation experiments in three counties.

    PubMed

    Holder, H D; Blose, J O

    1987-03-01

    A series of alcohol abuse prevention strategies was evaluated using computer simulation for three counties in the United States: Wake County, North Carolina, Washington County, Vermont and Alameda County, California. A system dynamics model composed of a network of interacting variables was developed for the pattern of alcoholic beverage consumption in a community. The relationship of community drinking patterns to various stimulus factors was specified in the model based on available empirical research. Stimulus factors included disposable income, alcoholic beverage prices, advertising exposure, minimum drinking age and changes in cultural norms. After a generic model was developed and validated on the national level, a computer-based system dynamics model was developed for each county, and a series of experiments was conducted to project the potential impact of specific prevention strategies. The project concluded that prevention efforts can both lower current levels of alcohol abuse and reduce projected increases in alcohol-related problems. Without such efforts, already high levels of alcohol-related family disruptions in the three counties could be expected to rise an additional 6% and drinking-related work problems 1-5%, over the next 10 years after controlling for population growth. Of the strategies tested, indexing the price of alcoholic beverages to the consumer price index in conjunction with the implementation of a community educational program with well-defined target audiences has the best potential for significant problem reduction in all three counties.
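
    The simulation mechanics of such an experiment can be illustrated with a deliberately tiny stock-and-flow sketch (the parameter values are invented and this is not the counties' calibrated model): per-capita consumption adjusts gradually toward a demand target that falls as the real price of alcohol rises, so indexing the nominal price to inflation holds consumption down relative to the unindexed case.

    ```python
    # Toy system-dynamics experiment: price indexing vs. no indexing; every
    # parameter value here is an illustrative assumption.
    def simulate(years=10, indexed=False, elasticity=-0.5, inflation=0.04):
        consumption = 10.0            # stock: litres of ethanol per adult per year
        nominal_price, cpi = 1.0, 1.0
        for _ in range(years):
            cpi *= 1.0 + inflation
            if indexed:
                nominal_price *= 1.0 + inflation     # price keeps pace with the CPI
            real_price = nominal_price / cpi
            target = 10.0 * real_price ** elasticity        # demand response
            consumption += 0.5 * (target - consumption)     # gradual adjustment (flow)
        return consumption

    if __name__ == "__main__":
        print("unindexed price:", round(simulate(indexed=False), 2))
        print("indexed price:  ", round(simulate(indexed=True), 2))
    ```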

  4. Utilization of recently developed codes for high power Brayton and Rankine cycle power systems

    NASA Technical Reports Server (NTRS)

    Doherty, Michael P.

    1993-01-01

    Two recently developed FORTRAN computer codes for high power Brayton and Rankine thermodynamic cycle analysis for space power applications are presented. The codes were written in support of an effort to develop a series of subsystem models for multimegawatt Nuclear Electric Propulsion, but their use is not limited just to nuclear heat sources or to electric propulsion. Code development background, a description of the codes, some sample input/output from one of the codes, and state future plans/implications for the use of these codes by NASA's Lewis Research Center are provided.

  5. Exploiting volatile opportunistic computing resources with Lobster

    NASA Astrophysics Data System (ADS)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.

  6. Computer Technology and Social Issues.

    ERIC Educational Resources Information Center

    Garson, G. David

    Computing involves social issues and political choices. Issues such as privacy, computer crime, gender inequity, disemployment, and electronic democracy versus "Big Brother" are addressed in the context of efforts to develop a national public policy for information technology. A broad range of research and case studies are examined in an…

  7. The Design and Implementation of an Operating System for the IBM Personal Computer.

    DTIC Science & Technology

    1984-12-01

    comprehensive study of an actual operating system in an effort to show students how theory has been put into action (Lions, 1978; McCharen, 1980). Another...Freedman, 1977). However, since it is easier to develop and maintain a program written in a high-order language (HOL), Pascal was chosen to be the primary...monolithic monitor approach and the kernel approach are strategies which can be used to structure operating systems (Deitel, 1983; Holt, 1983

  8. Monte Carlos of the new generation: status and progress

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frixione, Stefano

    2005-03-22

    Standard parton shower Monte Carlos are designed to give reliable descriptions of low-pT physics. In the very high-energy regime of modern colliders, this may lead to largely incorrect predictions of the basic reaction processes. This motivated the recent theoretical efforts aimed at improving Monte Carlos through the inclusion of matrix elements computed beyond the leading order in QCD. I briefly review the progress made, and discuss bottom production at the Tevatron.

  9. Viscous-flow analysis of a subsonic transport aircraft high-lift system and correlation with flight data

    NASA Technical Reports Server (NTRS)

    Potter, R. C.; Vandam, C. P.

    1995-01-01

    High-lift system aerodynamics has been gaining attention in recent years. In an effort to improve aircraft performance, comprehensive studies of multi-element airfoil systems are being undertaken in wind-tunnel and flight experiments. Recent developments in Computational Fluid Dynamics (CFD) offer a relatively inexpensive alternative for studying complex viscous flows by numerically solving the Navier-Stokes (N-S) equations. Current limitations in computer resources restrict practical high-lift N-S computations to two dimensions, but CFD predictions can yield tremendous insight into flow structure, interactions between airfoil elements, and effects of changes in airfoil geometry or free-stream conditions. These codes are very accurate when compared to strictly 2D data provided by wind-tunnel testing, as will be shown here. Yet, additional challenges must be faced in the analysis of a production aircraft wing section, such as that of the NASA Langley Transport Systems Research Vehicle (TSRV). A primary issue is the sweep theory used to correlate 2D predictions with 3D flight results, accounting for sweep, taper, and finite wing effects. Other computational issues addressed here include the effects of surface roughness of the geometry, cove shape modeling, grid topology, and transition specification. The sensitivity of the flow to changing free-stream conditions is investigated. In addition, the effects of Gurney flaps on the aerodynamic characteristics of the airfoil system are predicted.
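
    The sweep correlation mentioned above is often based on textbook "simple sweep" relations, sketched below in a generic form (this is not necessarily the exact correction procedure used in the study, and the numbers are illustrative): the section is assumed to respond only to the velocity component normal to the leading edge, so coefficients referenced to the freestream scale with the squared cosine of the sweep angle.

    ```python
    # Generic simple-sweep scaling of 2-D section coefficients; the sweep
    # angle and section values are illustrative assumptions.
    import math

    def simple_sweep(cl_2d, cp_2d, sweep_deg):
        """Scale normal-section coefficients to the swept, freestream-referenced frame."""
        cos_lam = math.cos(math.radians(sweep_deg))
        return cl_2d * cos_lam ** 2, cp_2d * cos_lam ** 2

    if __name__ == "__main__":
        cl_3d, cp_3d = simple_sweep(cl_2d=2.8, cp_2d=-6.0, sweep_deg=24.0)
        print(round(cl_3d, 3), round(cp_3d, 3))
    ```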

  10. Propulsion System Dynamic Modeling of the NASA Supersonic Concept Vehicle for AeroPropulsoServoElasticity

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Seidel, Jonathan

    2016-01-01

    A summary of the propulsion system modeling under NASA's High Speed Project (HSP) AeroPropulsoServoElasticity (APSE) task is provided with a focus on the propulsion system for the low-boom supersonic configuration developed by Lockheed Martin and referred to as the N+2 configuration. This summary includes details on the effort to date to develop computational models for the various propulsion system components. The objective of this paper is to summarize the model development effort in this task, while providing more detail in the modeling areas that have not been previously published. The purpose of the propulsion system modeling and the overall APSE effort is to develop an integrated dynamic vehicle model to conduct appropriate unsteady analysis of supersonic vehicle performance. This integrated APSE system model concept includes the propulsion system model, and the vehicle structural aerodynamics model. The development to date of such a preliminary integrated model will also be summarized in this report.

  11. Propulsion System Dynamic Modeling for the NASA Supersonic Concept Vehicle: AeroPropulsoServoElasticity

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph; Seidel, Jonathan

    2014-01-01

    A summary of the propulsion system modeling under NASA's High Speed Project (HSP) AeroPropulsoServoElasticity (APSE) task is provided with a focus on the propulsion system for the low-boom supersonic configuration developed by Lockheed Martin and referred to as the N+2 configuration. This summary includes details on the effort to date to develop computational models for the various propulsion system components. The objective of this paper is to summarize the model development effort in this task, while providing more detail in the modeling areas that have not been previously published. The purpose of the propulsion system modeling and the overall APSE effort is to develop an integrated dynamic vehicle model to conduct appropriate unsteady analysis of supersonic vehicle performance. This integrated APSE system model concept includes the propulsion system model, and the vehicle structural-aerodynamics model. The development to date of such a preliminary integrated model will also be summarized in this report.

  12. Propulsion System Dynamic Modeling of the NASA Supersonic Concept Vehicle for AeroPropulsoServoElasticity

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Seidel, Jonathan

    2014-01-01

    A summary of the propulsion system modeling under NASA's High Speed Project (HSP) AeroPropulsoServoElasticity (APSE) task is provided with a focus on the propulsion system for the low-boom supersonic configuration developed by Lockheed Martin and referred to as the N+2 configuration. This summary includes details on the effort to date to develop computational models for the various propulsion system components. The objective of this paper is to summarize the model development effort in this task, while providing more detail in the modeling areas that have not been previously published. The purpose of the propulsion system modeling and the overall APSE effort is to develop an integrated dynamic vehicle model to conduct appropriate unsteady analysis of supersonic vehicle performance. This integrated APSE system model concept includes the propulsion system model, and the vehicle structural-aerodynamics model. The development to date of such a preliminary integrated model will also be summarized in this report.

  13. Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.

    1997-01-01

    Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective the new generation of ceramic composites and intermetallics offers a significant potential for raising the thrust/weight ratio and reducing NO(x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for "graceful" rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking despite the fact that neither of its constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is born increasingly by the fibers until the ultimate strength of the composite is reached. Thus modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, NiAl single crystal turbine blade material; map a simplistic failure strength envelope of the material; develop a statistically based reliability computer algorithm, verify the reliability model and computer algorithm, and model stator vanes for rig tests. Thus establishing design protocols that enable the engineer to analyze and predict the mechanical behavior of ceramic composites and intermetallics would mitigate the prototype (trial and error) approach currently used by the engineering community. The primary objective of the research effort supported by this short term grant is the continued creation of enabling technologies for the macroanalysis of components fabricated from ceramic composites and intermetallic material systems. The creation of enabling technologies aids in shortening the product development cycle of components fabricated from the new high technology materials.
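
    For context, statistically based reliability algorithms for brittle materials are commonly built on the two-parameter Weibull distribution combined with a weakest-link assumption; the sketch below shows that standard form with invented parameters, and is not the specific algorithm developed under this effort.

    ```python
    # Standard two-parameter Weibull, weakest-link reliability sketch;
    # stresses and Weibull parameters are hypothetical.
    import math

    def weibull_failure_probability(stress, scale, modulus):
        """P_f = 1 - exp(-(stress / scale)**modulus) for a uniformly stressed element."""
        return 1.0 - math.exp(-((stress / scale) ** modulus))

    def component_reliability(element_stresses, scale, modulus):
        """Weakest link: the component survives only if every element survives."""
        survival = 1.0
        for s in element_stresses:
            survival *= 1.0 - weibull_failure_probability(s, scale, modulus)
        return survival

    if __name__ == "__main__":
        stresses = [320.0, 410.0, 275.0, 390.0]   # MPa, illustrative element stresses
        print(component_reliability(stresses, scale=900.0, modulus=12.0))
    ```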

  14. Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.

    1997-01-01

    Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal, and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective the new generation of ceramic composites and intermetallics offers a significant potential for raising the thrust/weight ratio and reducing NO(sub x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for 'graceful' rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking despite the fact that neither of its constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is born increasingly by the fibers until the ultimate strength of the composite is reached. Thus modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, NiAl single crystal turbine blade material; map a simplistic failure strength envelope of the material; develop a statistically based reliability computer algorithm; verify the reliability model and computer algorithm; and model stator vanes for rig tests. Thus establishing design protocols that enable the engineer to analyze and predict the mechanical behavior of ceramic composites and intermetallics would mitigate the prototype (trial and error) approach currently used by the engineering community. The primary objective of the research effort supported by this short term grant is the continued creation of enabling technologies for the macro-analysis of components fabricated from ceramic composites and intermetallic material systems. The creation of enabling technologies aids in shortening the product development cycle of components fabricated from the new high technology materials.

  15. Multiphysics Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen

    2006-01-01

    The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a hypothetical solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation. Formulations for heat transfer in solids and porous media were implemented and anchored. A two-pronged approach was employed in this effort: a detailed thermo-fluid analysis of a multi-channel flow element for the mid-section corrosion investigation, and a global model of the thrust chamber to understand the effect of hydrogen dissociation and recombination on heat transfer and thrust performance. The formulations and preliminary results for both aspects are presented.
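
    To see why hydrogen dissociation and recombination matter for chamber heat transfer and thrust, the sketch below estimates the equilibrium dissociation fraction of H2 as a function of temperature and pressure using a van 't Hoff approximation. The dissociation enthalpy and entropy are approximate textbook values assumed for illustration only; an analysis like the one described in this record would use full thermodynamic curve fits coupled to the flow solver.

    ```python
    # Minimal sketch: equilibrium hydrogen dissociation, H2 <-> 2H.
    # The dissociation enthalpy and entropy are approximate textbook values
    # assumed for illustration; a thrust-chamber analysis would use full
    # thermodynamic curve fits coupled to the flow solver.
    import math

    R = 8.314  # J/(mol K)

    def dissociation_fraction(T_kelvin, P_bar, dH=4.36e5, dS=98.7):
        """Degree of dissociation alpha of initially pure H2 at equilibrium.

        Kp = exp(-(dH - T*dS) / (R*T)) referenced to 1 bar, and
        alpha = sqrt(Kp / (Kp + 4*P)).
        """
        Kp = math.exp(-(dH - T_kelvin * dS) / (R * T_kelvin))
        return math.sqrt(Kp / (Kp + 4.0 * P_bar))

    # Dissociation grows with temperature and falls with pressure.
    for T in (2500.0, 3000.0, 3500.0):
        print(T, dissociation_fraction(T, P_bar=35.0))
    ```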

  16. An Integrative and Collaborative Approach to Creating a Diverse and Computationally Competent Geoscience Workforce

    NASA Astrophysics Data System (ADS)

    Moore, S. L.; Kar, A.; Gomez, R.

    2015-12-01

    A partnership between Fort Valley State University (FVSU), the Jackson School of Geosciences at The University of Texas (UT) at Austin, and the Texas Advanced Computing Center (TACC) is engaging computational geoscience faculty and researchers with academically talented underrepresented minority (URM) students, training this next generation of computational geoscientists to tackle geoscience grand challenges that require data-intensive, large-scale modeling and simulation on high-performance computers. UT Austin's geoscience outreach program GeoFORCE, recently awarded the Presidential Award for Excellence in Science, Mathematics and Engineering Mentoring, contributes to the collaborative best practices in engaging researchers with URM students. Collaborative efforts over the past decade are providing data demonstrating that integrative pipeline programs with mentoring and paid internship opportunities, multi-year scholarships, computational training, and communication skills development are having an impact on URM students developing middle skills for geoscience careers. Since 1997, the Cooperative Developmental Energy Program at FVSU and its collaborating universities have graduated 87 engineers, 33 geoscientists, and eight health physicists. Recruited as early as high school, students enroll for three years at FVSU, majoring in mathematics, chemistry, or biology, and then transfer to UT Austin or other partner institutions to complete a second STEM degree, including in the geosciences. A partnership with the Integrative Computational Education and Research Traineeship (ICERT), a National Science Foundation (NSF) Research Experience for Undergraduates (REU) Site at TACC, provides students with a 10-week summer research experience at UT Austin. Mentored by TACC researchers, students with no previous background in computational science learn to use some of the world's most powerful high-performance computing resources to address a geosciences grand challenge problem. Students increase their ability to understand and explain the societal impact of their research and to communicate it to multidisciplinary and lay audiences via near-peer mentoring, poster presentations, and publication opportunities.

  17. A Systematic Approach for Obtaining Performance on Matrix-Like Operations

    NASA Astrophysics Data System (ADS)

    Veras, Richard Michael

    Scientific Computation plays a critical role in the scientific process because it allows us to ask complex queries and test predictions that would otherwise be infeasible to perform experimentally. Because of this power, Scientific Computing has helped drive advances in many fields, ranging from Engineering and Physics to Biology and Sociology to Economics and Drug Development, and even to Machine Learning and Artificial Intelligence. Common among these domains is the desire for timely computational results, so a considerable amount of human expert effort is spent on obtaining performance for these scientific codes. This is no easy task, because each of these domains presents its own unique set of challenges to software developers, such as domain-specific operations, structurally complex data, and ever-growing datasets. Compounding these problems is the myriad of constantly changing, complex, and unique hardware platforms that an expert must target. Unfortunately, an expert is typically forced to reproduce their effort across multiple problem domains and hardware platforms. In this thesis, we demonstrate the automatic generation of expert-level high-performance scientific codes for Dense Linear Algebra (DLA), Structured Mesh (Stencil), Sparse Linear Algebra, and Graph Analytics. In particular, this thesis addresses the issue of obtaining performance on many complex platforms for a class of matrix-like operations that span many scientific, engineering, and social fields. We do this by automating a method used for obtaining high performance in DLA and extending it to structured, sparse, and scale-free domains. We argue that it is the use of the underlying structure found in the data from these domains that enables this process. Thus, obtaining performance for most operations does not occur in isolation from the data being operated on, but instead depends significantly on the structure of the data.
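
    As a concrete instance of a "matrix-like" operation whose efficient implementation hinges on the structure of the data, the sketch below shows a plain compressed sparse row (CSR) matrix-vector multiply, in which the work scales with the number of stored nonzeros rather than with the dense footprint. It is a reference illustration only, not the automatically generated expert-level code the thesis describes.

    ```python
    # Reference sparse matrix-vector multiply (y = A @ x) in CSR form.
    # Only stored nonzeros are touched, so work scales with nnz(A) rather
    # than the full m*n dense footprint.
    import numpy as np

    def spmv_csr(values, col_idx, row_ptr, x):
        """CSR sparse matrix-vector product."""
        m = len(row_ptr) - 1
        y = np.zeros(m)
        for i in range(m):
            for k in range(row_ptr[i], row_ptr[i + 1]):
                y[i] += values[k] * x[col_idx[k]]
        return y

    # 3x3 example: [[4, 0, 1], [0, 3, 0], [2, 0, 5]]
    values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
    col_idx = np.array([0, 2, 1, 0, 2])
    row_ptr = np.array([0, 2, 3, 5])
    x = np.array([1.0, 2.0, 3.0])
    print(spmv_csr(values, col_idx, row_ptr, x))  # -> [ 7.  6. 17.]
    ```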

  18. The Prediction of Drug-Disease Correlation Based on Gene Expression Data.

    PubMed

    Cui, Hui; Zhang, Menghuan; Yang, Qingmin; Li, Xiangyi; Liebman, Michael; Yu, Ying; Xie, Lu

    2018-01-01

    The explosive growth of high-throughput experimental methods and the resulting data yields both opportunity and challenge for selecting the correct drug to treat both a specific patient and their individual disease. Ideally, it would be useful and efficient if computational approaches could be applied to help achieve optimal drug-patient-disease matching, but current efforts have met with limited success. Current approaches have primarily utilized the measurable effect of a specific drug on target tissue or cell lines to identify the potential biological effect of such treatment. While these efforts have met with some level of success, much opportunity for improvement remains. This follows in particular from the observation that, for many diseases, actual patient response points to an increasing need for treatment with combinations of drugs rather than single-drug therapies. Only a few previous studies have yielded computational approaches for predicting the synergy of drug combinations by analyzing high-throughput molecular datasets, and these approaches focused on the characteristics of the drug itself without fully accounting for disease factors. Here, we propose an algorithm to specifically predict synergistic effects of drug combinations on various diseases by integrating the data characteristics of disease-related gene expression profiles with drug-treated gene expression profiles. We have demonstrated its utility through application to transcriptome data, including microarray and RNA-Seq data, and the drug-disease prediction results were validated against existing publications and drug databases. The method is also applicable to other quantitative profiling data, such as proteomics data. We also provide an interactive web interface that allows our Prediction of Drug-Disease method to be readily applied to user data. While our studies represent a preliminary exploration of this critical problem, we believe that the algorithm can provide the basis for further refinement towards addressing a large clinical need.
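
    One common way to relate drug-treated and disease gene expression profiles is to score a drug (or drug pair) by how strongly its per-gene signature anti-correlates with the disease signature. The sketch below illustrates this generic signature-reversal idea on synthetic data; it is not the specific algorithm proposed in the cited study, and the additive combination model is an assumption made purely for illustration.

    ```python
    # Generic signature-reversal scoring sketch on synthetic data; not the
    # algorithm of the cited study. Inputs are per-gene log fold-change
    # vectors over the same gene set.
    import numpy as np
    from scipy.stats import spearmanr

    def reversal_score(disease_signature, drug_signature):
        """Higher score = drug signature more strongly reverses the disease one."""
        rho, _ = spearmanr(disease_signature, drug_signature)
        return -rho  # anti-correlation with the disease signature is desirable

    def combination_score(disease_signature, drug_a, drug_b):
        """Naive additive model (an assumption) for a two-drug combination."""
        return reversal_score(disease_signature, drug_a + drug_b)

    rng = np.random.default_rng(0)
    disease = rng.normal(size=500)                       # synthetic disease log-FC
    drug_a = -0.6 * disease + rng.normal(scale=0.5, size=500)  # reverses disease
    drug_b = rng.normal(size=500)                        # unrelated drug
    print(reversal_score(disease, drug_a))               # strongly positive
    print(combination_score(disease, drug_a, drug_b))
    ```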

  19. Perspective: Web-based machine learning models for real-time screening of thermoelectric materials properties

    NASA Astrophysics Data System (ADS)

    Gaultois, Michael W.; Oliynyk, Anton O.; Mar, Arthur; Sparks, Taylor D.; Mulholland, Gregory J.; Meredig, Bryce

    2016-05-01

    The experimental search for new thermoelectric materials remains largely confined to a limited set of successful chemical and structural families, such as chalcogenides, skutterudites, and Zintl phases. In principle, computational tools such as density functional theory (DFT) offer the possibility of rationally guiding experimental synthesis efforts toward very different chemistries. However, in practice, predicting thermoelectric properties from first principles remains a challenging endeavor [J. Carrete et al., Phys. Rev. X 4, 011019 (2014)], and experimental researchers generally do not directly use computation to drive their own synthesis efforts. To bridge this practical gap between experimental needs and computational tools, we report an open machine learning-based recommendation engine (http://thermoelectrics.citrination.com) for materials researchers that suggests promising new thermoelectric compositions based on pre-screening about 25 000 known materials and also evaluates the feasibility of user-designed compounds. We show this engine can identify interesting chemistries very different from known thermoelectrics. Specifically, we describe the experimental characterization of one example set of compounds derived from our engine, RE12Co5Bi (RE = Gd, Er), which exhibits surprising thermoelectric performance given its unprecedentedly high loading with metallic d and f block elements and warrants further investigation as a new thermoelectric material platform. We show that our engine predicts this family of materials to have low thermal and high electrical conductivities, but modest Seebeck coefficient, all of which are confirmed experimentally. We note that the engine also predicts materials that may simultaneously optimize all three properties entering into zT; we selected RE12Co5Bi for this study due to its interesting chemical composition and known facile synthesis.
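
    For reference, the figure of merit the engine screens for is zT = S^2 * sigma * T / kappa, computed below from the Seebeck coefficient, electrical conductivity, and thermal conductivity. The example numbers are placeholders for illustration, not measured values for RE12Co5Bi or any other compound from the study.

    ```python
    # Back-of-the-envelope thermoelectric figure of merit zT = S^2 * sigma * T / kappa.
    # Example inputs are placeholder values, not data from the cited study.
    def figure_of_merit(seebeck_V_per_K, conductivity_S_per_m,
                        thermal_conductivity_W_per_mK, T_kelvin):
        """zT from Seebeck coefficient, electrical and thermal conductivity."""
        power_factor = seebeck_V_per_K ** 2 * conductivity_S_per_m  # W / (m K^2)
        return power_factor * T_kelvin / thermal_conductivity_W_per_mK

    # e.g. S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K), T = 600 K
    print(figure_of_merit(200e-6, 1e5, 1.5, 600.0))  # ~1.6
    ```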

  20. The DYNES Instrument: A Description and Overview

    NASA Astrophysics Data System (ADS)

    Zurawski, Jason; Ball, Robert; Barczyk, Artur; Binkley, Mathew; Boote, Jeff; Boyd, Eric; Brown, Aaron; Brown, Robert; Lehman, Tom; McKee, Shawn; Meekhof, Benjeman; Mughal, Azher; Newman, Harvey; Rozsa, Sandor; Sheldon, Paul; Tackett, Alan; Voicu, Ramiro; Wolff, Stephen; Yang, Xi

    2012-12-01

    Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources, often delivered by Research and Education (R&E) networking providers, and lead to complications in the overall process of end-to-end data management. Capacity and traffic management are key concerns of R&E network operators; a delicate balance is required to serve both long-lived, high-capacity network flows and more traditional end-user activities. The advent of dynamic circuit services, a technology that enables the creation of variable-duration, guaranteed-bandwidth networking channels, allows for the efficient use of common network infrastructures. These gains are seen particularly in locations where overall capacity is scarce compared to the (sustained peak) needs of user communities. Related efforts, including those of the LHCOPN [3] operations group and the emerging LHCONE [4] project, may take advantage of available resources by designating specific network activities as “high priority”, allowing reservation of dedicated bandwidth or optimization for deadline scheduling and predictable delivery patterns. This paper presents the DYNES instrument, an NSF-funded cyberinfrastructure project designed to facilitate end-to-end dynamic circuit services [2]. This combination of hardware and software innovation is being deployed across R&E networks in the United States at selected end-sites located on university campuses. DYNES peers with international efforts in other countries using similar solutions, increasing the reach of this emerging technology. This global data movement solution could be integrated into computing paradigms such as cloud and grid computing platforms, and through the use of APIs it can be integrated into existing data movement software.
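
    The arithmetic behind deadline-scheduled circuit reservations is straightforward: the guaranteed rate a dynamic circuit must provide is the dataset size (plus protocol overhead) divided by the time remaining before the deadline. The sketch below illustrates this sizing calculation; the overhead factor and the dataset size are assumed values for illustration, and the function is not part of any DYNES API.

    ```python
    # Sizing a guaranteed-bandwidth circuit for a deadline-driven transfer.
    # The overhead factor is an assumed illustration, not a DYNES parameter.
    def required_bandwidth_gbps(dataset_TB, deadline_hours, protocol_overhead=1.1):
        """Minimum sustained circuit rate (Gb/s) to finish before the deadline."""
        bits = dataset_TB * 8e12 * protocol_overhead   # TB -> bits, plus overhead
        seconds = deadline_hours * 3600.0
        return bits / seconds / 1e9

    # Moving a 500 TB dataset within 24 hours needs roughly a 51 Gb/s circuit.
    print(round(required_bandwidth_gbps(500, 24), 1))
    ```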
