Sample records for computing modernization program

  1. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
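
    The patent's relation is a simple sum; restated in symbols (the symbols here are chosen for illustration and do not appear in the record):

      \[
        F_t = C_t + M_t + B_t
      \]

    where, for a given time period $t$, $C_t$ is the time-period-specific maintenance cost, $M_t$ the modernization factor, $B_t$ the backlog factor, and $F_t$ the resulting future facility condition.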

  2. Improving robustness and computational efficiency using modern C++

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paterno, M.; Kowalkowski, J.; Green, C.

    2014-01-01

    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.
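
    As a flavor of the kind of comparison the paper describes (this sketch is illustrative and is not taken from the paper): the same reduction written C-style and in modern C++. With optimization enabled, current compilers typically emit equally efficient code for both.

      #include <cstddef>
      #include <cstdio>
      #include <numeric>
      #include <vector>

      // C-style accumulation over a raw array.
      double sum_c(const double* xs, std::size_t n) {
          double total = 0.0;
          for (std::size_t i = 0; i < n; ++i) total += xs[i];
          return total;
      }

      // Modern C++: the intent ("reduce this range") is stated directly,
      // and the compiler can generate code matching the hand-written loop.
      double sum_modern(const std::vector<double>& xs) {
          return std::accumulate(xs.begin(), xs.end(), 0.0);
      }

      int main() {
          const std::vector<double> xs{1.0, 2.0, 3.5};
          std::printf("%f %f\n", sum_c(xs.data(), xs.size()), sum_modern(xs));
      }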

  3. Articles on Practical Cybernetics. Computer-Developed Computers; Heuristics and Modern Sciences; Linguistics and Practice; Cybernetics and Moral-Ethical Considerations; and Men and Machines at the Chessboard.

    ERIC Educational Resources Information Center

    Berg, A. I.; And Others

    Five articles which were selected from a Russian language book on cybernetics and then translated are presented here. They deal with the topics of: computer-developed computers, heuristics and modern sciences, linguistics and practice, cybernetics and moral-ethical considerations, and computer chess programs. (Author/JY)

  4. Identification of key ancestors of modern germplasm in a breeding program of maize.

    PubMed

    Technow, F; Schrag, T A; Schipprack, W; Melchinger, A E

    2014-12-01

    Probabilities of gene origin computed from the genomic kinship matrix can accurately identify key ancestors of modern germplasm. Identifying the key ancestors of modern plant breeding populations can provide valuable insights into the history of a breeding program and provide reference genomes for next-generation whole-genome sequencing. In an animal breeding context, a method was developed that employs probabilities of gene origin, computed from the pedigree-based additive kinship matrix, for identifying key ancestors. Because reliable and complete pedigree information is often not available in plant breeding, we replaced the additive kinship matrix with the genomic kinship matrix. As a proof of concept, we applied this approach to simulated data sets with known ancestries. The relative contribution of the ancestral lines to later generations could be determined with high accuracy, with and without selection. Our method was subsequently used for identifying the key ancestors of the modern Dent germplasm of the public maize breeding program of the University of Hohenheim. We found that the modern germplasm can be traced back to six or seven key ancestors, with one or two of them having a disproportionately large contribution. These results largely corroborated conjectures based on early records of the breeding program. We conclude that probabilities of gene origin computed from the genomic kinship matrix can be used for identifying key ancestors in breeding programs and estimating the proportion of genes contributed by them.
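
    A deliberately simplified illustration of the idea (one plausible computation, not the authors' exact algorithm): given a genomic kinship matrix between candidate ancestors and modern lines, each ancestor's mean kinship with the panel, normalized across ancestors, gives a rough proxy for its relative contribution.

      #include <cstddef>
      #include <cstdio>
      #include <vector>

      // K[i][j]: genomic kinship between candidate ancestor i and modern line j
      // (toy numbers below; a real matrix would be estimated from markers).
      // Returns each ancestor's mean kinship with the panel, normalized so the
      // values sum to 1 -- a crude proxy for relative genetic contribution.
      std::vector<double> contributions(const std::vector<std::vector<double>>& K) {
          std::vector<double> c(K.size(), 0.0);
          double total = 0.0;
          for (std::size_t i = 0; i < K.size(); ++i) {
              for (double k : K[i]) c[i] += k;
              c[i] /= static_cast<double>(K[i].size());
              total += c[i];
          }
          for (double& ci : c) ci /= total;  // normalize to proportions
          return c;
      }

      int main() {
          const std::vector<std::vector<double>> K{
              {0.50, 0.40, 0.45},   // ancestor with a large contribution
              {0.10, 0.20, 0.15}};  // ancestor with a small contribution
          for (double ci : contributions(K)) std::printf("%.3f\n", ci);
      }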

  5. Welding--Trade or Profession?

    ERIC Educational Resources Information Center

    Albright, C. E.; Smith, Kenneth

    2006-01-01

    This article discusses a collaborative program between schools with the purpose of training and providing advanced education in welding. Modern manufacturing is turning to automation to increase productivity, but it can be a great challenge to program robots and other computer-controlled welding and joining systems. Computer programming and…

  6. Computer Technology: State of the Art.

    ERIC Educational Resources Information Center

    Withington, Frederic G.

    1981-01-01

    Describes the nature of modern general-purpose computer systems, including hardware, semiconductor electronics, microprocessors, computer architecture, input/output technology, and system control programs. Seven suggested readings are cited. (FM)

  7. Web Based Parallel Programming Workshop for Undergraduate Education.

    ERIC Educational Resources Information Center

    Marcus, Robert L.; Robertson, Douglass

    Central State University (Ohio), under a contract with Nichols Research Corporation, has developed a World Wide Web-based workshop on high performance computing entitled "IBM SP2 Parallel Programming Workshop." The research is part of the DoD (Department of Defense) High Performance Computing Modernization Program. The research…
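
    The record does not describe the workshop's exercises; on the IBM SP2, parallel programming ordinarily meant MPI, so (as an assumption about the material, not a quotation from it) an introductory example would have resembled the following:

      #include <mpi.h>
      #include <cstdio>

      // Minimal MPI program: each process reports its rank out of the total
      // number of processes (run with, e.g., mpirun -np 4 ./a.out).
      int main(int argc, char** argv) {
          MPI_Init(&argc, &argv);
          int rank = 0, size = 0;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          std::printf("hello from rank %d of %d\n", rank, size);
          MPI_Finalize();
          return 0;
      }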

  8. The NASA modern technology rotors program

    NASA Technical Reports Server (NTRS)

    Watts, M. E.; Cross, J. L.

    1986-01-01

    Existing databases regarding helicopters are based on work conducted on 'old-technology' rotor systems. The Modern Technology Rotors (MTR) Program is intended to provide extensive databases on rotor systems using present and emerging technology. The MTR is concerned with modern, four-bladed rotor systems presently being manufactured or under development. Aspects of the MTR philosophy are considered along with instrumentation, the MTR test program, the BV 360 Rotor, and the UH-60 Black Hawk. The program phases include computer modelling, shake test, model-scale test, minimally instrumented flight test, extensively pressure-instrumented-blade flight test, and full-scale wind tunnel test.

  9. Overview of the SAMSI year-long program on Statistical, Mathematical and Computational Methods for Astronomy

    NASA Astrophysics Data System (ADS)

    Jogesh Babu, G.

    2017-01-01

    A year-long research program (Aug 2016-May 2017) on 'Statistical, Mathematical and Computational Methods for Astronomy (ASTRO)' is well under way at the Statistical and Applied Mathematical Sciences Institute (SAMSI), a National Science Foundation research institute in Research Triangle Park, NC. This program has brought together astronomers, computer scientists, applied mathematicians and statisticians. The main aims of this program are: to foster cross-disciplinary activities; to accelerate the adoption of modern statistical and mathematical tools into modern astronomy; and to develop new tools needed for important astronomical research problems. The program provides multiple avenues for cross-disciplinary interactions, including several workshops, long-term visitors, and regular teleconferences, so participants can continue collaborations even if they can only spend limited time in residence at SAMSI. The main program is organized around five working groups: (i) Uncertainty Quantification and Astrophysical Emulation; (ii) Synoptic Time Domain Surveys; (iii) Multivariate and Irregularly Sampled Time Series; (iv) Astrophysical Populations; and (v) Statistics, Computation, and Modeling in Cosmology. A brief description of the work under way by each of these groups will be given. Overlaps among the various working groups will also be highlighted. How the wider astronomy community can both participate in and benefit from the activities will be briefly mentioned.

  10. The Amber Biomolecular Simulation Programs

    PubMed Central

    CASE, DAVID A.; CHEATHAM, THOMAS E.; DARDEN, TOM; GOHLKE, HOLGER; LUO, RAY; MERZ, KENNETH M.; ONUFRIEV, ALEXEY; SIMMERLING, CARLOS; WANG, BING; WOODS, ROBERT J.

    2006-01-01

    We describe the development, current features, and some directions for future development of the Amber package of computer programs. This package evolved from a program that was constructed in the late 1970s to do Assisted Model Building with Energy Refinement, and now contains a group of programs embodying a number of powerful tools of modern computational chemistry, focused on molecular dynamics and free energy calculations of proteins, nucleic acids, and carbohydrates. PMID:16200636

  11. The Importance of Computer Programming Skills to Educational Researchers.

    ERIC Educational Resources Information Center

    Lawson, Stephen

    The use of the modern computer has revolutionized the field of educational research. Software packages are currently available that allow almost anyone to analyze data efficiently and rapidly. Yet, caution must temper the widespread acceptance and use of these programs. It is recommended that the researcher not rely solely on the use of…

  12. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING

    EPA Science Inventory

    The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...

  13. NASA STI Program Coordinating Council Eleventh Meeting: NASA STI Modernization Plan

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The theme of this NASA Scientific and Technical Information Program Coordinating Council Meeting was the modernization of the STI Program. Topics covered included the activities of the Engineering Review Board in the creation of the Infrastructure Upgrade Plan, the progress of the RECON Replacement Project, the use and status of Electronic SCAN (Selected Current Aerospace Notices), the Machine Translation Project, multimedia, electronic document interchange, the NASA Access Mechanism, computer network upgrades, and standards in the architectural effort.

  14. DoD High Performance Computing Modernization Program Users Group Conference (HPCMP UGC 2011) Held in Portland, Oregon on June 20-23, 2011

    DTIC Science & Technology

    2011-06-01

    4. Conclusion: The Web-based AGeS system described in this paper is a computationally-efficient and scalable system for high-throughput genome… method for protecting web services involves making them more resilient to attack using autonomic computing techniques. This paper presents our initial… 20-23, 2011, 2011 DoD High Performance Computing Modernization Program Users Group Conference (HPCMP UGC 2011). The papers in this book comprise the…

  15. Utilizing Modern Technology in Adult and Continuing Education Programs.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Bureau of Curriculum Development.

    This publication, designed as a supplement to the manual entitled "Managing Programs for Adults" (1983), provides guidelines for establishing or expanding the use of video and computers by administration and staff of adult education programs. The first section presents the use of video technology for program promotion, instruction, and staff…

  16. Graphical User Interface Programming in Introductory Computer Science.

    ERIC Educational Resources Information Center

    Skolnick, Michael M.; Spooner, David L.

    Modern computing systems exploit graphical user interfaces for interaction with users; as a result, introductory computer science courses must begin to teach the principles underlying such interfaces. This paper presents an approach to graphical user interface (GUI) implementation that is simple enough for beginning students to understand, yet…

  17. Computer Problem-Solving Coaches for Introductory Physics: Design and Usability Studies

    ERIC Educational Resources Information Center

    Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew

    2016-01-01

    The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how…

  18. Computer-Aided Instruction in Automated Instrumentation.

    ERIC Educational Resources Information Center

    Stephenson, David T.

    1986-01-01

    Discusses functions of automated instrumentation systems, i.e., systems which combine electrical measuring instruments and a controlling computer to measure responses of a unit under test. The computer-assisted tutorial described here is programmed for use on such a system--a modern microwave spectrum analyzer--to introduce engineering students to…

  19. Journal of Chemical Education: Software.

    ERIC Educational Resources Information Center

    Journal of Chemical Education, 1988

    1988-01-01

    Describes a chemistry software program that emulates a modern binary-gradient HPLC system with reversed-phase column behavior. Allows for solvent selection, adjustment of the gradient program, column selection, detector selection, handling of computer sample data, and sample preparation. (MVL)

  20. Execution environment for intelligent real-time control systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, Janos

    1987-01-01

    Modern telerobot control technology requires the integration of symbolic and non-symbolic programming techniques, different models of parallel computation, and various programming paradigms. The Multigraph Architecture, which has been developed for the implementation of intelligent real-time control systems, is described. The layered architecture includes specific computational models, an integrated execution environment, and various high-level tools. A special feature of the architecture is the tight coupling between the symbolic and non-symbolic computations. It supports not only a data interface, but also the integration of the control structures in a parallel computing environment.

  1. A Debate over the Teaching of a Legacy Programming Language in an Information Technology (IT) Program

    ERIC Educational Resources Information Center

    Ali, Azad; Smith, David

    2014-01-01

    This paper presents a debate between two faculty members regarding the teaching of a legacy programming course (COBOL) in a Computer Science (CS) program. Of the two faculty members, one calls for the continuation of teaching this language and the other calls for replacing it with a modern language. Although CS programs are notorious…

  2. Orthorectification by Using Gpgpu Method

    NASA Astrophysics Data System (ADS)

    Sahin, H.; Kulur, S.

    2012-07-01

    Thanks to the nature of graphics processing, newly released products offer highly parallel processing units with high memory bandwidth and computational power of more than a teraflop per second. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors, with very fast computing capabilities and high memory bandwidth compared to central processing units (CPUs). Data-parallel computation can be described briefly as mapping data elements to parallel processing threads. The rapid development of GPU programmability and capability has attracted the attention of researchers dealing with complex problems that need high-volume calculation. This interest gave rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful hardware that is cheap and affordable, so they have become an alternative to conventional processors: graphics chips that were once fixed-function hardware have been transformed into modern, powerful and programmable processors. The biggest problem is that graphics processing units use programming models unlike current programming methods; an efficient GPU program therefore requires re-coding the current algorithm with the limitations and structure of the graphics hardware in mind, and these processors cannot be programmed with traditional methods such as event-driven programming. GPUs are especially effective at repeating the same computing steps over many data elements when high accuracy is needed, making the computing process quicker and more accurate; compared to GPUs, CPUs, which perform just one computation at a time according to the flow control, are slower. This structure can be exploited for various applications of computer technology. This study covers how the general-purpose parallel programming and computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with those of a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA; sample images of various sizes were processed and the results evaluated. The GPGPU method is particularly useful for repeating the same computations over highly dense data, finding the solution quickly.
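
    The core pattern the abstract describes, mapping data elements to parallel processing threads, can be sketched in standard C++17 (the study itself uses CUDA; this CPU-side analogue is for illustration only):

      #include <algorithm>
      #include <cstdio>
      #include <execution>
      #include <vector>

      // One logical unit of work per data element; the parallel execution
      // policy lets the runtime spread elements across threads, the same
      // pattern a CUDA kernel expresses across GPU threads.
      int main() {
          std::vector<float> in(1000000, 2.0f), out(in.size());
          std::transform(std::execution::par_unseq, in.begin(), in.end(),
                         out.begin(), [](float x) { return x * x + 1.0f; });
          std::printf("out[0] = %f\n", out[0]);
      }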

  3. When technology became language: the origins of the linguistic conception of computer programming, 1950-1960.

    PubMed

    Nofre, David; Priestley, Mark; Alberts, Gerard

    2014-01-01

    Language is one of the central metaphors around which the discipline of computer science has been built. The language metaphor entered modern computing as part of a cybernetic discourse, but during the second half of the 1950s acquired a more abstract meaning, closely related to the formal languages of logic and linguistics. The article argues that this transformation was related to the appearance of the commercial computer in the mid-1950s. Managers of computing installations and specialists on computer programming in academic computer centers, confronted with an increasing variety of machines, called for the creation of "common" or "universal languages" to enable the migration of computer code from machine to machine. Finally, the article shows how the idea of a universal language was a decisive step in the emergence of programming languages, in the recognition of computer programming as a proper field of knowledge, and eventually in the way we think of the computer.

  4. The potential of computer software that supports the diagnosis of workplace ergonomics in shaping health awareness

    NASA Astrophysics Data System (ADS)

    Lubkowska, Wioletta

    2017-11-01

    The growing prevalence of health problems among computer workstation workers has become one of the biggest threats to the overall health of our population. That is why many modern scientists are looking for ways and methods to prevent and reverse these negative trends. The purpose of this article is to present the potential for practical use of computer programs to design ergonomic workplaces and assess postural loads. These programs help configure the computer workstation correctly and adopt the correct body position during work, which reduces the risk of health problems. Creating visually attractive programs helps encourage and inspire those who work with a computer to introduce ergonomic solutions and reject a sedentary lifestyle.

  5. Educate at Penn State: Preparing Beginning Teachers with Powerful Digital Tools

    ERIC Educational Resources Information Center

    Murray, Orrin T.; Zembal-Saul, Carla

    2008-01-01

    University based teacher education programs are slowly beginning to catch up to other professional programs that use modern digital tools to prepare students to enter professional fields. This discussion looks at how one teacher education program reached the conclusion that students and faculty would use notebook computers. Frequently referred to…

  6. Preparing Students for Careers in Science and Industry with Computational Physics

    NASA Astrophysics Data System (ADS)

    Florinski, V. A.

    2011-12-01

    Funded by an NSF CAREER grant, the University of Alabama in Huntsville (UAH) has launched a new graduate program in Computational Physics. It is universally accepted that today's physics is done on a computer. The program blurs the boundary between physics and computer science by teaching students modern, practical techniques for solving difficult physics problems on diverse computational platforms. Currently consisting of two courses first offered in the Fall of 2011, the program will eventually include five courses covering methods for fluid dynamics, particle transport via stochastic methods, and hybrid and PIC plasma simulations. UAH's unique location allows courses to be shaped through discussions with faculty, NASA/MSFC researchers, and local R&D business representatives, i.e., potential employers of the program's graduates. Students currently participating in the program have all begun their research careers in space and plasma physics; many are presenting their research at this meeting.

  7. Revisiting Mathematical Problem Solving and Posing in the Digital Era: Toward Pedagogically Sound Uses of Modern Technology

    ERIC Educational Resources Information Center

    Abramovich, S.

    2014-01-01

    The availability of sophisticated computer programs such as "Wolfram Alpha" has made many problems found in the secondary mathematics curriculum somewhat obsolete for they can be easily solved by the software. Against this background, an interplay between the power of a modern tool of technology and educational constraints it presents is…

  8. A computer simulation of the transient response of a 4 cylinder Stirling engine with burner and air preheater in a vehicle

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1981-01-01

    A series of computer programs is presented, with full documentation, that simulates the transient behavior of a modern 4-cylinder Siemens-arrangement Stirling engine with burner and air preheater. Cold start, cranking, idling, acceleration through 3 gear changes, and steady-speed operation are simulated. Sample results and complete operating instructions are given. A full source code listing of all programs is included.

  9. Virtual memory

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Virtual memory was conceived as a way to automate overlaying of program segments. Modern computers have very large main memories, but need automatic solutions to the relocation and protection problems. Virtual memory serves this need as well and is thus useful in computers of all sizes. The history of the idea is traced, showing how it has become a widespread, little noticed feature of computers today.
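
    A small POSIX illustration of the point (assuming a Linux-like system; the historical survey itself contains no code): a large virtual region can be reserved up front, with physical pages supplied automatically on first touch.

      #include <sys/mman.h>
      #include <cstddef>
      #include <cstdio>

      // Reserve 1 GiB of virtual address space. No physical memory is
      // committed until a page is touched; the OS handles relocation and
      // backing automatically -- the descendant of hand-written overlays.
      int main() {
          const std::size_t len = std::size_t{1} << 30;
          void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (p == MAP_FAILED) { std::perror("mmap"); return 1; }
          static_cast<char*>(p)[0] = 42;  // first touch faults in one page
          munmap(p, len);
          return 0;
      }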

  10. "Extreme Programming" in a Bioinformatics Class

    ERIC Educational Resources Information Center

    Kelley, Scott; Alger, Christianna; Deutschman, Douglas

    2009-01-01

    The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP).…

  11. An Introduction to Programming for Bioscientists: A Python-Based Primer

    PubMed Central

    Mura, Cameron

    2016-01-01

    Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in molecular biology, biochemistry, and other biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language’s usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a “variable,” the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences. PMID:27271528
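
    The capstone computation named in the abstract is small enough to state directly; it is sketched here in C++ for consistency with the other examples in this listing (the primer itself, of course, develops it in Python):

      #include <cstddef>
      #include <cstdio>
      #include <stdexcept>
      #include <string>

      // Hamming distance: the number of positions at which two equal-length
      // sequences differ. The primer wraps this computation in a GUI.
      int hamming(const std::string& a, const std::string& b) {
          if (a.size() != b.size())
              throw std::invalid_argument("sequences must have equal length");
          int d = 0;
          for (std::size_t i = 0; i < a.size(); ++i)
              if (a[i] != b[i]) ++d;
          return d;
      }

      int main() {
          std::printf("%d\n", hamming("GATTACA", "GACTATA"));  // prints 2
      }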

  12. An Introduction to Programming for Bioscientists: A Python-Based Primer.

    PubMed

    Ekmekci, Berk; McAnany, Charles E; Mura, Cameron

    2016-06-01

    Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in molecular biology, biochemistry, and other biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language's usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a "variable," the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences.

  13. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth System Prediction Capability Becomes Operational

    NASA Astrophysics Data System (ADS)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and passing connecting data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the Navy's Hybrid Coordinate Ocean Model (HYCOM) scalability - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology, Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will provide the DoD with future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how DoD's HPCMP will ensure N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.

  14. Double degree master program: Optical Design

    NASA Astrophysics Data System (ADS)

    Bakholdin, Alexey; Kujawinska, Malgorzata; Livshits, Irina; Styk, Adam; Voznesenskaya, Anna; Ezhova, Kseniia; Ermolayeva, Elena; Ivanova, Tatiana; Romanova, Galina; Tolstoba, Nadezhda

    2015-10-01

    Modern tendencies in higher education require the development of master programs providing achievement of learning outcomes corresponding to quickly changing job market needs. ITMO University, represented by the Applied and Computer Optics Department and the Optical Design and Testing Laboratory, jointly with Warsaw University of Technology, represented by the Institute of Micromechanics and Photonics at the Faculty of Mechatronics, has developed a novel international double-degree master program, "Optical Design", accumulating the expertise of both universities, including experienced teaching staff, educational technologies, and experimental resources. The program presents studies targeting research and professional activities in high-tech fields connected with optical and optoelectronic devices, optical engineering, numerical methods and computer technologies. This master program deals with the design of optical systems of various types, assemblies and layouts using computer modeling means; investigation of light distribution phenomena; image modeling and formation; and the development of optical methods for image analysis and optical metrology, including optical testing, materials characterization, NDT, and industrial control and monitoring. The goal of this program is to train graduates capable of solving a wide range of research and engineering tasks in optical design and metrology leading to modern manufacturing and innovation. Variability of the program structure provides its flexibility and adaptation to current job market demands and personal learning paths for each student. In addition, a considerable proportion of internship and research expands practical skills. Some special features of the "Optical Design" program, which implements the best practices of both universities, and the challenges and lessons learnt during its realization are presented in the paper.

  15. Department of Defense High Performance Computing Modernization Program. 2008 Annual Report

    DTIC Science & Technology

    2009-04-01

    place to another on the network. Without it, a computer could only talk to itself - no email, no web browsing, and no iTunes. Most of the Internet… Your SecurID Card), Ken Renard; Secure Wireless, Rob Scott and Stephen Bowman; Securing Today's Networks, Rich Whittney, Juniper Networks, Federal…

  16. Running R Statistical Computing Environment Software on the Peregrine

    Science.gov Websites

    R is a collaborative project for the development of new statistical methodologies and enjoys a large user base. Please consult the distribution details… programming paradigms to better leverage modern HPC systems… the CRAN task view for High Performance Computing…

  17. Numerical Prediction of Pitch Damping Stability Derivatives for Finned Projectiles

    DTIC Science & Technology

    2013-11-01

    in part by a grant of high-performance computing time from the U.S. DOD High Performance Computing Modernization Program (HPCMP) at the Army… 3.3.2 Time-Accurate Simulations…

  18. A modern approach to storing of 3D geometry of objects in machine engineering industry

    NASA Astrophysics Data System (ADS)

    Sokolova, E. A.; Aslanov, G. A.; Sokolov, A. A.

    2017-02-01

    3D graphics is a branch of computer graphics that has absorbed much from vector and raster computer graphics. It is used in interior design projects, architectural projects, advertising, educational computer programs, movies, and visual images of parts and products in engineering, etc. 3D computer graphics allows one to create 3D scenes along with simulation of lighting conditions and setting up of viewpoints.

  19. New Trends in E-Science: Machine Learning and Knowledge Discovery in Databases

    NASA Astrophysics Data System (ADS)

    Brescia, Massimo

    2012-11-01

    Data mining, or Knowledge Discovery in Databases (KDD), while being the main methodology to extract the scientific information contained in Massive Data Sets (MDS), needs to tackle crucial problems, since it has to orchestrate complex challenges posed by transparent access to different computing environments, scalability of algorithms, and reusability of resources. To achieve a leap forward for the progress of e-science in the data-avalanche era, the community needs to implement an infrastructure capable of performing data access, processing and mining in a distributed but integrated context. The increasing complexity of modern technologies has brought about a huge production of data, whose warehouse management, together with the need to optimize analysis and mining procedures, is leading to a change of concept in modern science. Classical data exploration, based on the user's own local data storage and limited computing infrastructure, is no longer efficient in the case of MDS spread worldwide over inhomogeneous data centres and requiring teraflop processing power. In this context, modern experimental and observational science requires a good understanding of computer science, network infrastructures, data mining, etc., i.e., of all those techniques which fall into the domain of the so-called e-science (recently assessed also by the Fourth Paradigm of Science). Such understanding is almost completely absent in the older generations of scientists, and this is reflected in the inadequacy of most academic and research programs. A paradigm shift is needed: statistical pattern recognition, object-oriented programming, distributed computing, and parallel programming need to become an essential part of the scientific background. A possible practical solution is to provide the research community with easy-to-understand, easy-to-use tools, based on Web 2.0 technologies and machine learning methodology: tools where almost all the complexity is hidden from the final user, but which are still flexible and able to produce efficient and reliable scientific results. All these considerations will be described in detail in the chapter. Moreover, examples of modern applications offering a wide variety of e-science communities a large spectrum of computational facilities to exploit the wealth of available massive data sets and powerful machine learning and statistical algorithms will also be introduced.

  20. Adapting Extension Food Safety Programming for Vegetable Growers to Accommodate Differences in Ethnicity, Farming Scale, and Other Individual Factors

    ERIC Educational Resources Information Center

    Kline, Terence R.; Kneen, Harold; Barrett, Eric; Kleinschmidt, Andy; Doohan, Doug

    2012-01-01

    Differences in vegetable production methods utilized by American growers create distinct challenges for Extension personnel providing food safety training to producer groups. A program employing computers and projectors will not be accepted by an Amish group that does not accept modern technology. We have developed an outreach program that covers…

  1. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing Modernization Program (HPCMP) and the NASA Advanced Supercomputing (NAS) Division, a study was conducted to assess the role of supercomputers in the computational aeroelasticity of aerospace vehicles. The study is mostly based on responses to a web-based questionnaire designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  2. From micro to mainframe. A practical approach to perinatal data processing.

    PubMed

    Yeh, S Y; Lincoln, T

    1985-04-01

    A new, practical approach to perinatal data processing for a large obstetric population is described. This was done with a microcomputer for data entry and a mainframe computer for data reduction. The Screen Oriented Data Access (SODA) program was used to generate the data entry form and to input data into the Apple II Plus computer. Data were stored on diskettes and transmitted through a modem and telephone line to the IBM 370/168 computer. The Statistical Analysis System (SAS) program was used for statistical analyses and report generation. This approach was found to be most practical, flexible, and economical.

  3. Using Multimedia for Teaching Analysis in History of Modern Architecture.

    ERIC Educational Resources Information Center

    Perryman, Garry

    This paper presents a case for the development and support of a computer-based interactive multimedia program for teaching analysis in community college architecture design programs. Analysis in architecture design is an extremely important strategy for the teaching of higher-order thinking skills, which senior schools of architecture look for in…

  4. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozacik, Stephen

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  5. Lecture notes in economics and mathematical system. Volume 150: Supercritical wing sections 3

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Garabedian, P.; Korn, D.

    1977-01-01

    Application of computational fluid dynamics to the design and analysis of supercritical wing sections is discussed. Computer programs used to study the flight of modern aircraft at high subsonic speeds are listed and described. The cascades of shockless transonic airfoils that are expected to increase the efficiency of compressors and turbines are included.

  6. MEMORE: An Environment for Data Collection and Analysis on the Use of Computers in Education

    ERIC Educational Resources Information Center

    Goldschmidt, Ronaldo; Fernandes de Souza, Isabel; Norris, Monica; Passos, Claudio; Ferlin, Claudia; Cavalcanti, Maria Claudia; Soares, Jorge

    2016-01-01

    The use of computers as teaching and learning tools plays a particularly important role in modern society. Within this scenario, Brazil launched its own version of the "One Laptop per Child" (OLPC) program, and this initiative, termed PROUCA, has already distributed hundreds of low-cost laptops for educational purposes in many Brazilian…

  7. TOXCAST, A TOOL FOR CATEGORIZATION AND ...

    EPA Pesticide Factsheets

    Across several EPA Program Offices (e.g., OPPTS, OW, OAR), there is a clear need to develop strategies and methods to screen large numbers of chemicals for potential toxicity, and to use the resulting information to prioritize the use of testing resources towards those entities and endpoints that present the greatest likelihood of risk to human health and the environment. This need could be addressed using the experience of the pharmaceutical industry in the use of advanced modern molecular biology and computational chemistry tools for the development of new drugs, with appropriate adjustment to the needs and desires of environmental toxicology. A conceptual approach named ToxCast has been developed to address the needs of EPA Program Offices in the area of prioritization and screening. Modern computational chemistry and molecular biology tools bring enabling technologies forward that can provide information about the physical and biological properties of large numbers of chemicals. The essence of the proposal is to conduct a demonstration project based upon a rich toxicological database (e.g., registered pesticides, or the chemicals tested in the NTP bioassay program), select a fairly large number (50-100 or more chemicals) representative of a number of differing structural classes and phenotypic outcomes (e.g., carcinogens, reproductive toxicants, neurotoxicants), and evaluate them across a broad spectrum of information domains that modern technology has pro…

  8. DataForge: Modular platform for data storage and analysis

    NASA Astrophysics Data System (ADS)

    Nozik, Alexander

    2018-04-01

    DataForge is a framework for automated data acquisition, storage and analysis based on modern achievements in applied programming. The aim of DataForge is to automate standard tasks like parallel data processing, logging, output sorting and distributed computing. The framework also makes extensive use of declarative programming principles via its metadata concept, which allows a certain degree of meta-programming and improves the reproducibility of results.

  9. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    NASA Astrophysics Data System (ADS)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
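
    To make the operator-overloading approach concrete (a minimal forward-mode sketch; none of the four tools tested exposes this exact interface): each value carries its derivative, and overloaded operators propagate both together, whereas source transformation instead generates new derivative code that a compiler can optimize across whole statements.

      #include <cstdio>

      // Forward-mode AD by operator overloading: a Dual holds a value and
      // its derivative, and arithmetic propagates both by the chain rule.
      struct Dual {
          double v;  // value
          double d;  // derivative
      };
      Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
      Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }

      int main() {
          Dual x{3.0, 1.0};           // seed dx/dx = 1
          Dual y = x * x + x;         // f(x) = x^2 + x
          std::printf("f(3) = %g, f'(3) = %g\n", y.v, y.d);  // 12 and 7
      }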

  10. Application of evolutionary computation in ECAD problems

    NASA Astrophysics Data System (ADS)

    Lee, Dae-Hyun; Hwang, Seung H.

    1998-10-01

    Design of modern electronic systems is a complicated task which demands the use of computer-aided design (CAD) tools. Since many problems in ECAD are combinatorial optimization problems, evolutionary computation techniques such as genetic algorithms and evolutionary programming have been widely employed to solve them. We have applied evolutionary computation techniques to ECAD problems such as technology mapping, microcode-bit optimization, data path ordering and peak power estimation, where their benefits are well observed. This paper presents experiences and discusses issues in those applications.
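
    A skeleton of the kind of evolutionary search involved (illustrative only; the paper's encodings and fitness functions for technology mapping, microcode-bit optimization, data path ordering and peak power estimation are domain-specific): a genetic algorithm on a toy bit-string problem.

      #include <algorithm>
      #include <cstdio>
      #include <random>
      #include <vector>

      // Genetic-algorithm skeleton on a toy problem (maximize 1-bits).
      // In ECAD the bit string would encode, e.g., a technology mapping or
      // a data-path ordering, and fitness would score area/delay/power.
      int main() {
          std::mt19937 rng(42);
          const int pop = 30, len = 24, gens = 60;
          std::bernoulli_distribution coin(0.5), mut(0.02);
          std::uniform_int_distribution<int> pick(0, pop - 1), cut(1, len - 1);
          auto fitness = [](const std::vector<int>& g) {
              return (int)std::count(g.begin(), g.end(), 1);
          };
          std::vector<std::vector<int>> P(pop, std::vector<int>(len));
          for (auto& g : P) for (auto& b : g) b = coin(rng);
          for (int t = 0; t < gens; ++t) {
              auto parent = [&]() -> const std::vector<int>& {  // 2-way tournament
                  int i = pick(rng), j = pick(rng);
                  return fitness(P[i]) >= fitness(P[j]) ? P[i] : P[j];
              };
              std::vector<std::vector<int>> Q;
              while ((int)Q.size() < pop) {
                  std::vector<int> child = parent();       // copy of parent A
                  const auto& b = parent();                // parent B
                  int c = cut(rng);                        // one-point crossover
                  std::copy(b.begin() + c, b.end(), child.begin() + c);
                  for (auto& bit : child) if (mut(rng)) bit ^= 1;  // mutation
                  Q.push_back(std::move(child));
              }
              P = std::move(Q);
          }
          int best = 0;
          for (auto& g : P) best = std::max(best, fitness(g));
          std::printf("best fitness: %d of %d\n", best, len);
      }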

  11. Education through the prism of computation

    NASA Astrophysics Data System (ADS)

    Kaurov, Vitaliy

    2014-03-01

    With the rapid development of technology, computation claims its irrevocable place among the research components of modern science. Thus, to foster a successful future scientist, engineer or educator, we need to add computation to the foundations of scientific education. We will discuss the paradigm shifts it brings to these foundations using the example of the Wolfram Science Summer School, one of the most advanced computational outreach programs run by the Wolfram Foundation, welcoming participants of almost all ages and backgrounds. Centered on complexity science and physics, it also covers numerous adjacent and interdisciplinary fields such as finance, biology, medicine and even music. We will talk about educational and research experiences in this program during the 12 years of its existence. We will review statistics and outputs the program has produced, among them interactive electronic publications at the Wolfram Demonstrations Project and contributions to the computational knowledge engine Wolfram|Alpha.

  12. Modern Teaching Methods in Physics with the Aid of Original Computer Codes and Graphical Representations

    ERIC Educational Resources Information Center

    Ivanov, Anisoara; Neacsu, Andrei

    2011-01-01

    This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and furthermore to…

  13. Preliminary design data package, appendices C1 and C2

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The HYBRID2 program, which computes the fuel and energy consumption of a hybrid vehicle with a bi-modal control strategy over specified component driving cycles, is described. Fuel and energy consumption are computed separately for the two modes of operation. The program also computes yearly average fuel and energy consumption using a composite driving cycle which varies as a function of daily travel. The modelling techniques are described, and subroutines and their functions are given. The composition of modern automobiles is discussed along with the energy required to manufacture an American automobile. The energy required to scrap and recycle automobiles is also discussed.

  14. Application of modern tools and techniques to maximize engineering productivity in the development of orbital operations plans for the Space Station Program

    NASA Technical Reports Server (NTRS)

    Manford, J. S.; Bennett, G. R.

    1985-01-01

    The Space Station Program will incorporate analysis of operations constraints and considerations in the early design phases to avoid the need for later modifications to the Space Station for operations. The application of modern tools and administrative techniques to minimize the cost of performing effective orbital operations planning and design analysis in the preliminary design phase of the Space Station Program is discussed. Tools and techniques discussed include: approach for rigorous analysis of operations functions, use of the resources of a large computer network, and providing for efficient research and access to information.

  15. The modern library: lost and found.

    PubMed Central

    Lindberg, D A

    1996-01-01

    The modern library, a term that was heard frequently in the mid-twentieth century, has fallen into disuse. The over-promotion of computers and all that their enthusiasts promised probably hastened its demise. Today, networking is transforming how libraries provide--and users seek--information. Although the Internet is the natural environment for the health sciences librarian, it is going through growing pains as we face issues of censorship and standards. Today's "modern librarian" must not only be adept at using the Internet but must become familiar with digital information in all its forms--images, full text, and factual data banks. Most important, to stay "modern," today's librarians must embark on a program of lifelong learning that will enable them to make optimum use of the advantages offered by modern technology. PMID:8938334

  16. Interactive Voice/Web Response System in clinical research

    PubMed Central

    Ruikar, Vrishabhsagar

    2016-01-01

    Emerging technologies in the computer and telecommunication industries have eased access to computers through the telephone. An Interactive Voice/Web Response System (IxRS) is a user-friendly system for end users, with complex, tailored programs at its backend. The backend programs are specially tailored for easy understanding by users. The clinical research industry has experienced a revolution in data capture methodologies over the past couple of decades, with different systems evolving toward emerging modern technologies and tools, for example, Electronic Data Capture, IxRS, electronic patient-reported outcomes, etc. PMID:26952178

  17. Interactive Voice/Web Response System in clinical research.

    PubMed

    Ruikar, Vrishabhsagar

    2016-01-01

    Emerging technologies in the computer and telecommunication industries have eased access to computers through the telephone. An Interactive Voice/Web Response System (IxRS) is a user-friendly system for end users, with complex, tailored programs at its backend. The backend programs are specially tailored for easy understanding by users. The clinical research industry has experienced a revolution in data capture methodologies over the past couple of decades, with different systems evolving toward emerging modern technologies and tools, for example, Electronic Data Capture, IxRS, electronic patient-reported outcomes, etc.

  18. Scalability and Validation of Big Data Bioinformatics Software.

    PubMed

    Yang, Andrian; Troup, Michael; Ho, Joshua W K

    2017-01-01

    This review examines two important aspects that are central to modern big data bioinformatics analysis - software scalability and validity. We argue that not only are the issues of scalability and validation common to all big data bioinformatics analyses, they can be tackled by conceptually related methodological approaches, namely divide-and-conquer (scalability) and multiple executions (validation). Scalability is defined as the ability for a program to scale based on workload. It has always been an important consideration when developing bioinformatics algorithms and programs. Nonetheless the surge of volume and variety of biological and biomedical data has posed new challenges. We discuss how modern cloud computing and big data programming frameworks such as MapReduce and Spark are being used to effectively implement divide-and-conquer in a distributed computing environment. Validation of software is another important issue in big data bioinformatics that is often ignored. Software validation is the process of determining whether the program under test fulfils the task for which it was designed. Determining the correctness of the computational output of big data bioinformatics software is especially difficult due to the large input space and complex algorithms involved. We discuss how state-of-the-art software testing techniques that are based on the idea of multiple executions, such as metamorphic testing, can be used to implement an effective bioinformatics quality assurance strategy. We hope this review will raise awareness of these critical issues in bioinformatics.
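
    A concrete (and deliberately tiny) illustration of a metamorphic test, assumed here rather than drawn from the review: instead of checking a program's output against a known answer, two executions are compared under an input transformation that should leave the output unchanged.

      #include <algorithm>
      #include <cassert>
      #include <cmath>
      #include <cstdio>
      #include <random>
      #include <vector>

      // Program under test: mean of a list (a stand-in for an analysis whose
      // exact expected output is hard to state in advance).
      double mean(const std::vector<double>& xs) {
          double s = 0.0;
          for (double x : xs) s += x;
          return s / static_cast<double>(xs.size());
      }

      // Metamorphic relation: permuting the input must not change the mean.
      int main() {
          std::vector<double> xs{3.0, 1.5, 7.2, 0.4, 9.9};
          const double before = mean(xs);
          std::shuffle(xs.begin(), xs.end(), std::mt19937{7});
          assert(std::fabs(mean(xs) - before) < 1e-12);
          std::printf("metamorphic relation holds: mean = %f\n", before);
      }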

  19. Propulsion system/flight control integration for supersonic aircraft

    NASA Technical Reports Server (NTRS)

    Reukauf, P. J.; Burcham, F. W., Jr.

    1976-01-01

    Digital integrated control systems are studied. Such systems allow minimization of undesirable interactions while maximizing performance at all flight conditions. One such program is the YF-12 cooperative control program. The existing analog air data computer, autothrottle, autopilot, and inlet control systems are converted to digital systems by using a general purpose airborne computer and interface unit. Existing control laws are programed and tested in flight. Integrated control laws, derived using accurate mathematical models of the airplane and propulsion system in conjunction with modern control techniques, are tested in flight. Analysis indicates that an integrated autothrottle autopilot gives good flight path control and that observers are used to replace failed sensors.

  20. A cross-sectional study of the effects of load carriage on running characteristics and tibial mechanical stress: implications for stress fracture injuries in women

    DTIC Science & Technology

    2017-03-23

    performance computing resources made available by the US Department of Defense High Performance Computing Modernization Program at the Air Force...Department of Defense Biotechnology High Performance Computing Software Applications Institute, Telemedicine and Advanced Technology Research Center, United...States Army Medical Research and Materiel Command, Fort Detrick, Maryland, USA Full list of author information is available at the end of the article

  1. Application of enhanced modern structured analysis techniques to Space Station Freedom electric power system requirements

    NASA Technical Reports Server (NTRS)

    Biernacki, John; Juhasz, John; Sadler, Gerald

    1991-01-01

    A team of Space Station Freedom (SSF) system engineers is performing an extensive analysis of the SSF requirements, particularly those pertaining to the electrical power system (EPS). The objective of this analysis is the development of a comprehensive, computer-based requirements model, using an enhanced modern structured analysis methodology (EMSA). Such a model provides a detailed and consistent representation of the system's requirements. The process outlined in the EMSA methodology is unique in that it allows the graphical modeling of real-time system state transitions, as well as functional requirements and data relationships, to be implemented using modern computer-based tools. These tools permit flexible updating and continuous maintenance of the models. Initial findings resulting from the application of EMSA to the EPS have benefited the space station program by linking requirements to design, providing traceability of requirements, identifying discrepancies, and fostering an understanding of the EPS.

  2. Enabling GEODSS for Space Situational Awareness (SSA)

    NASA Astrophysics Data System (ADS)

    Wootton, S.

    2016-09-01

    The Ground-Based Electro-Optical Deep Space Surveillance (GEODSS) System has been in operation since the mid-1980s. While GEODSS has been the Space Surveillance Network's (SSN's) workhorse in terms of deep space surveillance, it has not undergone a significant modernization since the 1990s. This means GEODSS continues to operate on a mostly obsolete, legacy data processing baseline. The System Program Office (SPO) responsible for GEODSS, SMC/SYGO, has a number of advanced Space Situational Awareness (SSA)-related efforts in progress, in the form of innovative optical capabilities, data processing algorithms, and hardware upgrades. Each of these efforts is in various stages of evaluation and acquisition. These advanced capabilities rely upon a modern computing environment in which to integrate, but GEODSS does not yet have one. The SPO is also executing a Service Life Extension Program (SLEP) to modernize the various subsystems within GEODSS, along with a parallel effort to implement a complete, modern software re-architecture. The goal is to use a modern, service-based architecture to provide expedient integration as well as easier and more sustainable expansion. This presentation will describe these modernization efforts in more detail and discuss how adopting such modern paradigms and practices will help ensure the GEODSS system remains relevant and sustainable far beyond 2027.

  3. Manycore Performance-Portability: Kokkos Multidimensional Array Library

    DOE PAGES

    Edwards, H. Carter; Sunderland, Daniel; Porter, Vicki; ...

    2012-01-01

    Large, complex scientific and engineering application codes have a significant investment in computational kernels that implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implement computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space, (2) data parallel kernels, and (3) multidimensional arrays. Kernel execution performance, especially on NVIDIA® devices, is extremely dependent on data access patterns. The optimal data access pattern can be different for different manycore devices, potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].
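
    A minimal sketch of the Kokkos approach, assuming a working Kokkos installation (the AXPY kernel itself is an invented example, not one from the paper): the View hides the device-specific data layout, and parallel_for maps the same kernel source onto whichever backend the library was built for.

```cpp
// Minimal Kokkos sketch (illustrative AXPY kernel, not from the paper):
// the View hides the device-specific data layout, and parallel_for maps
// the data-parallel kernel onto whatever backend Kokkos was built for.
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[]) {
    Kokkos::initialize(argc, argv);
    {
        const int n = 1 << 20;
        Kokkos::View<double*> x("x", n), y("y", n);  // device-resident arrays
        const double alpha = 2.0;

        Kokkos::parallel_for("init", n, KOKKOS_LAMBDA(const int i) {
            x(i) = 1.0;
            y(i) = 2.0;
        });
        Kokkos::parallel_for("axpy", n, KOKKOS_LAMBDA(const int i) {
            y(i) += alpha * x(i);  // same kernel source for CPU or GPU
        });
        Kokkos::fence();
    }
    Kokkos::finalize();
    return 0;
}
```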

  4. DMA Modern Programming Environment Study.

    DTIC Science & Technology

    1980-01-01

    capabilities. The centers are becoming increasingly dependent upon the computer and digital data in the fulfillment of MC&G goals. Successful application... Microprocessors by Herbert Altero, Digital Communications, Structured Design Workshop by Ned Chapin, Digital Systems Engineering... on a programming environment. The study, which resulted in production of a paper entitled An EXEC 8 Programming Support Library, contends that most of

  5. High Performance Object-Oriented Scientific Programming in Fortran 90

    NASA Technical Reports Server (NTRS)

    Norton, Charles D.; Decyk, Viktor K.; Szymanski, Boleslaw K.

    1997-01-01

    We illustrate how Fortran 90 supports object-oriented concepts, using the example of plasma particle computations on the IBM SP. Our experience shows that Fortran 90 and object-oriented methodology give high performance while providing a bridge from Fortran 77 legacy codes to modern programming principles. All of our object-oriented Fortran 90 codes execute more quickly than the equivalent C++ versions, yet the abstraction modelling capabilities used for scientific programming are comparably powerful.

  6. A programing system for research and applications in structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.

    1981-01-01

    The paper describes a computer programming system designed to be used for methodology research as well as applications in structural optimization. The flexibility necessary for such diverse utilizations is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production level structural analysis program, and user supplied and problem dependent interface programs. Standard utility capabilities existing in modern computer operating systems are used to integrate these programs. This approach results in flexibility of the optimization procedure organization and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: (1) variability of structural layout and overall shape geometry, (2) static strength and stiffness constraints, (3) local buckling failure, and (4) vibration constraints. The paper concludes with a review of the further development trends of this programming system.

  7. Code Modernization of VPIC

    NASA Astrophysics Data System (ADS)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite breakthroughs in the areas of mini-app development, performance portability, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  8. PT3 Papers. [SITE 2001 Section].

    ERIC Educational Resources Information Center

    Pierson, Melissa, Ed.; Thompson, Mary, Ed.; Adams, Angelle, Ed.; Beyer, Evelyn, Ed.; Cheriyan, Saru, Ed.; Starke, Leslie, Ed.

    This document contains the papers on the PT3 (Preparing Tomorrow's Teachers to use Technology) program from the SITE (Society for Information Technology & Teacher Education) 2001 conference. Topics covered include: modeling instruction with modern information and communications technology; transforming computer coursework for preservice teachers;…

  9. Applications of Parallel Process HiMAP for Large Scale Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Potsdam, Mark; Rodriguez, David; Kwak, Dochay (Technical Monitor)

    2000-01-01

    HiMAP is a three-level parallel middleware that can be interfaced to a large-scale global design environment for code-independent, multidisciplinary analysis using high fidelity equations. Aerospace technology needs are rapidly changing. Computational tools compatible with the requirements of national programs such as space transportation are needed. Conventional computational tools are inadequate for modern aerospace design needs. Advanced, modular computational tools are needed, such as those that incorporate the technology of massively parallel processors (MPP).

  10. [Analysis of single-photon emission computed tomography in patients with hypertensive encephalopathy complicated with previous hypertensive crisis].

    PubMed

    Kustkova, H S

    2012-01-01

    In cerebrovascular disease, perfusion single-photon emission computed tomography with lipophilic amines is used for the diagnosis of functional disorders of cerebral blood flow. Quantitative calculations help clarify the nature of the vascular disease and assess the adequacy and effectiveness of the treatment. Modern SPECT software supports not only relative blood flow calculations but also makes it possible to compute the absolute values of cerebral blood flow.

  11. Heterogeneous computing architecture for fast detection of SNP-SNP interactions.

    PubMed

    Sluga, Davor; Curk, Tomaz; Zupan, Blaz; Lotric, Uros

    2014-06-25

    The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, the MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their utility resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems.
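
    The exhaustive pairwise scan is inherently parallel, since each SNP pair can be scored independently. A conceptual CPU-side sketch of that structure (not SNPsyn's actual GPU/MIC kernels; the genotype data and scoring function are dummy stand-ins) might look like this with OpenMP:

```cpp
// Conceptual sketch of an exhaustive SNP-SNP interaction scan (a stand-in
// for the GPU/MIC kernels described above; the "score" is a dummy measure).
// Each of the n*(n-1)/2 pairs is independent, so the loop parallelizes well.
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int nSnps = 2000, nSamples = 500;
    // Dummy genotype matrix: genotype[s][i] in {0,1,2}.
    std::vector<std::vector<int>> genotype(nSnps, std::vector<int>(nSamples));
    for (int s = 0; s < nSnps; ++s)
        for (int i = 0; i < nSamples; ++i)
            genotype[s][i] = (s + i) % 3;

    double best = -1.0;
    int bestA = -1, bestB = -1;
#pragma omp parallel for schedule(dynamic)
    for (int a = 0; a < nSnps; ++a) {
        for (int b = a + 1; b < nSnps; ++b) {
            double score = 0.0;  // stand-in for an interaction statistic
            for (int i = 0; i < nSamples; ++i)
                score += genotype[a][i] * genotype[b][i];
#pragma omp critical
            if (score > best) { best = score; bestA = a; bestB = b; }
        }
    }
    std::printf("best pair: (%d, %d) score=%.1f\n", bestA, bestB, best);
    return 0;
}
```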

  12. Heterogeneous computing architecture for fast detection of SNP-SNP interactions

    PubMed Central

    2014-01-01

    Background The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, the MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. Results We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their utility resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. Conclusions General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems. PMID:24964802

  13. Analysis of Satellite Communications Antenna Patterns

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1985-01-01

    A computer program accurately and efficiently predicts far-field patterns of offset, or symmetric, parabolic reflector antennas. The antenna designer uses the program to study the effects of varying geometrical and electrical (RF) parameters of a parabolic reflector and its feed system. Accurate predictions of far-field patterns help the designer predict the overall performance of the antenna. These reflectors are used extensively in modern communications satellites and in multiple-beam and low side-lobe antenna systems.

  14. Development of Modern Vocational Objectives for Severely Disabled Homebound Persons: Remote Computer Programming, Microfilm Equipment Operations, and Data Entry Processes: A Final Report.

    ERIC Educational Resources Information Center

    Shworles, Thomas R.

    The project (1968-1973) was undertaken to demonstrate job and earning potential for competitive work of homebound and/or severely disabled persons who otherwise have not benefited from rehabilitation programs as they conventionally exist. In the lifetime of this project, three companies were formed: a non-emergency transportation service for the…

  15. Computer problem-solving coaches for introductory physics: Design and usability studies

    NASA Astrophysics Data System (ADS)

    Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew

    2016-06-01

    The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how effective such coaches might be, they will only be useful if they are attractive to students. We describe the design and testing of a set of web-based computer programs that act as personal coaches to students while they practice solving problems from introductory physics. The coaches are designed to supplement regular human instruction, giving students access to effective forms of practice outside class. We present results from large-scale usability tests of the computer coaches and discuss their implications for future versions of the coaches.

  16. Department of Defense High Performance Computing Modernization Program. 2007 Annual Report

    DTIC Science & Technology

    2008-03-01

    Directorate, Kirtland AFB, NM Applications of Time-Accurate CFD in Order to Account for Blade-Row Interactions and Distortion Transfer in the Design of...Patterson AFB, OH Direct Numerical Simulations of Active Control for Low-Pressure Turbine Blades Herman Fasel, University of Arizona, Tucson, AZ (Air Force...interactions with the rotor wake. These HI-ARMS computations compare favorably with available wind tunnel test measurements of surface and flowfield

  17. Turbulence Model Effects on Cold-Gas Lateral Jet Interaction in a Supersonic Crossflow

    DTIC Science & Technology

    2014-06-01

    performance computing time from the U.S. Department of Defense (DOD) High Performance Computing Modernization program at the U.S. Army Research Laboratory...thanks Dr. Ross Chaplin, Defence Science Technology Laboratory, United Kingdom (UK), and Dr. David MacManus and Robert Christie, Cranfield University, UK

  18. Applications of genetic programming in cancer research.

    PubMed

    Worzel, William P; Yu, Jianjun; Almal, Arpit A; Chinnaiyan, Arul M

    2009-02-01

    The theory of Darwinian evolution is a fundamental keystone of modern biology. Late in the last century, computer scientists began adapting its principles, in particular natural selection, to complex computational challenges, leading to the emergence of evolutionary algorithms. The conceptual model of selective pressure and recombination in evolutionary algorithms allows scientists to efficiently search high-dimensional spaces for solutions to complex problems. In the last decade, genetic programming has been developed and extensively applied for the analysis of molecular data to classify cancer subtypes and characterize the mechanisms of cancer pathogenesis and development. This article reviews current successes using genetic programming and discusses its potential impact on cancer research and treatment in the near future.

  19. In-vehicle Information Systems Behavioral Model and Design Support: IVIS Demand Prototype Software User’s Manual

    DOT National Transportation Integrated Search

    2000-03-06

    The purpose of this research was to develop a behavioral model and prototype computer program for evaluation of modern in-vehicle information systems (IVIS). These systems differ from earlier in-vehicle instruments and displays in that they may requi...

  20. In-Vehicle Information Systems Behavioral Model and Design Support: IVIS Demand Prototype Software User's Manual

    DOT National Transportation Integrated Search

    2000-03-06

    The purpose of this research was to develop a behavioral model and prototype computer program for evaluation of modern in-vehicle information systems (IVIS). These systems differ from earlier in-vehicle instruments and displays in that they may requi...

  1. Using the Principles of BIO2010 to Develop an Introductory, Interdisciplinary Course for Biology Students

    PubMed Central

    Adams, Peter; Goos, Merrilyn

    2010-01-01

    Modern biological sciences require practitioners to have increasing levels of knowledge, competence, and skills in mathematics and programming. A recent review of the science curriculum at the University of Queensland, a large, research-intensive institution in Australia, resulted in the development of a more quantitatively rigorous undergraduate program. Inspired by the National Research Council's BIO2010 report, a new interdisciplinary first-year course (SCIE1000) was created, incorporating mathematics and computer programming in the context of modern science. In this study, the perceptions of biological science students enrolled in SCIE1000 in 2008 and 2009 are measured. Analysis indicates that, as a result of taking SCIE1000, biological science students gained a positive appreciation of the importance of mathematics in their discipline. However, the data revealed that SCIE1000 did not contribute positively to gains in appreciation for computing and only slightly influenced students' motivation to enroll in upper-level quantitative-based courses. Further comparisons between 2008 and 2009 demonstrated the positive effect of using genuine, real-world contexts to enhance student perceptions toward the relevance of mathematics. The results support the recommendation from BIO2010 that mathematics should be introduced to biology students in first-year courses using real-world examples, while challenging the benefits of introducing programming in first-year courses. PMID:20810961

  2. Development of Modern Performance Assessment Tools and Capabilities for Underground Disposal of Transuranic Waste at WIPP

    NASA Astrophysics Data System (ADS)

    Zeitler, T.; Kirchner, T. B.; Hammond, G. E.; Park, H.

    2014-12-01

    The Waste Isolation Pilot Plant (WIPP) has been developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. Containment of TRU waste at the WIPP is regulated by the U.S. Environmental Protection Agency (EPA). The DOE demonstrates compliance with the containment requirements by means of performance assessment (PA) calculations. WIPP PA calculations estimate the probability and consequence of potential radionuclide releases from the repository to the accessible environment for a regulatory period of 10,000 years after facility closure. The long-term performance of the repository is assessed using a suite of sophisticated computational codes. In a broad modernization effort, the DOE has overseen the transfer of these codes to modern hardware and software platforms. Additionally, there is a current effort to establish new performance assessment capabilities through the further development of the PFLOTRAN software, a state-of-the-art massively parallel subsurface flow and reactive transport code. Improvements to the current computational environment will result in greater detail in the final models due to the parallelization afforded by the modern code. Parallelization will allow for relatively faster calculations, as well as a move from a two-dimensional calculation grid to a three-dimensional grid. The result of the modernization effort will be a state-of-the-art subsurface flow and transport capability that will serve WIPP PA into the future. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. This research is funded by WIPP programs administered by the Office of Environmental Management (EM) of the U.S Department of Energy.

  3. Effects of Turbulence Model on Prediction of Hot-Gas Lateral Jet Interaction in a Supersonic Crossflow

    DTIC Science & Technology

    2015-07-01

    performance computing time from the US Department of Defense (DOD) High Performance Computing Modernization program at the US Army Research Laboratory...dimensional, compressible, Reynolds-averaged Navier-Stokes (RANS) equations are solved using a finite volume method. A point-implicit time-integration

  4. QCDLoop: A comprehensive framework for one-loop scalar integrals

    NASA Astrophysics Data System (ADS)

    Carrazza, Stefano; Ellis, R. Keith; Zanderighi, Giulia

    2016-12-01

    We present a new release of the QCDLoop library based on a modern object-oriented framework. We discuss the available new features such as the extension to complex masses, the possibility to perform computations in double and quadruple precision simultaneously, and useful caching mechanisms to improve the computational speed. We benchmark the performance of the new library, and provide practical examples of phenomenological implementations by interfacing this new library to Monte Carlo programs.
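
    The caching mechanism mentioned above can be illustrated generically; the sketch below is not QCDLoop's actual interface (the key layout and the evaluator are invented), but it shows the memoization idea of serving repeated parameter combinations from memory.

```cpp
// Generic caching sketch of the idea mentioned above (not QCDLoop's API):
// results of an expensive scalar-integral-like evaluation are memoized,
// keyed on the input parameters, so repeated calls are served from memory.
#include <cmath>
#include <cstdio>
#include <map>
#include <utility>

double expensiveEvaluation(double mass, double scale) {
    return std::log(scale / mass) / mass;  // stand-in for a loop integral
}

double cachedEvaluation(double mass, double scale) {
    static std::map<std::pair<double, double>, double> cache;
    auto key = std::make_pair(mass, scale);
    auto it = cache.find(key);
    if (it != cache.end()) return it->second;   // cache hit
    double value = expensiveEvaluation(mass, scale);
    cache.emplace(key, value);                  // cache miss: store result
    return value;
}

int main() {
    std::printf("%f\n", cachedEvaluation(0.5, 91.2));
    std::printf("%f\n", cachedEvaluation(0.5, 91.2));  // served from cache
    return 0;
}
```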

  5. Breast tumor malignancy modelling using evolutionary neural logic networks.

    PubMed

    Tsakonas, Athanasios; Dounias, Georgios; Panagi, Georgia; Panourgias, Evangelia

    2006-01-01

    The present work proposes a computer-assisted methodology for the effective modelling of the diagnostic decision for breast tumor malignancy. The suggested approach is based on innovative hybrid computational intelligence algorithms properly applied to related cytological data contained in past medical records. The experimental data used in this study were gathered in the early 1990s at the University of Wisconsin, based on post-diagnostic cytological observations performed by expert medical staff. Data were properly encoded in a computer database and, accordingly, various alternative modelling techniques were applied to them in an attempt to form diagnostic models. Previous methods included standard optimisation techniques as well as artificial intelligence approaches, such that a variety of related publications exists in the modern literature on the subject. In this report, a hybrid computational intelligence approach is suggested, which effectively combines modern mathematical logic principles, neural computation, and genetic programming. The approach proves promising both in terms of diagnostic accuracy and generalization capabilities and in terms of comprehensibility and practical importance for the related medical staff.

  6. Resolutions of the Coulomb operator: VIII. Parallel implementation using the modern programming language X10.

    PubMed

    Limpanuparb, Taweetham; Milthorpe, Josh; Rendell, Alistair P

    2014-10-30

    Use of the modern parallel programming language X10 for computing long-range Coulomb and exchange interactions is presented. By using X10, a partitioned global address space language with support for task parallelism and the explicit representation of data locality, the resolution of the Ewald operator can be parallelized in a straightforward manner, including use of both intranode and internode parallelism. We evaluate four different schemes for dynamic load balancing of integral calculation using X10's work stealing runtime, and report performance results for long-range HF energy calculations of a large molecule/high-quality basis set running on up to 1024 cores of a high performance cluster machine. Copyright © 2014 Wiley Periodicals, Inc.
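
    X10's work-stealing runtime cannot be reproduced in a few lines, but the underlying dynamic load-balancing idea, idle workers claiming the next unit of integral work from a shared counter, can be sketched in C++; this is an illustrative analog, not the paper's X10 code.

```cpp
// Illustrative dynamic load balancing via a shared atomic work counter
// (a C++ stand-in for the dynamic schemes evaluated in the paper, not the
// X10 work-stealing runtime itself). Tasks vary in cost, so idle threads
// simply claim the next unclaimed task index.
#include <algorithm>
#include <atomic>
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int nTasks = 1000;
    std::atomic<int> next{0};
    std::atomic<long> done{0};
    auto worker = [&]() {
        for (;;) {
            int t = next.fetch_add(1);  // claim the next task
            if (t >= nTasks) break;
            double sum = 0.0;           // uneven, dummy "integral" work
            for (int i = 0; i < (t % 100 + 1) * 1000; ++i)
                sum += std::sin(i * 1e-3);
            (void)sum;
            done.fetch_add(1);
        }
    };
    unsigned nThreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < nThreads; ++i) pool.emplace_back(worker);
    for (auto& th : pool) th.join();
    std::printf("completed %ld tasks\n", done.load());
    return 0;
}
```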

  7. Robotics for Computer Scientists: What's the Big Idea?

    ERIC Educational Resources Information Center

    Touretzky, David S.

    2013-01-01

    Modern robots, like today's smartphones, are complex devices with intricate software systems. Introductory robot programming courses must evolve to reflect this reality, by teaching students to make use of the sophisticated tools their robots provide rather than reimplementing basic algorithms. This paper focuses on teaching with Tekkotsu, an open…

  8. Time-scheduled delivery of computer health animations: "Installing" healthy habits of computer use.

    PubMed

    Wang, Sy-Chyi; Chern, Jin-Yuan

    2013-06-01

    The development of modern technology brings convenience to our lives but removes physical activity from our daily routines, thereby putting our lives at risk. Extended computer use may contribute to symptoms such as visual impairment and musculoskeletal disorders. To help reduce the risk of physical inactivity and promote healthier computer use, this study developed a time-scheduled delivery of health-related animations for users sitting in front of computers for prolonged periods. In addition, we examined the effects that the program had on the computer-related health behavior intentions and actions of participants. Two waves of questionnaires were implemented for data collection before and after the intervention. The results showed that the animation program indeed had a positive effect on participants' healthy computer use actions in terms of taking breaks, body massages, and body stretches. It also helped to bridge the intention-action gap of the health behaviors. The development and evaluation were documented, and users' experiences/suggestions were discussed at the end.

  9. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    PubMed

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, and computer networks to help algorithm researchers test their ideas, demonstrate new findings, and teach algorithm design in the classroom. Within the broad applications of algorithm visualization, there remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effects in 3D environments. Using modern Java programming technologies, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.

  10. A programing system for research and applications in structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.

    1981-01-01

    The flexibility necessary for such diverse utilizations is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production level structural analysis program, and user supplied and problem dependent interface programs. Standard utility capabilities in modern computer operating systems are used to integrate these programs. This approach results in flexibility of the optimization procedure organization and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: variability of structural layout and overall shape geometry, static strength and stiffness constraints, local buckling failure, and vibration constraints.

  11. Accelerating scientific computations with mixed precision algorithms

    NASA Astrophysics Data System (ADS)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

    On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented. Program summary: Program title: ITER-REF. Catalogue identifier: AECO_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 7211. No. of bytes in distributed program, including test data, etc.: 41 862. Distribution format: tar.gz. Programming language: FORTRAN 77. Computer: desktop, server. Operating system: Unix/Linux. RAM: 512 Mbytes. Classification: 4.8. External routines: BLAS (optional). Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is in general used to improve numerical stability, resulting in a factorization PA=LU, where P is a permutation matrix. The solution for the system is achieved by first solving Ly=Pb (forward substitution) and then solving Ux=y (backward substitution). Due to round-off errors, the computed solution, x, carries a numerical error magnified by the condition number of the coefficient matrix A. In order to improve the computed solution, an iterative process can be applied, which produces a correction to the computed solution at each iteration, which then yields the method that is commonly known as the iterative refinement algorithm. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision. Running time: seconds/minutes.
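
    A compact sketch of the iterative refinement scheme summarized above, assuming a small dense, well-conditioned system: the factorization and correction solves run in single precision while residuals are accumulated in double precision. This is an illustration of the algorithm, not the distributed ITER-REF package.

```cpp
// Sketch of mixed precision iterative refinement (illustrative, not the
// ITER-REF package): solve in float, compute residuals and apply
// corrections in double, repeating until the residual is tiny.
#include <cmath>
#include <cstdio>
#include <vector>

// Solve A x = b in single precision by Gaussian elimination (no pivoting;
// fine for the diagonally dominant demo matrix below).
std::vector<float> solveSingle(std::vector<std::vector<float>> A,
                               std::vector<float> b) {
    const int n = (int)b.size();
    for (int k = 0; k < n; ++k)
        for (int i = k + 1; i < n; ++i) {
            float m = A[i][k] / A[k][k];
            for (int j = k; j < n; ++j) A[i][j] -= m * A[k][j];
            b[i] -= m * b[k];
        }
    std::vector<float> x(n);
    for (int i = n - 1; i >= 0; --i) {
        float s = b[i];
        for (int j = i + 1; j < n; ++j) s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }
    return x;
}

int main() {
    const int n = 3;
    std::vector<std::vector<double>> A = {{4, 1, 0}, {1, 4, 1}, {0, 1, 4}};
    std::vector<double> b = {1, 2, 3};
    std::vector<std::vector<float>> Af(n, std::vector<float>(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) Af[i][j] = (float)A[i][j];

    // Initial solve entirely in single precision.
    std::vector<float> xf = solveSingle(Af, std::vector<float>(b.begin(), b.end()));
    std::vector<double> x(xf.begin(), xf.end());

    for (int iter = 0; iter < 10; ++iter) {
        // Residual r = b - A x accumulated in double precision.
        std::vector<double> r(n);
        double norm = 0.0;
        for (int i = 0; i < n; ++i) {
            r[i] = b[i];
            for (int j = 0; j < n; ++j) r[i] -= A[i][j] * x[j];
            norm += r[i] * r[i];
        }
        std::printf("iter %d: ||r|| = %.3e\n", iter, std::sqrt(norm));
        if (std::sqrt(norm) < 1e-14) break;
        // Correction solved cheaply in single precision: A d = r.
        std::vector<float> d =
            solveSingle(Af, std::vector<float>(r.begin(), r.end()));
        for (int i = 0; i < n; ++i) x[i] += (double)d[i];
    }
    return 0;
}
```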

  12. FY16 NRL DoD High Performance Computing Modernization Program

    DTIC Science & Technology

    2017-09-15

    explored both wind and wave forcing in the numerical wave tank. The model uses high spatial and temporal resolution and a multi-phase formulation to...Results: The ADVED_NS code was used to predict the effect of the standoff distance between micron-diameter wires and flow frequency on the total...contours for a flow over 3D wire mesh. Figure 2 shows verifications comparing computed and theoretical drag forces for the flow over two cylinders in an

  13. Three-Dimensional Computer Simulation as an Important Competence Based Aspect of a Modern Mining Professional

    NASA Astrophysics Data System (ADS)

    Aksenova, Olesya; Pachkina, Anna

    2017-11-01

    The article deals with the need to transform the educational process to meet the requirements of the modern mining industry, including cooperative development of new educational programs and implementation of the educational process taking modern manufacturing technology into account. The paper argues for introducing into the training of mining professionals the study of three-dimensional models of the surface technological complex, ore reserves, and the underground digging complex, as well as the creation of these models in different graphic editors and work with the information analysis model obtained on the basis of these three-dimensional models. The paper covers the technological process of manless coal mining at the Polysaevskaya mine, controlled by information analysis models built on the basis of three-dimensional models of individual objects and of the technological process as a whole, which requires staff able to use programs for three-dimensional positioning of miners and equipment in a global frame of reference.

  14. A High School-Collegiate Outreach Program in Chemistry and Biology Delivering Modern Technology in a Mobile Van

    NASA Astrophysics Data System (ADS)

    Craney, Chris; Mazzeo, April; Lord, Kaye

    1996-07-01

    During the past five years the nation's concern for science education has expanded from a discussion about the future supply of Ph.D. scientists and its impact on the nation's scientific competitiveness to the broader consideration of the science education available to all students. Efforts to improve science education have led many authors to suggest greater collaboration between high school science teachers and their college/university colleagues. This article reviews the experience and outcomes of the Teachers + Occidental = Partnership in Science (TOPS) van program operating in the Los Angeles Metropolitan area. The program emphasizes an extensive ongoing staff development, responsiveness to teachers' concerns, technical and on-site support, and sustained interaction between participants and program staff. Access to modern technology, including computer-driven instruments and commercial data analysis software, coupled with increased teacher content knowledge has led to empowerment of teachers and changes in student interest in science. Results of student and teacher questionnaires are reviewed.

  15. The Effectiveness of Screencasts and Cognitive Tools as Scaffolding for Novice Object-Oriented Programmers

    ERIC Educational Resources Information Center

    Lee, Mark J. W.; Pradhan, Sunam; Dalgarno, Barney

    2008-01-01

    Modern information technology and computer science curricula employ a variety of graphical tools and development environments to facilitate student learning of introductory programming concepts and techniques. While the provision of interactive features and the use of visualization can enhance students' understanding and assist them in grasping…

  16. Biomolecules in the Computer: Jmol to the Rescue

    ERIC Educational Resources Information Center

    Herraez, Angel

    2006-01-01

    Jmol is free, open source software for interactive molecular visualization. Since it is written in the Java[TM] programming language, it is compatible with all major operating systems and, in the applet form, with most modern web browsers. This article summarizes Jmol development and features that make it a valid and promising replacement for…

  17. User’s guide to SNAP for ArcGIS®: ArcGIS interface for scheduling and network analysis program

    Treesearch

    Woodam Chung; Dennis Dykstra; Fred Bower; Stephen O’Brien; Richard Abt; John Sessions

    2012-01-01

    This document introduces computer software named SNAP for ArcGIS®, which has been developed to streamline scheduling and transportation planning for timber harvest areas. Using modern optimization techniques, it can be used to spatially schedule timber harvest with consideration of harvesting costs, multiple products, alternative...

  18. Computer aided reliability, availability, and safety modeling for fault-tolerant computer systems with commentary on the HARP program

    NASA Technical Reports Server (NTRS)

    Shooman, Martin L.

    1991-01-01

    Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer-aided design programs. In response to this need, NASA has developed a suite of computer-aided reliability modeling programs, beginning with CARE 3 and including a group of new programs such as HARP, HARP-PC, the Reliability Analysts Workbench (a combination of the model solvers SURE, STEM, and PAWS with the common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied, and how well the user can model systems using this program is investigated. One of the important objectives is to study how user-friendly this program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students who used HARP in two graduate courses are described. Some brief comparisons were made with the ARIES program, which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course, no answer can be any more accurate than the fidelity of the model; thus an Appendix is included which discusses modeling accuracy. A broad viewpoint is taken, and all problems which occurred in the use of HARP are discussed. Such problems include computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, accuracy problems, etc.

  19. Modernization of the NASA scientific and technical information program

    NASA Technical Reports Server (NTRS)

    Cotter, Gladys A.; Hunter, Judy F.; Ostergaard, K.

    1993-01-01

    The NASA Scientific and Technical Information Program utilizes a technology infrastructure assembled in the mid 1960s to late 1970s to process and disseminate its information products. When this infrastructure was developed it placed NASA as a leader in processing STI. The retrieval engine for the STI database was the first of its kind and was used as the basis for developing commercial, other U.S., and foreign government agency retrieval systems. Due to the combination of changes in user requirements and the tremendous increase in technological capabilities readily available in the marketplace, this infrastructure is no longer the most cost-effective or efficient methodology available. Consequently, the NASA STI Program is pursuing a modernization effort that applies new technology to current processes to provide near-term benefits to the user. In conjunction with this activity, we are developing a long-term modernization strategy designed to transition the Program to a multimedia, global 'library without walls.' Critical pieces of the long-term strategy include streamlining access to sources of STI by using advances in computer networking and graphical user interfaces; creating and disseminating technical information in various electronic media including optical disks, video, and full text; and establishing a Technology Focus Group to maintain a current awareness of emerging technology and to plan for the future.

  20. A simple modern correctness condition for a space-based high-performance multiprocessor

    NASA Technical Reports Server (NTRS)

    Probst, David K.; Li, Hon F.

    1992-01-01

    A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.
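
    The relaxed-consistency trade-off described above is visible in modern C++, where the programmer must state memory ordering explicitly; the illustrative producer/consumer handoff below (not the paper's formal correctness condition) shows the constrained style in which a release store pairs with an acquire load to publish data safely.

```cpp
// Illustrative producer/consumer handoff under a relaxed memory model
// (a modern C++ analog of the "constrained programming style" trade-off
// described above, not the paper's formal correctness condition).
#include <atomic>
#include <cstdio>
#include <thread>

int payload = 0;
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                  // plain write...
    ready.store(true, std::memory_order_release);  // ...published by release
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {}  // pairs with release
    // The acquire/release pair guarantees the payload write is visible here.
    std::printf("payload = %d\n", payload);
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
    return 0;
}
```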

  1. Advanced interdisciplinary undergraduate program: light engineering

    NASA Astrophysics Data System (ADS)

    Bakholdin, Alexey; Bougrov, Vladislav; Voznesenskaya, Anna; Ezhova, Kseniia

    2016-09-01

    The undergraduate educational program "Light Engineering" of an advanced level of studies is focused on the development of scientific learning outcomes and the training of professionals whose activities are in the interdisciplinary fields of Optical engineering and Technical physics. The program gives practical experience in the transmission, reception, storage, processing, and display of information using opto-electronic devices, automation of optical systems design, computer image modeling, and automated quality control and characterization of optical devices. The program is implemented in accordance with the Educational standards of the ITMO University. The specific features of the Program are practice- and problem-based learning, implemented by engaging students in research and projects and in internships at enterprises and at leading Russian and international research and educational centers. The modular structure of the Program and a significant proportion of variable disciplines provide the concept of individual learning for each student. Learning outcomes of the program's graduates include theoretical knowledge and skills in natural science and core professional disciplines, deep knowledge of modern computer technologies, research expertise, design skills, and optical and optoelectronic systems and devices.

  2. Remote control system for high-performance computer simulation of crystal growth by the PFC method

    NASA Astrophysics Data System (ADS)

    Pavlyuk, Evgeny; Starodumov, Ilya; Osipov, Sergei

    2017-04-01

    Modeling of the crystallization process by the phase field crystal (PFC) method is one of the important directions of modern computational materials science. In this paper, the practical side of the computer simulation of the crystallization process by the PFC method is investigated. To solve problems using this method, it is necessary to use high-performance computing clusters, data storage systems, and other complex and often expensive computer systems. Access to such resources is often limited, unstable, and accompanied by various administrative problems. In addition, the variety of software and settings across different computing clusters sometimes does not allow researchers to use unified program code. There is a need to adapt the program code for each configuration of the computer complex. The practical experience of the authors has shown that the creation of a special control system for computations, with the possibility of remote use, can greatly simplify the implementation of simulations and increase the productivity of scientific research. In the current paper we show the principal idea of such a system and justify its efficiency.

  3. FY16 NRL DoD High Performance Computing Modernization Program Annual Reports

    DTIC Science & Technology

    2017-09-15

    explored both wind and wave forcing in the numerical wave tank. The model uses high spatial and temporal resolution and a multi-phase formulation to...Results: The ADVED_NS code was used to predict the effect of the standoff distance between micron-diameter wires and flow frequency on the total...contours for a flow over 3D wire mesh. Figure 2 shows verifications comparing computed and theoretical drag forces for the flow over two cylinders in an

  4. Initial experience with a handheld device digital imaging and communications in medicine viewer: OsiriX mobile on the iPhone.

    PubMed

    Choudhri, Asim F; Radvany, Martin G

    2011-04-01

    Medical imaging is commonly used to diagnose many emergent conditions, as well as to plan treatment. Digital images can be reviewed on almost any computing platform. Modern mobile phones and handheld devices are portable computing platforms with robust software programming interfaces, powerful processors, and high-resolution displays. OsiriX mobile, a new Digital Imaging and Communications in Medicine viewing program, is available for the iPhone/iPod touch platform. This raises the possibility of mobile review of diagnostic medical images to expedite diagnosis and treatment planning using a commercial off-the-shelf solution, facilitating communication among radiologists and referring clinicians.

  5. Improvement and speed optimization of numerical tsunami modelling program using OpenMP technology

    NASA Astrophysics Data System (ADS)

    Chernov, A.; Zaytsev, A.; Yalciner, A.; Kurkin, A.

    2009-04-01

    Currently, the basic problem of tsunami modeling is the low speed of calculations, which is unacceptable for operational warning services. Existing algorithms for the numerical modeling of the hydrodynamic processes of tsunami waves were developed without taking advantage of the capabilities of modern computer facilities. Considerable acceleration of the calculations can be achieved by using parallel algorithms. We discuss here a new approach to parallelizing a tsunami modeling code using OpenMP technology (for multiprocessor systems with shared memory). Nowadays, multiprocessor systems are easily accessible to everyone. The cost of using such systems is much lower than the cost of clusters. This also allows programmers to apply multithreaded algorithms on researchers' desktop computers. Another important advantage of this approach is the shared-memory mechanism: there is no need to send data over slow networks (for example, Ethernet). All memory is common to all computing threads, which yields almost linear scalability of the program. In the new version of NAMI DANCE, OpenMP technology and the multithreaded algorithm provide an 80% gain in speed in comparison with the single-threaded version on a dual-processor unit, and a 320% gain was attained on a four-core processor unit. Thus, it was possible to considerably reduce calculation times on scientific workstations (desktops) without a complete change of the program and user interfaces. Further modernization of the algorithms for preparing initial data and processing results using OpenMP looks reasonable. The final version of NAMI DANCE with the increased computational speed can be used not only for research purposes but also in real-time tsunami warning systems.
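
    A minimal sketch of the shared-memory parallelization described here, assuming a simple explicit update over the computational grid (illustrative only, not the actual NAMI DANCE code): one OpenMP directive distributes grid rows across threads that all share the same arrays.

```cpp
// Minimal OpenMP sketch of the shared-memory parallelization described
// above (illustrative; not the actual NAMI DANCE code). Each thread updates
// a band of grid rows; all threads share the same arrays, so no network
// communication is needed.
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int nx = 2000, ny = 2000;
    std::vector<double> h(nx * ny, 1.0), hNew(nx * ny, 0.0);

#pragma omp parallel for
    for (int i = 1; i < nx - 1; ++i) {
        for (int j = 1; j < ny - 1; ++j) {
            // Dummy 5-point smoothing step standing in for the wave update.
            hNew[i * ny + j] = 0.25 * (h[(i - 1) * ny + j] + h[(i + 1) * ny + j] +
                                       h[i * ny + j - 1] + h[i * ny + j + 1]);
        }
    }
    std::printf("threads available: %d\n", omp_get_max_threads());
    return 0;
}
```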

  6. Designing and Implementing a Computational Methods Course for Upper-level Undergraduates and Postgraduates in Atmospheric and Oceanic Sciences

    NASA Astrophysics Data System (ADS)

    Nelson, E.; L'Ecuyer, T. S.; Douglas, A.; Hansen, Z.

    2017-12-01

    In the modern computing age, scientists must utilize a wide variety of skills to carry out scientific research. Programming, including a focus on collaborative development, has become more prevalent in both academic and professional career paths. Faculty in the Department of Atmospheric and Oceanic Sciences at the University of Wisconsin—Madison recognized this need and recently approved a new course offering for undergraduates and postgraduates in computational methods that was first held in Spring 2017. Three programming languages were covered in the inaugural course semester and development themes such as modularization, data wrangling, and conceptual code models were woven into all of the sections. In this presentation, we will share successes and challenges in developing a research project-focused computational course that leverages hands-on computer laboratory learning and open-sourced course content. Improvements and changes in future iterations of the course based on the first offering will also be discussed.

  7. Extreme Scale Computing to Secure the Nation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, D L; McGraw, J R; Johnson, J R

    2009-11-10

    Since the dawn of modern electronic computing in the mid-1940s, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the cold war, national security requirements continued to drive the development of high performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design, numerical weather prediction, as well as data-intensive applications such as cryptography and cybersecurity. U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S., and in fact, events following the end of the cold war have driven an increase in the growth rate of computer performance at the high end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance and safety of U.S. nuclear weapons. In response to this change, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program in response to the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'. This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program (sidebar: 'History of ASC Computing Program Computing Capability'). The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While an impressive accomplishment, continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today.
In particular, continued observance and potential Senate ratification of the Comprehensive Test Ban Treaty (CTBT), together with the U.S. administration's promise of a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile, all demand increasing refinement of our computational simulation capabilities. Assessment of the present and future stockpile with increased confidence in its safety and reliability, without reliance upon calibration with past or future test data, is a long-term goal of the ASC program. This will be accomplished through significant increases in the scientific bases that underlie the computational tools. Computer codes must be developed that replace phenomenology with increased levels of scientific understanding, together with an accompanying quantification of uncertainty. These advanced codes will place significantly higher demands on the computing infrastructure than do the current 3D ASC codes. This article discusses not only the need for a future computing capability at the exascale for the SBSS program, but also considers high-performance computing requirements for broader national security questions. For example, the increasing concern over potential nuclear terrorist threats demands a capability to assess threats and potential disablement technologies, as well as a rapid forensic capability for determining a nuclear weapon's design from post-detonation evidence (nuclear counterterrorism).

  8. An Innovative Plant Genomics and Gene Annotation Program for High School, Community College, and University Faculty

    ERIC Educational Resources Information Center

    Hacisalihoglu, Gokhan; Hilgert, Uwe; Nash, E. Bruce; Micklos, David A.

    2008-01-01

    Today's biology educators face the challenge of training their students in modern molecular biology techniques including genomics and bioinformatics. The Dolan DNA Learning Center (DNALC) of Cold Spring Harbor Laboratory has developed and disseminated a bench- and computer-based plant genomics curriculum for biology faculty. In 2007, a five-day…

  9. Research Experiences for 14-Year-Olds: preliminary report on the 'Sky Explorer' pilot program at Springfield (MA) High School of Science and Technology

    NASA Astrophysics Data System (ADS)

    Tucker, G. E.

    1997-05-01

    This NSF-supported program, emphasizing hands-on learning and observation with modern instruments, is described in its pilot phase, prior to being launched nationally. A group of 14-year-old students are using a small (21 cm) computer-controlled telescope and CCD camera to: (1) conduct a 'sky survey' of brighter celestial objects, finding, identifying, and learning about them, and accumulating a portfolio of images; (2) perform photometry of variable stars, reducing the data to obtain light curves; and (3) learn modern computer-based communication/dissemination skills by posting images and data to a Web site they are designing (http://www.javanet.com/ sky) and contributing data to archives (e.g., AAVSO) via the Internet. To attract more interest to astronomy and science in general and have a wider impact on the school and surrounding community, peer teaching is used as a pedagogical technique and families are encouraged to participate. Students teach astronomy, software and computers, the Internet, instrumentation, and observing to other students, parents, and the community by means of daytime presentations of their results (images and data) and evening public viewing at the telescope, operating the equipment themselves. Students can contribute scientifically significant data and experience the 'discovery' aspect of science through observing projects where a measurement is made. Their 'informal education' activities also help improve the perception of science in general and astronomy in particular in society at large. This program could benefit from collaboration with astronomers wanting to organize geographically distributed observing campaigns coordinated over the Internet and willing to advise on promising observational programs for small telescopes in the context of current science.

  10. Stochastic Feedforward Control Technique

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim

    1990-01-01

    Class of commanded trajectories modeled as stochastic process. Advanced Transport Operating Systems (ATOPS) research and development program conducted by NASA Langley Research Center aimed at developing capabilities for increased airport capacity; safe and accurate flight in adverse weather conditions, including wind shear; avoidance of wake vortexes; and reduced fuel consumption. Advances in techniques for design of modern controls and increased capabilities of digital flight computers coupled with accurate guidance information from Microwave Landing System (MLS). Stochastic feedforward control technique developed within context of ATOPS program.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solis, John Hector

    In this paper, we present a modular framework for constructing a secure and efficient program obfuscation scheme. Our approach, inspired by the obfuscation-with-respect-to-oracle-machines model of [4], retains an interactive online protocol with an oracle, but relaxes the original computational and storage restrictions. We argue this is reasonable given the computational resources of modern personal devices. Furthermore, we relax the information-theoretic security requirement to computational security in order to utilize established cryptographic primitives. With this additional flexibility we are free to explore different cryptographic building blocks. Our approach combines authenticated encryption with private information retrieval to construct a secure program obfuscation framework. We give a formal specification of our framework, based on desired functionality and security properties, and provide an example instantiation. In particular, we implement AES in Galois/Counter Mode for authenticated encryption and the Gentry-Ramzan [13] constant communication-rate private information retrieval scheme. We present our implementation results and show that non-trivial sized programs can be realized, but scalability is quickly limited by computational overhead. Finally, we include a discussion on security considerations when instantiating specific modules.
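
    As a concrete flavor of the authenticated-encryption building block named above, here is a minimal sketch using AES-GCM from the Python cryptography package; it illustrates the primitive only and is not the authors' framework or their PIR component:

        # Minimal AES-GCM sketch (Python "cryptography" package).
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=128)  # random 128-bit key
        aesgcm = AESGCM(key)
        nonce = os.urandom(12)                     # 96-bit nonce; never reuse per key
        aad = b"block-id=7"                        # authenticated, not encrypted
        ct = aesgcm.encrypt(nonce, b"program block", aad)
        pt = aesgcm.decrypt(nonce, ct, aad)        # raises InvalidTag on tampering
        assert pt == b"program block"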

  12. Decision making and problem solving with computer assistance

    NASA Technical Reports Server (NTRS)

    Kraiss, F.

    1980-01-01

    In modern guidance and control systems, the human as manager, supervisor, decision maker, problem solver, and troubleshooter often has to cope with a marginal mental workload. To improve this situation, computers should be used to relieve the operator of mental stress. This should not be done solely by increased automation, but by a reasonable sharing of tasks in a human-computer team, where the computer supports human intelligence. Recent developments in this area are summarized. It is shown that interactive support of the operator by an intelligent computer is feasible during information evaluation, decision making, and problem solving. The artificial intelligence algorithms applied comprise pattern recognition and classification, adaptation and machine learning, as well as dynamic and heuristic programming. Elementary examples are presented to explain basic principles.

  13. Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 1: Army fault tolerant architecture overview

    NASA Technical Reports Server (NTRS)

    Harper, R. E.; Alger, L. S.; Babikyan, C. A.; Butler, B. P.; Friend, S. A.; Ganska, R. J.; Lala, J. H.; Masotto, T. K.; Meyer, A. J.; Morton, D. P.

    1992-01-01

    Digital computing systems needed for Army programs such as the Computer-Aided Low Altitude Helicopter Flight Program and the Armored Systems Modernization (ASM) vehicles may be characterized by high computational throughput and input/output bandwidth, hard real-time response, high reliability and availability, and maintainability, testability, and producibility requirements. In addition, such a system should be affordable to produce, procure, maintain, and upgrade. To address these needs, the Army Fault Tolerant Architecture (AFTA) is being designed and constructed under a three-year program comprised of a conceptual study, detailed design and fabrication, and demonstration and validation phases. Described here are the results of the conceptual study phase of the AFTA development. Given here is an introduction to the AFTA program, its objectives, and key elements of its technical approach. A format is designed for representing mission requirements in a manner suitable for first order AFTA sizing and analysis, followed by a discussion of the current state of mission requirements acquisition for the targeted Army missions. An overview is given of AFTA's architectural theory of operation.

  14. Computer programming for generating visual stimuli.

    PubMed

    Bukhari, Farhan; Kurylo, Daniel D

    2008-02-01

    Critical to vision research is the generation of visual displays with precise control over stimulus metrics. Generating stimuli often requires adapting commercial software or developing specialized software for specific research applications. In order to facilitate this process, we give here an overview that allows nonexpert users to generate and customize stimuli for vision research. We first give a review of relevant hardware and software considerations, to allow the selection of display hardware, operating system, programming language, and graphics packages most appropriate for specific research applications. We then describe the framework of a generic computer program that can be adapted for use with a broad range of experimental applications. Stimuli are generated in the context of trial events, allowing the display of text messages, the monitoring of subject responses and reaction times, and the inclusion of contingency algorithms. This approach allows direct control and management of computer-generated visual stimuli while utilizing the full capabilities of modern hardware and software systems. The flowchart and source code for the stimulus-generating program may be downloaded from www.psychonomic.org/archive.
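
    The trial structure described above can be sketched in a few lines; the following console-based stand-in (hypothetical, not the authors' downloadable program) records a reaction time per trial, while a real vision experiment would use a graphics package with precise frame timing:

        # Toy trial loop: show a stimulus, wait for a response, record RT.
        import random
        import time

        def run_trial(stimulus):
            input("Press Enter when ready...")
            time.sleep(random.uniform(0.5, 1.5))  # random foreperiod
            t0 = time.monotonic()
            input("Stimulus: %s  (press Enter as fast as you can)" % stimulus)
            return time.monotonic() - t0          # reaction time in seconds

        times = [run_trial(s) for s in ["X", "O", "X"]]
        print("mean RT: %.3f s" % (sum(times) / len(times)))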

  15. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable across architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations, using a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An OpenMP implementation on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to parallelizing 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
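
    For a sense of the kind of kernel being accelerated, here is a hedged NumPy sketch of a second-order finite-difference update for a 2D wave equation (the paper's cardiac action-potential model is more involved, and the parameters below are illustrative):

        # One explicit time-stepping loop for a 2D wave equation.
        import numpy as np

        n, c, dx, dt = 128, 1.0, 1.0, 0.5       # grid size, wave speed, spacings
        u_prev = np.zeros((n, n))
        u = np.zeros((n, n))
        u[n // 2, n // 2] = 1.0                 # point disturbance

        for _ in range(100):
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
            u_next = 2.0 * u - u_prev + (c * dt)**2 * lap
            u_prev, u = u, u_next

        print("energy proxy:", float((u ** 2).sum()))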

  16. Users manual for the Variable Dimension Automatic Synthesis Program (VASP)

    NASA Technical Reports Server (NTRS)

    White, J. S.; Lee, H. Q.

    1971-01-01

    A dictionary and some example problems for the Variable Dimension Automatic Synthesis Program (VASP) are presented. The dictionary contains a description of each subroutine and instructions on its use. The example problems give the user a better perspective on the use of VASP for solving problems in modern control theory. These example problems include dynamic response, optimal control gain, solution of the sampled-data matrix Riccati equation, matrix decomposition, and the pseudoinverse of a matrix. Listings of all subroutines are also included. The VASP program has been adapted to run in conversational mode on the Ames 360/67 computer.
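
    Two of the example computations named above can be reproduced with modern tools; the following hedged SciPy/NumPy sketch (illustrative values, not VASP itself) solves a discrete matrix Riccati equation, forms the optimal feedback gain, and takes a pseudoinverse:

        # Sampled-data Riccati solution, LQR gain, and pseudoinverse.
        import numpy as np
        from scipy.linalg import solve_discrete_are

        A = np.array([[1.0, 0.1], [0.0, 1.0]])  # sampled double integrator
        B = np.array([[0.005], [0.1]])
        Q = np.eye(2)                           # state cost
        R = np.array([[1.0]])                   # control cost

        P = solve_discrete_are(A, B, Q, R)      # steady-state Riccati solution
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
        print("gain K:", K)
        print("pinv(B):", np.linalg.pinv(B))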

  17. How do particle physicists learn the programming concepts they need?

    NASA Astrophysics Data System (ADS)

    Kluth, S.; Pia, M. G.; Schoerner-Sadenius, T.; Steinbach, P.

    2015-12-01

    The ability to read, use, and develop code efficiently and successfully is a key ingredient in modern particle physics. We report the experience of a training program, "Advanced Programming Concepts", that introduces software concepts, methods, and techniques needed to work effectively on a daily basis in a HEP experiment or other programming-intensive fields. This paper illustrates the principles, motivations, and methods that shape the "Advanced Programming Concepts" training program, the knowledge base that it conveys, an analysis of the feedback received so far, and the integration of these concepts into the software development process of the experiments, as well as its applicability to a wider audience.

  18. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment, based on the results of heavy-ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
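
    The Monte Carlo idea itself is compact; this hedged toy (invented distribution and threshold, none of PROPSET's nuclear physics) estimates an upset probability by sampling deposited charge per strike and counting exceedances of a critical charge:

        # Toy Monte Carlo rate estimate with a statistical error bar.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 1_000_000                                # simulated proton strikes
        q_crit = 5.0                                 # assumed critical charge (arb.)
        q_dep = rng.exponential(scale=1.0, size=n)   # assumed deposition spectrum

        upsets = np.count_nonzero(q_dep > q_crit)
        p = upsets / n
        print("upset prob: %.2e +/- %.1e" % (p, np.sqrt(p * (1 - p) / n)))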

  19. Computational and analytical methods in nonlinear fluid dynamics

    NASA Astrophysics Data System (ADS)

    Walker, James

    1993-09-01

    The central focus of the program was the application and development of modern analytical and computational methods for the solution of nonlinear problems in fluid dynamics and reactive gas dynamics. The research was carried out within the Division of Engineering Mathematics in the Department of Mechanical Engineering and Mechanics and principally involved Professors P.A. Blythe, E. Varley, and J.D.A. Walker. In addition, the program involved various international collaborations. Professor Blythe completed work on reactive gas dynamics with Professor D. Crighton FRS of Cambridge University in the United Kingdom. Professor Walker and his students carried out joint work with Professor F.T. Smith of University College London on various problems in unsteady flow and turbulent boundary layers.

  20. Using Words Instead of Jumbled Characters as Stimuli in Keyboard Training Facilitates Fluent Performance

    ERIC Educational Resources Information Center

    DeFulio, Anthony; Crone-Todd, Darlene E.; Long, Lauren V.; Nuzzo, Paul A.; Silverman, Kenneth

    2011-01-01

    Keyboarding skill is an important target for adult education programs due to the ubiquity of computers in modern work environments. A previous study showed that novice typists learned key locations quickly but that fluency took a relatively long time to develop. In the present study, novice typists achieved fluent performance in nearly half the…

  1. Telescience workstation

    NASA Technical Reports Server (NTRS)

    Brown, Robert L.; Doyle, Dee; Haines, Richard F.; Slocum, Michael

    1989-01-01

    As part of the Telescience Testbed Pilot Program, the Universities Space Research Association/Research Institute for Advanced Computer Science (USRA/RIACS) proposed to support remote communication by providing a network of human/machine interfaces, computer resources, and experimental equipment which allows remote science, collaboration, technical exchange, and multimedia communication. The telescience workstation is intended to provide a local computing environment for telescience. The purposes of the program are as follows: (1) to provide a suitable environment to integrate existing and new software for a telescience workstation; (2) to provide a suitable environment to develop new software in support of telescience activities; (3) to provide an interoperable environment so that a wide variety of workstations may be used in the telescience program; (4) to provide a supportive infrastructure and a common software base; and (5) to advance, apply, and evaluate the telescience technology base. A prototype telescience computing environment designed to bring practicing scientists in domains other than computer science into a modern style of doing their computing was created and deployed. This environment, the Telescience Windowing Environment, Phase 1 (TeleWEn-1), met some, but not all, of the goals stated above. TeleWEn-1 provided a window-based workstation environment and a set of tools for text editing, document preparation, electronic mail, multimedia mail, raster manipulation, and system management.

  2. Overview of computational structural methods for modern military aircraft

    NASA Technical Reports Server (NTRS)

    Kudva, J. N.

    1992-01-01

    Computational structural methods are essential for designing modern military aircraft. This briefing deals with computational structural methods (CSM) currently used. First a brief summary of modern day aircraft structural design procedures is presented. Following this, several ongoing CSM related projects at Northrop are discussed. Finally, shortcomings in this area, future requirements, and summary remarks are given.

  3. An implementation of the programming structural synthesis system (PROSSS)

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.; Bhat, R. B.

    1981-01-01

    A particular implementation of the programming structural synthesis system (PROSSS) is described. This software system combines a state-of-the-art optimization program, a production-level structural analysis program, and user-supplied, problem-dependent interface programs. These programs are combined using standard command language features of modern computer operating systems. PROSSS is explained in general terms with respect to this implementation, along with the steps for preparing the programs and input data. Each component of the system is described in detail with annotated listings for clarification. The components include options, procedures, programs and subroutines, and data files as they pertain to this implementation. An example exercising each option in this implementation is presented to allow the user to anticipate the type of results that might be expected.

  4. Computer software.

    PubMed

    Rosenthal, L E

    1986-10-01

    Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of type, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.

  5. HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales

    DOE PAGES

    Riccardi, Demian M.; Parks, Jerry M.; Johs, Alexander; ...

    2015-03-20

    HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. We tested the core; it is well-documented and easy to install across computational platforms. Our goal for the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.

  6. HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales.

    PubMed

    Riccardi, Demian; Parks, Jerry M; Johs, Alexander; Smith, Jeremy C

    2015-04-27

    HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. The core is well-tested, well-documented, and easy to install across computational platforms. The goal of the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.

  7. HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riccardi, Demian M.; Parks, Jerry M.; Johs, Alexander

    HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. We tested the core; it is well-documented and easy to install across computational platforms. Our goal for the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.
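
    The code-reference idea generalizes beyond Perl; here is a hedged Python analogue of an abstract calculator parameterized by callables (the "dock" command and its output format are made up for illustration):

        # Calculator parameterized by callables, wrapping an external program.
        import subprocess
        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class Calculator:
            build_cmd: Callable[[str], List[str]]  # input file -> command line
            parse_out: Callable[[str], float]      # stdout -> result

            def run(self, infile):
                out = subprocess.run(self.build_cmd(infile),
                                     capture_output=True, text=True, check=True)
                return self.parse_out(out.stdout)

        calc = Calculator(build_cmd=lambda f: ["dock", "--in", f],
                          parse_out=lambda s: float(s.split()[-1]))
        # score = calc.run("ligand.pdbqt")  # would invoke the external program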

  8. Stopping computer crimes

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Two new books about intrusions and computer viruses remind us that attacks against our computers on networks are the actions of human beings. Cliff Stoll's book about the hacker who spent a year, beginning in Aug. 1986, attempting to use the Lawrence Berkeley Computer as a stepping-stone for access to military secrets is a spy thriller that illustrates the weaknesses of our password systems and the difficulties in compiling evidence against a hacker engaged in espionage. Pamela Kane's book about viruses that attack IBM PCs shows that viruses are the modern version of the old problem of a Trojan horse attack. It discusses the most famous viruses and their countermeasures, and it comes with a floppy disk of utility programs that will disinfect your PC and thwart future attacks.

  9. Computational chemistry

    NASA Technical Reports Server (NTRS)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has application in studying catalysis and the properties of polymers, topics of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  10. Safety Engineering and Protective Technology in Support of Army Modernization Programs--Picatinny Arsenal Papers Presented at the 16th Annual Explosive Safety Seminar

    DTIC Science & Technology

    1975-08-01

    beams with diagonal bracing. The siding and roofing were constructed of corrugated aluminum panels connected to the girts and purlins by 1/4-inch... (Fig 8). Also contained in this report are static and dynamic properties of steel columns and beams, as well as recommended types of steels and... both the beams and the columns. 3. The interaction between axial loads and displacements. The computer program input data includes the modulus of

  11. Representation-Independent Iteration of Sparse Data Arrays

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A method is defined for iterating over massively large arrays containing sparse data in a way that is independent of how the contents of the sparse arrays are laid out in memory. What is unique and important here is the decoupling of the iteration over the sparse set of array elements from their internal memory representation, which makes the approach backward compatible with existing schemes for representing sparse arrays as well as with new ones. A functional interface is defined for implementing sparse arrays in any modern programming language, with a particular focus on the Chapel programming language. Examples are provided that show the translation of a loop computing a matrix-vector product into this representation for both the distributed and non-distributed cases. This work is directly applicable to NASA and its High Productivity Computing Systems (HPCS) program, in which JPL and our current program are engaged. The goal of that program is to create powerful, scalable, and economically viable high-performance computer systems suitable for use in national security and industry by 2010. This is important to NASA for its computationally intensive requirements for analyzing and understanding the volumes of science data from returned missions.
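
    The decoupling can be made concrete with a small interface; this hedged Python stand-in (Chapel is the paper's target language) runs the same dot-product loop over two different sparse layouts:

        # Representation-independent iteration over sparse vectors.
        from typing import Iterator, Protocol, Tuple

        class SparseVec(Protocol):
            def items(self) -> Iterator[Tuple[int, float]]: ...

        class DokVec:                            # dictionary-of-keys layout
            def __init__(self, d): self.d = d
            def items(self): return iter(self.d.items())

        class CooVec:                            # coordinate layout (two arrays)
            def __init__(self, idx, val): self.idx, self.val = idx, val
            def items(self): return zip(self.idx, self.val)

        def dot(sparse, dense):
            # only the interface is used; the memory layout is invisible here
            return sum(v * dense[i] for i, v in sparse.items())

        dense = [1.0, 2.0, 3.0, 4.0]
        print(dot(DokVec({1: 5.0, 3: 1.0}), dense))    # 14.0
        print(dot(CooVec([1, 3], [5.0, 1.0]), dense))  # 14.0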

  12. FY07 NRL DoD High Performance Computing Modernization Program Annual Reports

    DTIC Science & Technology

    2008-09-05

    performed. Implicit and explicit solution methods are used as appropriate. The primary finite element codes used are ABAQUS and ANSYS. User subroutines ...geometric complexities, loading path dependence, rate dependence, and interaction between loading types (electrical, thermal and mechanical). Work is not...are used for specialized material constitutive response. Coupled material responses, such as electrical-thermal for capacitor materials or electrical

  13. Department of Defense High Performance Computing Modernization Program. 2006 Annual Report

    DTIC Science & Technology

    2007-03-01

    Department. We successfully completed several software development projects that introduced parallel, scalable production software now in use across the...imagined. They are developing and deploying weather and ocean models that allow our soldiers, sailors, marines and airmen to plan missions more effectively...and to navigate adverse environments safely. They are modeling molecular interactions leading to the development of higher energy fuels, munitions

  14. An Educational Exercise Examining the Role of Model Attributes on the Creation and Alteration of CAD Models

    ERIC Educational Resources Information Center

    Johnson, Michael D.; Diwakaran, Ram Prasad

    2011-01-01

    Computer-aided design (CAD) is a ubiquitous tool that today's students will be expected to use proficiently for numerous engineering purposes. Taking full advantage of the features available in modern CAD programs requires that models are created in a manner that allows others to easily understand how they are organized and alter them in an…

  15. Language-Agnostic Reproducible Data Analysis Using Literate Programming.

    PubMed

    Vassilev, Boris; Louhimo, Riku; Ikonen, Elina; Hautaniemi, Sampsa

    2016-01-01

    A modern biomedical research project can easily contain hundreds of analysis steps, and lack of reproducibility of the analyses has been recognized as a severe issue. While thorough documentation enables reproducibility, the number of analysis programs used can be so large that in practice reproducibility cannot easily be achieved. Literate programming is an approach to presenting computer programs to human readers. The code is rearranged to follow the logic of the program, and that logic is explained in a natural language; the code executed by the computer is extracted from the literate source code. As such, literate programming is an ideal formalism for systematizing analysis steps in biomedical research. We have developed the reproducible computing tool Lir (literate, reproducible computing) that allows a tool-agnostic approach to biomedical data analysis. We demonstrate the utility of Lir by applying it to a case study. Our aim was to investigate the role of endosomal trafficking regulators in the progression of breast cancer. In this analysis, a variety of tools were combined to interpret the available data: a relational database, standard command-line tools, and a statistical computing environment. The analysis revealed that the lipid-transport-related genes LAPTM4B and NDRG1 are coamplified in breast cancer patients, and identified genes potentially cooperating with LAPTM4B in breast cancer progression. Our case study demonstrates that with Lir, an array of tools can be combined in the same data analysis to improve efficiency, reproducibility, and ease of understanding. Lir is an open-source software available at github.com/borisvassilev/lir.

  16. Language-Agnostic Reproducible Data Analysis Using Literate Programming

    PubMed Central

    Vassilev, Boris; Louhimo, Riku; Ikonen, Elina; Hautaniemi, Sampsa

    2016-01-01

    A modern biomedical research project can easily contain hundreds of analysis steps, and lack of reproducibility of the analyses has been recognized as a severe issue. While thorough documentation enables reproducibility, the number of analysis programs used can be so large that in practice reproducibility cannot easily be achieved. Literate programming is an approach to presenting computer programs to human readers. The code is rearranged to follow the logic of the program, and that logic is explained in a natural language; the code executed by the computer is extracted from the literate source code. As such, literate programming is an ideal formalism for systematizing analysis steps in biomedical research. We have developed the reproducible computing tool Lir (literate, reproducible computing) that allows a tool-agnostic approach to biomedical data analysis. We demonstrate the utility of Lir by applying it to a case study. Our aim was to investigate the role of endosomal trafficking regulators in the progression of breast cancer. In this analysis, a variety of tools were combined to interpret the available data: a relational database, standard command-line tools, and a statistical computing environment. The analysis revealed that the lipid-transport-related genes LAPTM4B and NDRG1 are coamplified in breast cancer patients, and identified genes potentially cooperating with LAPTM4B in breast cancer progression. Our case study demonstrates that with Lir, an array of tools can be combined in the same data analysis to improve efficiency, reproducibility, and ease of understanding. Lir is an open-source software available at github.com/borisvassilev/lir. PMID:27711123
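
    The "tangle" step at the heart of literate programming is easy to sketch; this hedged toy uses made-up @code/@end markers rather than Lir's actual file format:

        # Extract ("tangle") and run the code chunks of a literate document.
        import re
        import textwrap

        literate_src = textwrap.dedent("""\
            Prose explaining the first step.

            @code
            counts = [4, 8, 15]
            @end

            Prose explaining the second step.

            @code
            print(sum(counts))
            @end
            """)

        chunks = re.findall(r"@code\n(.*?)@end", literate_src, flags=re.DOTALL)
        exec("\n".join(chunks))  # prints 27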

  17. Genetic Parallel Programming: design and implementation.

    PubMed

    Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong

    2006-01-01

    This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializes it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
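
    A stripped-down flavor of the idea (a toy, not the paper's MAP machine or GPP operators) can be written in a few dozen lines: evolve straight-line register-machine programs toward a target function by mutation and selection:

        # Toy linear genetic programming on a 4-register machine.
        import random

        OPS = {"add": lambda a, b: a + b,
               "sub": lambda a, b: a - b,
               "mul": lambda a, b: a * b}

        def run(prog, x):
            r = [float(x), 1.0, 0.0, 0.0]       # r0 = input, r1 = constant 1
            for op, dst, s1, s2 in prog:
                r[dst] = OPS[op](r[s1], r[s2])
            return r[0]                         # result convention: register 0

        def rand_instr():
            return (random.choice(list(OPS)), random.randrange(4),
                    random.randrange(4), random.randrange(4))

        def fitness(prog):                      # error against target x*x + x
            return sum(abs(run(prog, x) - (x * x + x)) for x in range(-5, 6))

        random.seed(0)
        pop = [[rand_instr() for _ in range(4)] for _ in range(200)]
        for gen in range(300):
            pop.sort(key=fitness)
            if fitness(pop[0]) == 0:
                break                           # exact program found
            survivors = pop[:50]
            pop = survivors + [
                [rand_instr() if random.random() < 0.3 else ins
                 for ins in random.choice(survivors)]
                for _ in range(150)]
        best = min(pop, key=fitness)
        print(gen, fitness(best), best)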

  18. Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul

    2002-07-29

    Both shared memory and distributed memory models have advantages and shortcomings. The shared-memory model is much easier to use, but it ignores data locality and placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic can have a negative impact on performance and scalability. Various techniques, such as restructuring code to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], though at the cost of compromising ease of use. Distributed memory models such as message passing or one-sided communication offer performance and scalability but compromise ease of use; in this context, the message-passing model is sometimes referred to as 'assembly programming for scientific computing'. The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer, through explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems and, by recognizing the communication overhead for remote data transfer, promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model, the capabilities of the toolkit, and its evolution.
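
    The get/put discipline at the core of the model can be mimicked in a toy (single-process, no MPI; nothing like the real GA API): a logically shared array split into owned blocks, with explicit transfers between global and local storage:

        # Toy global array with explicit get/put and queryable locality.
        import numpy as np

        class GlobalArray:
            def __init__(self, n, nblocks):
                self.blocks = np.array_split(np.zeros(n), nblocks)
                self.bounds = np.cumsum([0] + [len(b) for b in self.blocks])

            def owner(self, i):                 # which "process" holds element i
                return int(np.searchsorted(self.bounds, i, side="right") - 1)

            def get(self, lo, hi):              # global slice -> local copy
                return np.concatenate(self.blocks)[lo:hi].copy()

            def put(self, lo, data):            # local data -> global slice
                flat = np.concatenate(self.blocks)
                flat[lo:lo + len(data)] = data
                self.blocks = np.array_split(flat, len(self.blocks))

        ga = GlobalArray(16, nblocks=4)
        ga.put(6, np.arange(4.0))               # explicit transfer in
        print(ga.owner(6), ga.get(4, 12))       # explicit transfer out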

  19. A High School Level Course On Robot Design And Construction

    NASA Astrophysics Data System (ADS)

    Sadler, Paul M.; Crandall, Jack L.

    1984-02-01

    The Robotics Design and Construction Class at Sehome High School was developed to offer gifted and/or highly motivated students an in-depth introduction to a modern engineering topic. The course includes instruction in basic electronics, digital and radio electronics, construction skills, robotics literacy, construction of the HERO 1 Heathkit Robot, computer/robot programming, and voice synthesis. A key element which leads to the success of the course is the involvement of various community assets including manpower and financial assistance. The instructors included a physics/electronics teacher, a computer science teacher, two retired engineers, and an electronics technician.

  20. A new Fortran 90 program to compute regular and irregular associated Legendre functions (new version announcement)

    NASA Astrophysics Data System (ADS)

    Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus

    2018-04-01

    This is a revised and updated version of a modern Fortran 90 code to compute the regular Plm(x) and irregular Qlm(x) associated Legendre functions for all x ∈ (-1, +1) (on the cut) and |x| > 1 and integer degree (l) and order (m). The necessity to revise the code comes as a consequence of comments from Prof. James Bremer of the UC Davis Mathematics Department, who discovered errors in the code for large integer degree and order for the normalized regular Legendre functions on the cut.
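
    For modest degree and order, the same quantities can be cross-checked on the cut with SciPy (a hedged illustration; the Fortran code targets far larger l and m than SciPy handles):

        # P_l^m(x) and Q_l^m(x) on the cut -1 < x < 1 via SciPy.
        import scipy.special as sp

        l, m, x = 5, 2, 0.3
        P, _ = sp.lpmn(m, l, x)                 # tables up to order m, degree l
        Q, _ = sp.lqmn(m, l, x)
        print("P_5^2(0.3) =", P[m][l])
        print("Q_5^2(0.3) =", Q[m][l])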

  1. CHEETAH: A fast thermochemical code for detonation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, L.E.

    1993-11-01

    For more than 20 years, TIGER has been the benchmark thermochemical code in the energetic materials community. TIGER has been widely used because it gives good detonation parameters in a very short period of time. Despite its success, TIGER is beginning to show its age. The program's chemical equilibrium solver frequently crashes, especially when dealing with many chemical species. It often fails to find the C-J point. Finally, there are many inconveniences for the user stemming from the program's roots in pre-modern FORTRAN. These inconveniences often lead to mistakes in preparing input files and thus erroneous results. We are producing a modern version of TIGER, which combines the best features of the old program with new capabilities, better computational algorithms, and improved packaging. The new code, which will evolve out of TIGER in the next few years, will be called 'CHEETAH'. Many of the capabilities that will be put into CHEETAH are inspired by the thermochemical code CHEQ. The new capabilities of CHEETAH are: calculation of trace levels of chemical compounds for environmental analysis; a kinetics capability, with which CHEETAH will predict chemical compositions as a function of time given individual chemical reaction rates (initial application: carbon condensation); incorporation of partial reactions; and computer-optimized JCZ3 and BKW parameters. These parameters will be fit to over 20 years of data collected at LLNL; we will run CHEETAH thousands of times to determine the best possible parameter sets. CHEETAH will fit C-J data to JWLs, and also predict full-wall and half-wall cylinder velocities.

  2. Integrating computers in physics teaching: An Indian perspective

    NASA Astrophysics Data System (ADS)

    Jolly, Pratibha

    1997-03-01

    The University of Delhi has around twenty affiliated undergraduate colleges that offer a three-year physics major program to nearly five hundred students. All follow a common curriculum and submit to a centralized examination. This structure of tertiary education makes it relatively difficult to implement radical or rapid changes in the formal curriculum. The technology onslaught has, at last, irrevocably altered this; computers are carving new windows in old citadels and defining the agenda in teaching-learning environments the world over. In 1992, we formally introduced Computational Physics as a core paper in the second year of the Bachelor's program. As yet, the emphasis is on imparting familiarity with computers, a programming language and rudiments of numerical algorithms. In a parallel development, we also introduced a strong component of instrumentation with modern day electronic devices, including microprocessors. Many of us, however, would like to see not just computer presence in our curriculum but a totally new curriculum and teaching strategy that exploits, befittingly, the new technology. The current challenge is to realize in practice the full potential of the computer as the proverbial versatile tool: interfacing laboratory experiments for real-time acquisition and control of data; enabling rigorous analysis and data modeling; simulating micro-worlds and real life phenomena; establishing new cognitive linkages between theory and empirical observation; and between abstract constructs and visual representations.

  3. High-Productivity Computing in Computational Physics Education

    NASA Astrophysics Data System (ADS)

    Tel-Zur, Guy

    2011-03-01

    We describe the development of a new course in Computational Physics at Ben-Gurion University. This elective course for third-year undergraduates and M.Sc. students is taught during one semester. Computational Physics is by now well accepted as the third pillar of science. This paper's claim is that modern Computational Physics education should also address High-Productivity Computing. The traditional approach to teaching Computational Physics emphasizes 'Correctness' and then 'Accuracy'; we add 'Performance'. Along with topics in mathematical methods and case studies in physics, the course devotes a significant amount of time to 'Mini-Courses' on topics such as: High-Throughput Computing with Condor, parallel programming with MPI and OpenMP, how to build a Beowulf cluster, visualization, and Grid and Cloud Computing. The course intends to teach neither new physics nor new mathematics; it focuses on an integrated approach to solving problems, starting from the physics problem, through the corresponding mathematical solution and the numerical scheme, to writing an efficient computer code and finally analysis and visualization.

  4. The Development of Sociocultural Competence with the Help of Computer Technology

    ERIC Educational Resources Information Center

    Rakhimova, Alina E.; Yashina, Marianna E.; Mukhamadiarova, Albina F.; Sharipova, Astrid V.

    2017-01-01

    The article describes the process of developing sociocultural knowledge and competences using computer technologies. On the whole, the development of modern computer technologies allows teachers to broaden trainees' sociocultural outlook and trace their progress online. Observation of modern computer technologies and estimation…

  5. Right Size Determining the Staff Necessary to Sustain Simulation and Computing Capabilities for Nuclear Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikkel, Daniel J.; Meisner, Robert

    The Advanced Simulation and Computing Campaign, herein referred to as the ASC Program, is a core element of the science-based Stockpile Stewardship Program (SSP), which enables assessment, certification, and maintenance of the safety, security, and reliability of the U.S. nuclear stockpile without the need to resume nuclear testing. The use of advanced parallel computing has transitioned from proof-of-principle to become a critical element for assessing and certifying the stockpile. As the initiative phase of the ASC Program came to an end in the mid-2000s, the National Nuclear Security Administration redirected resources to other urgent priorities, and resulting staff reductions in ASC occurred without the benefit of analysis of the impact on modern stockpile stewardship that is dependent on these new simulation capabilities. Consequently, in mid-2008 the ASC Program management commissioned a study to estimate the essential size and balance needed to sustain advanced simulation as a core component of stockpile stewardship. The ASC Program requires a minimum base staff size of 930 (which includes the number of staff necessary to maintain critical technical disciplines as well as to execute required programmatic tasks) to sustain its essential ongoing role in stockpile stewardship.

  6. Database Design and Management in Engineering Optimization.

    DTIC Science & Technology

    1988-02-01

    ...method in the mid-1950s, along with modern digital computers, have made... scientific and engineering applications. The paper highlights the difference... application software can call standard subroutines from the DBMS library to define... is continuously redefined in an application program, DDL must have... type data usually encountered in engineering applications. GFDGT: Computes the number of digits needed to display... A user

  7. Column Subset Selection, Matrix Factorization, and Eigenvalue Optimization

    DTIC Science & Technology

    2008-07-01

    Pietsch and Grothendieck, which are regarded as basic instruments in modern functional analysis [Pis86]. • The methods for computing these... Pietsch factorization and the maxcut semidefinite program [GW95]. 1.2. Overview. We focus on the algorithmic version of the Kashin-Tzafriri theorem...will see that the desired subset is exposed by factoring the random submatrix. This factorization, which was invented by Pietsch, is regarded as a basic

  8. The science in social science

    PubMed Central

    Bernard, H. Russell

    2012-01-01

    A recent poll showed that most people think of science as technology and engineering—life-saving drugs, computers, space exploration, and so on. This was, in fact, the promise of the founders of modern science in the 17th century. It is less commonly understood that social and behavioral sciences have also produced technologies and engineering that dominate our everyday lives. These include polling, marketing, management, insurance, and public health programs. PMID:23213222

  9. FY06 NRL DoD High Performance Computing Modernization Program Annual Reports

    DTIC Science & Technology

    2007-10-31

    our simulations yield important new information on the amount and form of the energy that is released by these explosive events. These results...coupled with the ideal-gas equation of state and one-step Arrhenius kinetics of energy release. The equations are solved using the explicit...practical applications, including hydrogen safety and pulse-detonation engines (PDE). For example, the results summarizing the effect of obstacle

  10. Design and Implementation of a Modern Automatic Deformation Monitoring System

    NASA Astrophysics Data System (ADS)

    Engel, Philipp; Schweimler, Björn

    2016-03-01

    The deformation monitoring of structures and buildings is an important task in modern engineering surveying, ensuring the stability and reliability of monitored objects over long periods. Several commercial hardware and software solutions for realizing such monitoring measurements are available on the market. In addition to them, a research team at the University of Applied Sciences in Neubrandenburg (NUAS) is actively developing a software package for monitoring purposes in geodesy and geotechnics, which is distributed under an open source licence and free of charge. The task of managing an open source project is well known in computer science, but it is fairly new in a geodetic context. This paper contributes to that issue by detailing applications, frameworks, and interfaces for the design and implementation of open hardware and software solutions for sensor control, sensor networks, and data management in automatic deformation monitoring. It is discussed how the development effort of networked applications can be reduced by using free programming tools, cloud computing technologies, and rapid prototyping methods.

  11. The Research on Application of Information Technology in sports Stadiums

    NASA Astrophysics Data System (ADS)

    Can, Han; Lu, Ma; Gan, Luying

    As the national fitness program advances alongside China's Olympic ambitions, public interest in sport and concern for physical health continue to grow, and the country has launched a modern, technology-driven program of sports facility construction. Information technology is applied ever more widely in modern venues and facilities, encompassing not only office automation systems, intelligent building systems, communication systems for event management, ticketing and access control systems, contest information systems, television systems, and command and control systems, but also computer-based image analysis, computer-aided athlete training, sports training systems, and related data entry and decision support systems. Using the documentary data method, this paper focuses on the application of information technology in sports stadiums and tries to explore its future trends, with a view to promoting the growth of China's national economy, improving student fitness, and advancing Chinese sports.

  12. Accessible Earth: Enhancing diversity in the Geosciences through accessible course design and Experiential Learning Theory

    NASA Astrophysics Data System (ADS)

    Bennett, Rick; Lamb, Diedre

    2017-04-01

    The tradition of field-based instruction in the geoscience curriculum, which culminates in a capstone geological field camp, presents an insurmountable barrier to many disabled students who might otherwise choose to pursue geoscience careers. There is a widespread perception that success as a practicing geoscientist requires direct access to outcrops and vantage points available only to those able to traverse inaccessible terrain. Yet many modern geoscience activities are based on remotely sensed geophysical data, data analysis, and computation that take place entirely within the laboratory. To challenge the perception of geoscience as a career option only for the able-bodied, we have created the capstone Accessible Earth Study Abroad Program, an alternative to geologic field camp with a focus on modern geophysical observation systems, computational thinking, and data science. In this presentation, we will report on the theoretical bases for developing the course, our experiences in teaching the course to date, and our plan for ongoing assessment, refinement, and dissemination of the effectiveness of our efforts.

  13. Nuclear weapons modernization: Plans, programs, and issues for Congress

    NASA Astrophysics Data System (ADS)

    Woolf, Amy F.

    2017-11-01

    The United States is currently recapitalizing each delivery system in its "nuclear triad" and refurbishing many of the warheads carried by those systems. The plans for these modernization programs have raised a number of questions, both within Congress and among analysts in the nuclear weapons and arms control communities, about the costs associated with the programs and the need to recapitalize each leg of the triad at the same time. This paper covers four distinct issues. It begins with a brief review of the planned modernization programs, then addresses questions about why the United States is pursuing all of these modernization programs at this time. It then reviews the debate about how much these modernization programs are likely to cost in the next decade and considers possible changes that might reduce the cost. It concludes with some comments about congressional views on the modernization programs and prospects for continuing congressional support in the coming years.

  14. Mass Media Decision in China's Post-Mao Zedong Modernization Program: Some Unanticipated Consequences.

    ERIC Educational Resources Information Center

    Koo, Charles M.

    In 1978, China launched its "Four Modernizations" program, which included modernization in agriculture, industry, national defense, and science and technology. To promote this program and to mobilize the Chinese masses to take a more positive and active attitude toward modernization, the government called upon the forces of the mass…

  15. Personal computer security: part 1. Firewalls, antivirus software, and Internet security suites.

    PubMed

    Caruso, Ronald D

    2003-01-01

    Personal computer (PC) security in the era of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) involves two interrelated elements: safeguarding the basic computer system itself and protecting the information it contains and transmits, including personal files. HIPAA regulations have toughened the requirements for securing patient information, requiring every radiologist with such data to take further precautions. Security starts with physically securing the computer. Account passwords and a password-protected screen saver should also be set up. A modern antivirus program can easily be installed and configured. File scanning and updating of virus definitions are simple processes that can largely be automated and should be performed at least weekly. A software firewall is also essential for protection from outside intrusion, and an inexpensive hardware firewall can provide yet another layer of protection. An Internet security suite yields additional safety. Regular updating of the security features of installed programs is important. Obtaining a moderate degree of PC safety and security is somewhat inconvenient but is necessary and well worth the effort. Copyright RSNA, 2003

  16. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    NASA Astrophysics Data System (ADS)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.

    2017-02-01

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach and up to 50% improvement in total simulated time with the latter for the demonstration cases and target HPC systems employed.

  17. Estimating population diversity with CatchAll

    PubMed Central

    Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.

    2012-01-01

    Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246
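
    As a minimal illustration of the kind of frequency-count-based richness estimation that CatchAll automates (and not its actual mixture-model machinery), the classical Chao1 lower-bound estimator can be computed in a few lines of Python; the sample abundances below are invented:

        from collections import Counter

        def chao1(counts):
            """Classical Chao1 lower-bound estimate of species richness.

            counts -- iterable of per-species abundances from a sample.
            """
            counts = [c for c in counts if c > 0]
            s_obs = len(counts)                      # observed species
            freq = Counter(counts)                   # "frequency count" data
            f1, f2 = freq.get(1, 0), freq.get(2, 0)  # singletons, doubletons
            if f2 == 0:                              # bias-corrected variant
                return s_obs + f1 * (f1 - 1) / 2.0
            return s_obs + f1 * f1 / (2.0 * f2)

        # Example: 6 species observed, 3 singletons, 1 doubleton
        print(chao1([5, 3, 1, 1, 1, 2]))  # 6 + 9/2 = 10.5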

  18. OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping

    2017-02-01

    The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.

  19. Data file, Continental Margin Program, Atlantic Coast of the United States: vol. 2 sample collection and analytical data

    USGS Publications Warehouse

    Hathaway, John C.

    1971-01-01

    The purpose of the data file presented below is twofold: the first purpose is to make available in printed form the basic data relating to the samples collected as part of the joint U.S. Geological Survey - Woods Hole Oceanographic Institution program of study of the Atlantic continental margin of the United States; the second purpose is to maintain these data in a form that is easily retrievable by modern computer methods. With the data in such form, repeated manual transcription for statistical or similar mathematical treatment becomes unnecessary. Manual plotting of information or derivatives from the information may also be eliminated. Not only is handling of data by the computer considerably faster than manual techniques, but a fruitful source of errors, transcription mistakes, is eliminated.

  20. Geospace simulations using modern accelerator processor technology

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Raeder, J.; Larson, D. J.

    2009-12-01

    OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computing-intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the Sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 GFlops on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i-based computing cluster, and here we report initial performance results of OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: on the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with all of the implementation details just described, keeping the code much more easily maintainable. Our preliminary results indicate excellent performance: a speed-up of a factor of 30 compared to the unoptimized version.
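
    The multi-level decomposition just described (domain, then columns, then slices) can be sketched in plain Python/NumPy. This is an illustrative skeleton only: the per-slice update is an invented stand-in for the real MHD stencil, and no Cell-specific DMA or SIMD machinery is modeled.

        import numpy as np

        def process_slice(s):
            # Stand-in for the per-slice MHD stencil update an SPE would run
            return 0.5 * s

        def solve_subdomain(u, col_shape=(8, 8)):
            """Sketch: split a 3D subdomain into columns, process slice by slice.

            Mirrors the idea of columns sized to fit a small local store;
            the shapes and the update itself are illustrative assumptions.
            """
            nx, ny, nz = u.shape
            cx, cy = col_shape
            for i in range(0, nx, cx):
                for j in range(0, ny, cy):
                    column = u[i:i+cx, j:j+cy, :]   # one memory-limited column
                    for k in range(nz):             # stream through slices (DMA-like)
                        column[:, :, k] = process_slice(column[:, :, k])
            return u

        u = np.ones((16, 16, 32))
        solve_subdomain(u)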

  1. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    NASA Astrophysics Data System (ADS)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
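
    A minimal sketch of the invocation pattern described above, using only the Python standard library; the endpoint URL and the XML element names are hypothetical, not the actual SCEC/CME service definitions (which are specified by their WSDL documents):

        import urllib.request

        # Hypothetical endpoint and payload; a real service's interface
        # is defined by its WSDL document.
        ENDPOINT = "http://example.org/services/CoordinateConversion"

        body = """<?xml version="1.0"?>
        <convertRequest>
          <lat>34.05</lat>
          <lon>-118.25</lon>
          <target>UTM</target>
        </convertRequest>"""

        req = urllib.request.Request(
            ENDPOINT,
            data=body.encode("utf-8"),          # POSTing XML to the server
            headers={"Content-Type": "text/xml"},
        )
        # Illustrative only: example.org will not actually answer this request.
        with urllib.request.urlopen(req, timeout=30) as resp:
            print(resp.read().decode("utf-8"))  # XML response from the server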

  2. Perimetry update.

    PubMed

    Leydhecker, W

    1983-06-01

    The possible applications of computer-assisted static perimetry are examined after five years of personal controlled studies of different types of computerized perimeters. The common advantage of all computer-assisted perimeters is the elimination of the influence of the perimetrist on the results. However, some perimeters are fully computer-assisted and some are only partially so. Even after complete elimination of the perimetrist's influence, some physiological and psychological influences remain due to the patient and will cause fluctuations of the results. In perimeters, the density of the grid and the adaptive strategy of exact threshold measurement are important in obtaining reproducible results. A compromise between duration of the test and exactness has to be found. The most acceptable compromise seems to be an uneven distribution of stimuli, which form a denser grid in areas of special interest and a wider grid in areas less likely to be involved, combined with exact threshold measurements only in suspicious areas. Multiple stimulus presentation is not adequate. High sensitivity of screening is not a great advantage, unless combined with a high specificity. We have shown that using the same stimulus luminosity for center and periphery (a nonadaptive strategy) produces nonspecific results. Only an adaptive strategy can result in high sensitivity and specificity. Adaptive strategy means that the luminosity of the stimulus is adapted to the individual threshold curve of the visual field. In addition, the exact individual thresholds are bracketed by small up and down steps of variation in luminosity. In some cases, scanning programs with two levels of adaptation can be sufficient. The user of modern perimeters must understand such terms as: asb, dB, presentation time, and diameter of stimuli. Projection of stimuli is preferred to light emitting diodes or glass fiber optics. The programs (software) of the modern instruments are of the greatest importance, because the clinical experience that the perimetrist had to acquire in previous manual perimetry is incorporated in these programs. In the Octopus perimeter a delta program is available that differentiates patient fluctuations that may be insignificant from directed significant alterations of the field which might require alteration of therapy. The programs are listed for different computer-assisted perimeters, and their choice is described. The costs of the perimeters are also given. Many controlled clinical studies are quoted briefly where they are useful for understanding the discussion. A brief chapter deals with the reliability of the perimetric test.(ABSTRACT TRUNCATED AT 400 WORDS)
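
    The adaptive bracketing strategy described above can be illustrated with a toy staircase procedure; the deterministic simulated patient response and the particular step schedule are assumptions made for this sketch:

        def patient_sees(luminance, threshold):
            # Simulated deterministic response: stimulus seen at or above threshold
            return luminance >= threshold

        def bracket_threshold(threshold, start=200.0, step=80.0, min_step=5.0):
            """Adaptive up/down bracketing of an individual sensitivity threshold.

            A minimal staircase: step down when the stimulus is seen, up when
            it is not, halving the step at each reversal. Units are arbitrary.
            """
            level, last_seen = start, None
            while step >= min_step:
                seen = patient_sees(level, threshold)
                if last_seen is not None and seen != last_seen:
                    step /= 2.0                    # reversal: refine the bracket
                level += -step if seen else step   # dimmer if seen, brighter if not
                last_seen = seen
            return level

        print(bracket_threshold(threshold=130.0))  # converges near 130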

  3. Computational Chemistry Using Modern Electronic Structure Methods

    ERIC Educational Resources Information Center

    Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

    2007-01-01

    Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now easily be performed even for large molecules.

  4. Computer Synthesis Approaches of Hyperboloid Gear Drives with Linear Contact

    NASA Astrophysics Data System (ADS)

    Abadjiev, Valentin; Kawasaki, Haruhisa

    2014-09-01

    Computer-aided design has led to various types of software both for scientific research in the field of gearing theory and for adequate scientific support of gear-drive manufacture. Computer programs based on mathematical models resulting from this research are presented here. Modern gear transmissions require the construction of new mathematical approaches to their geometric, technological and strength analysis. The process of optimization, synthesis and design is based on adequate iteration procedures that find an optimal solution by varying definite parameters. The study is dedicated to the accepted methodology in the creation of software for the synthesis of a class of high-reduction hyperboloid gears - Spiroid and Helicon ones (Spiroid and Helicon are trademarks registered by the Illinois Tool Works, Chicago, Ill). The basic computer products developed belong to software based on original mathematical models. They rest on two mathematical models for the synthesis: "upon a pitch contact point" and "upon a mesh region". Computer programs are worked out on the basis of the described mathematical models, and the relations between them are shown. The application of these approaches to the synthesis of the gear drives discussed is illustrated.

  5. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    NASA Astrophysics Data System (ADS)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities for the use of GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is measured. Performance measurements show that the numerical schemes developed achieve a 20-50x speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.

  6. Computer programing for geosciences: Teach your students how to make tools

    NASA Astrophysics Data System (ADS)

    Grapenthin, Ronni

    2011-12-01

    When I announced my intention to pursue a Ph.D. in geophysics, some people gave me confused looks, because I was working on a master's degree in computer science at the time. My friends, like many incoming geoscience graduate students, have trouble linking these two fields. From my perspective, it is pretty straightforward: Much of geoscience revolves around novel analyses of large data sets that require custom tools—computer programs—to minimize the drudgery of manual data handling; other disciplines share this characteristic. While most faculty adapted to the need for tool development quite naturally, as they grew up around computer terminal interfaces, incoming graduate students lack an intuitive understanding of programming concepts such as generalization and automation. I believe the major cause is the intuitive graphical user interfaces of modern operating systems and applications, which isolate the user from all technical details. Generally, current curricula do not recognize this gap between user and machine. For students to operate effectively, they require specialized courses teaching them the skills they need to make tools that operate on particular data sets and solve their specific problems. Courses in computer science departments are aimed at a different audience and are of limited help.

  7. Distributed computing for macromolecular crystallography

    PubMed Central

    Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Ballard, Charles

    2018-01-01

    Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community. PMID:29533240

  8. APA Summit on Medical Student Education Task Force on Informatics and Technology: learning about computers and applying computer technology to education and practice.

    PubMed

    Hilty, Donald M; Hales, Deborah J; Briscoe, Greg; Benjamin, Sheldon; Boland, Robert J; Luo, John S; Chan, Carlyle H; Kennedy, Robert S; Karlinsky, Harry; Gordon, Daniel B; Yager, Joel; Yellowlees, Peter M

    2006-01-01

    This article provides a brief overview of important issues for educators regarding medical education and technology. The literature describes key concepts, prototypical technology tools, and model programs. A work group of psychiatric educators was convened three times by phone conference to discuss the literature. Findings were presented to and input was received from the 2005 Summit on Medical Student Education by APA and the American Directors of Medical Student Education in Psychiatry. Knowledge of, skills in, and attitudes toward medical informatics are important to life-long learning and modern medical practice. A needs assessment is a starting place, since student, faculty, institution, and societal factors bear consideration. Technology needs to "fit" into a curriculum in order to facilitate learning and teaching. Learning about computers and applying computer technology to education and clinical care are key steps in computer literacy for physicians.

  9. Distributed computing for macromolecular crystallography.

    PubMed

    Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Winn, Martyn; Ballard, Charles

    2018-02-01

    Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community.

  10. Computer-assisted learning in critical care: from ENIAC to HAL.

    PubMed

    Tegtmeyer, K; Ibsen, L; Goldstein, B

    2001-08-01

    Computers are commonly used to serve many functions in today's modern intensive care unit. One of the most intriguing and perhaps most challenging applications of computers has been the attempt to improve medical education. With the introduction of the first computer, medical educators began looking for ways to incorporate its use into the modern curriculum. The cost and complexity that once limited computers have decreased consistently since their introduction, making it increasingly feasible to incorporate computers into medical education. Simultaneously, the capabilities and capacities of computers have increased. Combining the computer with other modern digital technology has allowed the development of more intricate and realistic educational tools. The purpose of this article is to briefly describe the history and use of computers in medical education, with special reference to critical care medicine. In addition, we examine the role of computers in teaching and learning and discuss the types of interaction between the computer user and the computer.

  11. Ship Structure Committee Long-Range Research Plan - Guidelines for Program Development.

    DTIC Science & Technology

    1982-01-01

    many scientists and engineers who contributed their time and expertise. We are indebted especially to Mr. J. J. Hopkinson, Dr. J. G. Giannotti, Mr...projection is better than assuming extension of the status quo. There is a long lead time in the use of new knowledge. Scientific research maturation...They even promise to remove traditional constraints previously too intractable to be labelled problems. The modern computer is an outstanding example

  12. DoD High Performance Computing Modernization Program FY16 Annual Report

    DTIC Science & Technology

    2018-05-02

    vortex shedding from rotor blade tips using adaptive mesh refinement gives Helios the unique capability to assess the interaction of these vortices...with the fuselage and nearby rotor blades . Helios provides all the benefits for rotary-winged aircraft that Kestrel does for fixed-wing aircraft...rotor blade upgrade of the CH-47F Chinook helicopter to achieve up to an estimated 2,000 pounds increase in hover thrust (~10%) with limited

  13. 2009 High Performance Computing Modernization Program Users Group Conference

    DTIC Science & Technology

    2009-06-17

    Asymmetric Threats Future Peer GWoT / ungoverned areas Irregular Warfare Low-end Asymmetric 1-4-2-1 (State-to-State War) Disruptive technologies Superiority...2008 “As changes in this century’s threat environment create strategic challenges – irregular warfare, weapons of mass destruction, disruptive ... technologies – this request places greater emphasis on basic research, which in recent years has not kept pace with other parts of the budget.” • Personnel

  14. Neuropeptide Signaling Networks and Brain Circuit Plasticity.

    PubMed

    McClard, Cynthia K; Arenkiel, Benjamin R

    2018-01-01

    The brain is a remarkable network of circuits dedicated to sensory integration, perception, and response. The computational power of the brain is estimated to dwarf that of most modern supercomputers, but perhaps its most fascinating capability is to structurally refine itself in response to experience. In the language of computers, the brain is loaded with programs that encode when and how to alter its own hardware. This programmed "plasticity" is a critical mechanism by which the brain shapes behavior to adapt to changing environments. The expansive array of molecular commands that help execute this programming is beginning to emerge. Notably, several neuropeptide transmitters, previously best characterized for their roles in hypothalamic endocrine regulation, have increasingly been recognized for mediating activity-dependent refinement of local brain circuits. Here, we discuss recent discoveries that reveal how local signaling by corticotropin-releasing hormone reshapes mouse olfactory bulb circuits in response to activity and further explore how other local neuropeptide networks may function toward similar ends.

  15. Trends in Programming Languages for Neuroscience Simulations

    PubMed Central

    Davison, Andrew P.; Hines, Michael L.; Muller, Eilif

    2009-01-01

    Neuroscience simulators allow scientists to express models in terms of biological concepts, without having to concern themselves with low-level computational details of their implementation. The expressiveness, power and ease-of-use of the simulator interface is critical in efficiently and accurately translating ideas into a working simulation. We review long-term trends in the development of programmable simulator interfaces, and examine the benefits of moving from proprietary, domain-specific languages to modern dynamic general-purpose languages, in particular Python, which provide neuroscientists with an interactive and expressive simulation development environment and easy access to state-of-the-art general-purpose tools for scientific computing. PMID:20198154
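
    As an illustration of why a dynamic general-purpose language is attractive for simulation scripting, a leaky integrate-and-fire neuron takes only a few lines of Python/NumPy. This is a generic sketch, not the interface of any particular simulator, and the parameter values are conventional textbook choices:

        import numpy as np

        def simulate_lif(i_ext, dt=0.1, tau=10.0, v_rest=-65.0,
                         v_th=-50.0, v_reset=-70.0):
            """Leaky integrate-and-fire membrane trace; ms and mV units."""
            v = np.full(i_ext.shape, v_rest)
            spikes = []
            for t in range(1, len(i_ext)):
                dv = (-(v[t-1] - v_rest) + i_ext[t-1]) * dt / tau
                v[t] = v[t-1] + dv
                if v[t] >= v_th:          # threshold crossing: emit a spike
                    spikes.append(t * dt)
                    v[t] = v_reset        # reset the membrane potential
            return v, spikes

        current = np.zeros(2000)
        current[500:1500] = 20.0          # step input, mV-equivalent drive
        trace, spike_times = simulate_lif(current)
        print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms")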

  16. TORC3: Token-ring clearing heuristic for currency circulation

    NASA Astrophysics Data System (ADS)

    Humes, Carlos, Jr.; Lauretto, Marcelo S.; Nakano, Fábio; Pereira, Carlos A. B.; Rafare, Guilherme F. G.; Stern, Julio Michael

    2012-10-01

    Clearing algorithms are at the core of modern payment systems, facilitating the settling of multilateral credit messages with (near) minimum transfers of currency. Traditional clearing procedures use batch processing based on MILP - mixed-integer linear programming algorithms. The MILP approach demands intensive computational resources; moreover, it is also vulnerable to operational risks generated by possible defaults during the inter-batch period. This paper presents TORC3 - the Token-Ring Clearing Algorithm for Currency Circulation. In contrast to the MILP approach, TORC3 is a real time heuristic procedure, demanding modest computational resources, and able to completely shield the clearing operation against the participating agents' risk of default.
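
    The contrast with batch MILP clearing can be illustrated with a toy ring-settlement loop. This sketch is only inspired by the token-ring idea and is not the published TORC3 procedure; the balances and obligations below are invented:

        def token_ring_clear(balances, obligations, max_laps=10):
            """Toy token-ring clearing: settle debts as the token circulates.

            balances    -- dict agent -> available cash
            obligations -- list of [debtor, creditor, amount] still to settle
            """
            agents = list(balances)
            for _ in range(max_laps):
                settled_this_lap = False
                for holder in agents:            # token visits agents in ring order
                    for ob in obligations:
                        debtor, creditor, amount = ob
                        if debtor == holder and balances[debtor] >= amount > 0:
                            balances[debtor] -= amount   # pay from current balance
                            balances[creditor] += amount
                            ob[2] = 0
                            settled_this_lap = True
                obligations = [ob for ob in obligations if ob[2] > 0]
                if not obligations or not settled_this_lap:
                    break
            return balances, obligations

        bal, unsettled = token_ring_clear(
            {"A": 50, "B": 0, "C": 0},
            [["A", "B", 40], ["B", "C", 30], ["C", "A", 20]],
        )
        print(bal, unsettled)   # the same 50 of cash clears 90 of debt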

  17. Computational Physics in a Nutshell

    NASA Astrophysics Data System (ADS)

    Schillaci, Michael

    2001-11-01

    Too often students of science are expected to ``pick-up'' what they need to know about the Art of Science. A description of the two-semester Computational Physics course being taught by the author offers a remedy to this situation. The course teaches students the three pillars of modern scientific research: Problem Solving, Programming, and Presentation. Using FORTRAN, LaTeXe, MAPLE V, HTML, and JAVA, students learn the fundamentals of algorithm development, how to implement classes and packages written by others, how to produce publication quality graphics and documents and how to publish them on the world-wide-web. The course content is outlined and project examples are offered.

  18. Trends in programming languages for neuroscience simulations.

    PubMed

    Davison, Andrew P; Hines, Michael L; Muller, Eilif

    2009-01-01

    Neuroscience simulators allow scientists to express models in terms of biological concepts, without having to concern themselves with low-level computational details of their implementation. The expressiveness, power and ease-of-use of the simulator interface is critical in efficiently and accurately translating ideas into a working simulation. We review long-term trends in the development of programmable simulator interfaces, and examine the benefits of moving from proprietary, domain-specific languages to modern dynamic general-purpose languages, in particular Python, which provide neuroscientists with an interactive and expressive simulation development environment and easy access to state-of-the-art general-purpose tools for scientific computing.

  19. A versatile system for the rapid collection, handling and graphics analysis of multidimensional data

    NASA Astrophysics Data System (ADS)

    O'Brien, P. M.; Moloney, G.; O'Connor, A.; Legge, G. J. F.

    1993-05-01

    The aim of this work was to provide a versatile system for handling multiparameter data that may arise from a variety of experiments — nuclear, AMS, microprobe elemental analysis, 3D microtomography etc. Some of the most demanding requirements arise in the application of microprobes to quantitative elemental mapping and to microtomography. A system to handle data from such experiments had been under continuous development and use at MARC for the past 15 years. It has now been made adaptable to the needs of multiparameter (or single parameter) experiments in general. The original system has been rewritten, greatly expanded and made much more powerful and faster, by use of modern computer technology — a VME bus computer with a real time operating system and a RISC workstation running Unix and the X Window system. This provides the necessary (i) power, speed and versatility, (ii) expansion and updating capabilities (iii) standardisation and adaptability, (iv) coherent modular programming structures, (v) ability to interface to other programs and (vi) transparent operation with several levels, involving the use of menus, programmed function keys and powerful macro programming facilities.

  20. Computational Materials Program for Alloy Design

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo

    2005-01-01

    The research program sponsored by this grant, "Computational Materials Program for Alloy Design", covers a period of time of enormous change in the emerging field of computational materials science. The computational materials program started with the development of the BFS method for alloys, a quantum approximate method for atomistic analysis of alloys specifically tailored to effectively deal with the current challenges in the area of atomistic modeling and to support modern experimental programs. During the grant period, the program benefited from steady growth which, as detailed below, far exceeds its original set of goals and objectives. Not surprisingly, by the end of this grant, the methodology and the computational materials program became an established force in the materials community, with substantial impact in several areas. Major achievements during the duration of the grant include the completion of a Level 1 Milestone for the HITEMP program at NASA Glenn, consisting of the planning, development and organization of an international conference held at the Ohio Aerospace Institute in August of 2002, finalizing a period of rapid insertion of the methodology in the research community worldwide. The conference, attended by citizens of 17 countries representing various fields of the research community, resulted in a special issue of the leading journal in the area of applied surface science. Another element of the Level 1 Milestone was the presentation of the first version of the Alloy Design Workbench software package, currently known as "adwTools". This software package constitutes the first PC-based piece of software for atomistic simulations for both solid alloys and surfaces in the market. Dissemination of results and insertion in the materials community worldwide was a primary focus during this period. As a result, the P.I. was responsible for presenting 37 contributed talks, 19 invited talks, and publishing 71 articles in peer-reviewed journals, as detailed later in this Report.

  1. nMoldyn: a program package for a neutron scattering oriented analysis of molecular dynamics simulations.

    PubMed

    Róg, T; Murzyn, K; Hinsen, K; Kneller, G R

    2003-04-15

    We present a new implementation of the program nMoldyn, which has been developed for the computation and decomposition of neutron scattering intensities from Molecular Dynamics trajectories (Comp. Phys. Commun 1995, 91, 191-214). The new implementation extends the functionality of the original version, provides a much more convenient user interface (both graphical/interactive and batch), and can be used as a tool set for implementing new analysis modules. This was made possible by the use of a high-level language, Python, and of modern object-oriented programming techniques. The quantities that can be calculated by nMoldyn are the mean-square displacement, the velocity autocorrelation function as well as its Fourier transform (the density of states) and its memory function, the angular velocity autocorrelation function and its Fourier transform, the reorientational correlation function, and several functions specific to neutron scattering: the coherent and incoherent intermediate scattering functions with their Fourier transforms, the memory function of the coherent scattering function, and the elastic incoherent structure factor. The ability to compute memory functions is a new and powerful feature that makes it possible to relate simulation results to theoretical studies. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 657-667, 2003
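
    One of the simplest quantities listed above, the mean-square displacement averaged over atoms and time origins, can be sketched directly in Python/NumPy. Production codes such as nMoldyn use faster FFT-based evaluation, so this direct version is illustrative only, and the random-walk trajectory is invented test data:

        import numpy as np

        def mean_square_displacement(traj):
            """MSD(t) averaged over atoms and time origins.

            traj -- array of shape (n_frames, n_atoms, 3), MD positions.
            Direct O(n^2) evaluation for clarity.
            """
            n_frames = traj.shape[0]
            msd = np.zeros(n_frames)
            for lag in range(1, n_frames):
                disp = traj[lag:] - traj[:-lag]        # all origins at this lag
                msd[lag] = np.mean(np.sum(disp**2, axis=-1))
            return msd

        rng = np.random.default_rng(0)
        steps = rng.normal(size=(500, 10, 3))          # toy random walk, 10 atoms
        trajectory = np.cumsum(steps, axis=0)
        msd = mean_square_displacement(trajectory)
        print(msd[1], msd[100])   # grows roughly linearly with lag for diffusion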

  2. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE PAGES

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...

    2017-03-20

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, whichmore » is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, ‘bottom-up’ and ‘top-down’, are illustrated. Here, preliminary results show that up to an 8.5x improvement at the selected kernel level was achieved with the first approach, and up to a 50% improvement in total simulated time with the second, for the demonstration cases and target HPC systems employed.« less

  3. Modernization and optimization of a legacy open-source CFD code for high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha

    Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, whichmore » is an open-source computational fluid dynamics code that has evolved over several decades, widely used in multiphase flows and still being developed by the National Energy Technology Laboratory. Two different modernization approaches, ‘bottom-up’ and ‘top-down’, are illustrated. Here, preliminary results show that up to an 8.5x improvement at the selected kernel level was achieved with the first approach, and up to a 50% improvement in total simulated time with the second, for the demonstration cases and target HPC systems employed.« less

  4. High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.

    2004-01-01

    This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.

  5. Parallel Computational Protein Design.

    PubMed

    Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang

    2017-01-01

    Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees to find the global minimum energy conformation (GMEC) is to combine both dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computational bottleneck of large-scale computational protein design. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013) to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide speedups of up to four orders of magnitude in large protein design cases, with a small memory overhead compared to the traditional A* search implementation, while still guaranteeing optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the globally optimal solution could not previously be computed. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.
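
    For readers unfamiliar with the search component, generic A* with a priority queue looks like this in Python; the grid shortest-path problem is a stand-in for illustration, not OSPREY's GPU implementation or its conformation-tree formulation:

        import heapq

        def a_star(start, goal, neighbors, heuristic):
            """Generic A*: always expands the node with the lowest g + h."""
            open_heap = [(heuristic(start, goal), 0, start)]
            best_g = {start: 0}
            while open_heap:
                f, g, node = heapq.heappop(open_heap)
                if node == goal:
                    return g
                if g > best_g.get(node, float("inf")):
                    continue                      # stale queue entry, skip
                for nxt, cost in neighbors(node):
                    ng = g + cost
                    if ng < best_g.get(nxt, float("inf")):
                        best_g[nxt] = ng
                        heapq.heappush(
                            open_heap, (ng + heuristic(nxt, goal), ng, nxt))
            return None

        # Stand-in problem: 4-connected 20x20 grid with unit step costs
        def grid_neighbors(p):
            x, y = p
            return [((x+dx, y+dy), 1)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= x+dx < 20 and 0 <= y+dy < 20]

        manhattan = lambda a, b: abs(a[0]-b[0]) + abs(a[1]-b[1])
        print(a_star((0, 0), (19, 7), grid_neighbors, manhattan))  # 26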

  6. QMachine: commodity supercomputing in web browsers.

    PubMed

    Wilkinson, Sean R; Almeida, Jonas S

    2014-06-09

    Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics' "Big Data" from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running "download and install" software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments.

  7. Manipulation and handling processes off-line programming and optimization with use of K-Roset

    NASA Astrophysics Data System (ADS)

    Gołda, G.; Kampa, A.

    2017-08-01

    Contemporary trends in the development of efficient, flexible manufacturing systems require practical implementation of modern “Lean production” concepts, maximizing customer value by minimizing waste in manufacturing and logistics processes. Every FMS is built from automated and robotized production cells. In addition to flexible CNC machine tools and other equipment, industrial robots are primary elements of the system. In these studies, the authors look for waste of time and cost in real robot tasks during manipulation processes. For the optimization of handling and manipulation processes performed by robots, the application of modern off-line programming methods and computer simulation is the best solution, and the only way to minimize unnecessary movements and other instructions. The modelling of a robotized production cell and the off-line programming of Kawasaki robots in AS-Language are described. The simulation of the robotized workstation is realized with the virtual reality software K-Roset. The authors show how an industrial robot's programs are improved and optimized by minimizing the number of useless manipulator movements and unnecessary instructions, in order to shorten production-cycle times. This also reduces the costs of handling, manipulation and the technological process.

  8. Ultrasound Dopplerography of abdomen pathology using statistical computer programs

    NASA Astrophysics Data System (ADS)

    Dmitrieva, Irina V.; Arakelian, Sergei M.; Wapota, Alberto R. W.

    1998-04-01

    Modern ultrasound Dopplerography offers great possibilities for the investigation of haemodynamic changes at all stages of abdominal pathology. Much research has been devoted to the use of noninvasive methods in practical medicine, and ultrasound Dopplerography is now among the most fundamental. We investigated 250 patients aged 30 to 77 years, including 149 men and 101 women. The basic diagnosis in all patients was ischaemic pancreatitis; secondary diagnoses included ischaemic heart disease, hypertension, atherosclerosis, diabetes and vascular disease of the extremities. We examined the abdominal aorta and its branches: the arteria mesenterica superior (AMS), truncus coeliacus (TC), arteria hepatica communis (AHC) and arteria lienalis (AL). The following equipment was used: ACUSON 128 XP/10c, BIOMEDIC, GENERAL ELECTRIC (USA, Japan). We analyzed the following components of haemodynamic change in the abdominal vessels: pulsatility index, resistance index, systolic-diastolic ratio and blood flow velocity. The statistical programs included 'basic statistics' and an 'analytic program'. In conclusion, we determined that all haemodynamic components of the abdominal vessels changed considerably more in abdominal ischaemia than in the normal situation. Using a computer program to quantify the degree of haemodynamic change, we can recommend an individual plan of diagnosis and treatment.
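
    The indices analyzed above have standard definitions from the spectral waveform: PI = (PSV - EDV)/TAMV, RI = (PSV - EDV)/PSV and S/D = PSV/EDV, where PSV, EDV and TAMV are the peak systolic, end-diastolic and time-averaged mean velocities. A small Python sketch, with invented example velocities:

        def doppler_indices(psv, edv, tamv):
            """Standard spectral Doppler indices from peak-systolic (PSV),
            end-diastolic (EDV) and time-averaged mean (TAMV) velocities, cm/s."""
            return {
                "PI": (psv - edv) / tamv,    # Gosling pulsatility index
                "RI": (psv - edv) / psv,     # Pourcelot resistance index
                "S/D": psv / edv,            # systolic-diastolic ratio
            }

        # Illustrative values for a high-resistance mesenteric waveform
        print(doppler_indices(psv=140.0, edv=20.0, tamv=45.0))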

  9. Effective approach to spectroscopy and spectral analysis techniques using Matlab

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Lv, Yong

    2017-08-01

    With the development of electronic information, computers and networks, modern educational technology has entered a new era, which has had a great impact on the teaching process. Spectroscopy and spectral analysis is an elective course for Optoelectronic Information Science and Engineering. The teaching objective of this course is to master the basic concepts and principles of spectroscopy and the basic technical means of spectral analysis and testing, and then to let students apply the principles and technology of spectroscopy to study the structure and state of materials and the development of the technology. MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and fourth-generation programming language. Developed by MathWorks, MATLAB allows matrix manipulations and the plotting of functions and data. Based on teaching practice, this paper summarizes the new situation of applying Matlab to the teaching of spectroscopy, an approach suitable for most current multimedia-assisted teaching.

  10. Computer assisted screening, correction, and analysis of historical weather measurements

    NASA Astrophysics Data System (ADS)

    Burnette, Dorian J.; Stahle, David W.

    2013-04-01

    A computer program, Historical Observation Tools (HOB Tools), has been developed to facilitate many of the calculations used by historical climatologists to develop instrumental and documentary temperature and precipitation datasets and makes them readily accessible to other researchers. The primitive methodology used by the early weather observers makes the application of standard techniques difficult. HOB Tools provides a step-by-step framework to visually and statistically assess, adjust, and reconstruct historical temperature and precipitation datasets. These routines include the ability to check for undocumented discontinuities, adjust temperature data for poor thermometer exposures and diurnal averaging, and assess and adjust daily precipitation data for undercount. This paper provides an overview of the Visual Basic.NET program and a demonstration of how it can assist in the development of extended temperature and precipitation datasets using modern and early instrumental measurements from the United States.
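
    A minimal sketch of one step in such a framework, screening a series for an undocumented discontinuity by standardizing the shift in mean at each candidate breakpoint; this simplified shift test and the synthetic station-move example are assumptions for illustration, not HOB Tools' actual routine:

        import numpy as np

        def screen_discontinuity(series):
            """Return the candidate breakpoint with the largest standardized
            shift in mean -- a simple screen for undocumented discontinuities."""
            x = np.asarray(series, dtype=float)
            n = len(x)
            best_k, best_t = None, 0.0
            for k in range(5, n - 5):          # require a few values on each side
                a, b = x[:k], x[k:]
                pooled = np.sqrt(a.var(ddof=1)/len(a) + b.var(ddof=1)/len(b))
                t = abs(a.mean() - b.mean()) / pooled
                if t > best_t:
                    best_k, best_t = k, t
            return best_k, best_t

        rng = np.random.default_rng(1)
        temps = np.concatenate([rng.normal(14.0, 1.0, 60),   # simulated station
                                rng.normal(15.8, 1.0, 60)])  # move at index 60
        print(screen_discontinuity(temps))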

  11. Standardized development of computer software. Part 1: Methods

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

  12. Advanced training systems

    NASA Technical Reports Server (NTRS)

    Savely, Robert T.; Loftin, R. Bowen

    1990-01-01

    Training is a major endeavor in all modern societies. Common training methods include training manuals, formal classes, procedural computer programs, simulations, and on-the-job training. NASA's training approach has focussed primarily on on-the-job training in a simulation environment for both crew and ground based personnel. NASA must explore new approaches to training for the 1990's and beyond. Specific autonomous training systems are described which are based on artificial intelligence technology for use by NASA astronauts, flight controllers, and ground based support personnel that show an alternative to current training systems. In addition to these specific systems, the evolution of a general architecture for autonomous intelligent training systems that integrates many of the features of traditional training programs with artificial intelligence techniques is presented. These Intelligent Computer Aided Training (ICAT) systems would provide much of the same experience that could be gained from the best on-the-job training.

  13. NASA's program on icing research and technology

    NASA Technical Reports Server (NTRS)

    Reinmann, John J.; Shaw, Robert J.; Ranaudo, Richard J.

    1989-01-01

    NASA's program in aircraft icing research and technology is reviewed. The program relies heavily on computer codes and modern applied physics technology in seeking icing solutions on a finer scale than those offered in earlier programs. Three major goals of this program are to offer new approaches to ice protection, to improve our ability to model the response of an aircraft to an icing encounter, and to provide improved techniques and facilities for ground and flight testing. This paper reviews the following program elements: (1) new approaches to ice protection; (2) numerical codes for deicer analysis; (3) measurement and prediction of ice accretion and its effect on aircraft and aircraft components; (4) special wind tunnel test techniques for rotorcraft icing; (5) improvements of icing wind tunnels and research aircraft; (6) ground de-icing fluids used in winter operation; (7) fundamental studies in icing; and (8) droplet sizing instruments for icing clouds.

  14. Hydrological Monitoring System Design and Implementation Based on IOT

    NASA Astrophysics Data System (ADS)

    Han, Kun; Zhang, Dacheng; Bo, Jingyi; Zhang, Zhiguang

    In this article, an embedded system development platform based on GSM communication is proposed. Through its application in hydrology monitoring management, the authors discuss communication reliability and lightning protection, suggest detailed solutions, and analyze the design and implementation of the host-computer software. Finally, the communication program is given. A hydrology monitoring system built on a wireless communication network is a typical practical application of embedded systems, bringing intelligence, modernization, high efficiency and networking to hydrology monitoring management.

  15. Effects of rotation on coolant passage heat transfer. Volume 1: Coolant passages with smooth walls

    NASA Technical Reports Server (NTRS)

    Hajek, T. J.; Wagner, J. H.; Johnson, B. V.; Higgins, A. W.; Steuber, G. D.

    1991-01-01

    An experimental program was conducted to investigate heat transfer and pressure loss characteristics of rotating multipass passages, for configurations and dimensions typical of modern turbine blades. The immediate objective was the generation of a data base of heat transfer and pressure loss data required to develop heat transfer correlations and to assess computational fluid dynamic techniques for rotating coolant passages. Experiments were conducted in a smooth wall large scale heat transfer model.

  16. High Performance Computing Modernization Program Kerberos Throughput Test Report

    DTIC Science & Technology

    2017-10-26

    functionality as Kerberos plugins. The pre -release production kit was used in these tests to compare against the current release kit. YubiKey support...HPCMP Kerberos Throughput Test Report 3 2. THROUGHPUT TESTING 2.1 Testing Components Throughput testing was done to determine the benefits of the pre ...both the current release kit and the pre -release production kit for a total of 378 individual tests in order to note any improvements. Based on work

  17. Cryogenic mirror analysis

    NASA Technical Reports Server (NTRS)

    Nagy, S.

    1988-01-01

    Due to the extraordinary distances scanned by modern telescopes, optical surfaces in such telescopes must be manufactured to exacting standards of perfection, within a few thousandths of a centimeter. The detection of imperfections of less than 1/20 of a wavelength of light was undertaken for application to the building of the mirror for the Space Infrared Telescope Facility. Because the mirror must be kept very cold while in space, another factor comes into effect: cryogenics. The process used to test a specific mirror under cryogenic conditions is described, including the follow-up analysis accomplished by computer. To better illustrate the process and analysis, a Pyrex Hex-Core mirror is followed through the process, from laser interferometry in the lab to computer analysis via a program called FRINGE, and this FRINGE analysis is detailed.

  18. Mendel-GPU: haplotyping and genotype imputation on graphics processing units

    PubMed Central

    Chen, Gary K.; Wang, Kai; Stram, Alex H.; Sobel, Eric M.; Lange, Kenneth

    2012-01-01

    Motivation: In modern sequencing studies, one can improve the confidence of genotype calls by phasing haplotypes using information from an external reference panel of fully typed unrelated individuals. However, the computational demands are so high that they prohibit researchers with limited computational resources from haplotyping large-scale sequence data. Results: Our graphics processing unit based software delivers haplotyping and imputation accuracies comparable to competing programs at a fraction of the computational cost and peak memory demand. Availability: Mendel-GPU, our OpenCL software, runs on Linux platforms and is portable across AMD and nVidia GPUs. Users can download both code and documentation at http://code.google.com/p/mendel-gpu/. Contact: gary.k.chen@usc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22954633

  19. 24 CFR 901.15 - Indicator #2, modernization.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Housing and Urban Development: Public Housing Management Assessment Program, § 901.15 Indicator #2, modernization. This indicator is automatically excluded if a PHA does not have a modernization program. This indicator examines the...

  20. The environment power system analysis tool development program

    NASA Technical Reports Server (NTRS)

    Jongeward, Gary A.; Kuharski, Robert A.; Kennedy, Eric M.; Stevens, N. John; Putnam, Rand M.; Roche, James C.; Wilcox, Katherine G.

    1990-01-01

    The Environment Power System Analysis Tool (EPSAT) is being developed to provide space power system design engineers with an analysis tool for determining the performance of power systems in both naturally occurring and self-induced environments. The program is producing an easy-to-use computer-aided engineering (CAE) tool general enough to provide a vehicle for technology transfer from space scientists and engineers to power system design engineers. The results of the project after two years of a three-year development program are given. The EPSAT approach separates the CAE tool into three distinct functional units: a modern user interface to present information, a data dictionary interpreter to coordinate analysis, and a database for storing system designs and results of analysis.

  1. Thermal Destruction Of CB Contaminants Bound On Building ...

    EPA Pesticide Factsheets

    Symposium Paper. An experimental and theoretical program has been initiated by the U.S. EPA to investigate issues of chemical/biological agent destruction in incineration systems when the agent in question is bound on common porous building interior materials. This program includes 3-dimensional computational fluid dynamics modeling with matrix-bound agent destruction kinetics, bench-scale experiments to determine agent destruction kinetics while bound on various matrices, and pilot-scale experiments to scale up the bench-scale experiments to a more practical scale. Finally, model predictions are made of agent destruction and combustion conditions in two full-scale incineration systems that are typical of modern combustor design.

  2. GPU COMPUTING FOR PARTICLE TRACKING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, Hiroshi; Song, Kai; Muriki, Krishna

    2011-03-25

    This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges arising from introducing the GPU are also discussed. General Purpose computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU due to its embarrassingly parallel nature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MPs) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
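
    A minimal CUDA C++ sketch of the thread-per-particle mapping described above follows. The kernel name, data layout, and the one-turn map are invented placeholders, not TracyGPU code; the point is only the index arithmetic that assigns one particle to each thread in the grid.

      // Hypothetical sketch: one thread advances one particle through the
      // ring for a fixed number of turns. Not the actual TracyGPU code.
      __global__ void track(float2* pos, float2* mom, int n, int turns) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread id
          if (i >= n) return;                             // guard last block
          float2 x = pos[i], p = mom[i];
          for (int t = 0; t < turns; ++t) {
              x.x += p.x;                                 // stand-in for the
              x.y += p.y;                                 // real one-turn map
          }
          pos[i] = x;
          mom[i] = p;
      }

      // Launch with 256-thread blocks covering all n particles:
      // track<<<(n + 255) / 256, 256>>>(d_pos, d_mom, n, turns);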

  3. [The application of the computer technologies for the mathematical simulation of the ethmoidal labyrinth].

    PubMed

    Markeeva, M V; Mareev, O V; Nikolenko, V N; Mareev, G O; Danilova, T V; Fadeeva, E A; Fedorov, R V

    The objective of the present work was to study the relationship between the dimensions of the ethmoidal labyrinth and the skull in subjects differing in nose shape, by means of factorial and correlation analysis with the application of modern computer-assisted methods for the three-dimensional reconstruction of the skull. We developed an original method for computed craniometry, using an original program that made it possible to determine the standard intravital craniometric characteristics of the human skull with a high degree of accuracy, based on the analysis of 200 computed tomograms of the head. It was shown that the length of the inferior turbinate bones and of the posterior edge of the orbital plate is relevant for practically all parameters of the ethmoidal labyrinth. Also, the width of the choanae is positively related to the height of the ethmoidal labyrinth.

  4. SU-E-J-114: Web-Browser Medical Physics Applications Using HTML5 and Javascript.

    PubMed

    Bakhtiari, M

    2012-06-01

    Since 2010, there has been great attention on HTML5. Application developers and browser makers have fully embraced and supported the web of the future, and consumers have started to embrace HTML5 as well, especially as more users understand the benefits and potential it holds. Modern browsers such as Firefox, Google Chrome, and Safari offer robust support for HTML5, CSS3, and JavaScript. The idea is to introduce HTML5 to the medical physics community for open-source software development. The benefit of using HTML5 is the ability to develop portable software systems. The HTML5, CSS, and JavaScript programming languages were used to develop several applications for quality assurance in radiation therapy. The canvas element of HTML5 was used for handling and displaying images, and JavaScript was used to manipulate the data. Sample applications were developed to: 1. analyze the flatness and symmetry of radiotherapy fields in a web browser; 2. analyze Dynalog files from Varian machines; 3. visualize animated dynamic MLC files; 4. run Monte Carlo simulations; and 5. perform interactive image manipulation. The programs showed great performance and speed in uploading data and displaying results: the flatness and symmetry program and the Dynalog file analyzer each ran in a fraction of a second. One reason for this performance is that JavaScript is a lower-level language than most scientific programming packages such as Matlab; another is that JavaScript runs locally on the client-side computer rather than on a web server. HTML5 and JavaScript can be used to develop useful applications that run online or offline in different modern web browsers. The development platform can also be one of the modern web browsers, which are mostly open source (such as Firefox). © 2012 American Association of Physicists in Medicine.
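
    For reference, one common pair of definitions for the beam metrics named above, evaluated over a profile D(x), is shown below in LaTeX. The abstract does not state which variant the application implements, so these particular formulas are an assumption for illustration.

      \mathrm{Flatness} = 100\% \times \frac{D_{\max} - D_{\min}}{D_{\max} + D_{\min}},
      \qquad
      \mathrm{Symmetry} = 100\% \times \max_{x} \frac{\lvert D(x) - D(-x) \rvert}{D(0)}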

  5. ALEGRA -- A massively parallel h-adaptive code for solid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Summers, R.M.; Wong, M.K.; Boucheron, E.A.

    1997-12-31

    ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.

  6. Computers and neurosurgery.

    PubMed

    Shaikhouni, Ammar; Elder, J Bradley

    2012-11-01

    At the turn of the twentieth century, the only computational device used in neurosurgical procedures was the brain of the surgeon. Today, most neurosurgical procedures rely at least in part on the use of a computer to help perform surgeries accurately and safely. The techniques that revolutionized neurosurgery were mostly developed after the 1950s. Just before that era, the transistor was invented in the late 1940s, and the integrated circuit was invented in the late 1950s. During this time, the first automated, programmable computational machines were introduced. The rapid progress in the field of neurosurgery not only occurred hand in hand with the development of modern computers; one can also state that modern neurosurgery would not exist without computers. The focus of this article is the impact modern computers have had on the practice of neurosurgery. Neuroimaging, neuronavigation, and neuromodulation are examples of tools in the armamentarium of the modern neurosurgeon that owe each step in their evolution to progress made in computer technology. Advances in computer technology central to innovations in these fields are highlighted, with particular attention to neuroimaging. Developments over the last 10 years in areas of sensors and robotics that promise to transform the practice of neurosurgery further are discussed. Potential impacts of advances in computers related to neurosurgery in developing countries and underserved regions are also discussed. As this article illustrates, the computer, with its underlying and related technologies, is central to advances in neurosurgery over the last half century. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Application of Modern Fortran to Spacecraft Trajectory Design and Optimization

    NASA Technical Reports Server (NTRS)

    Williams, Jacob; Falck, Robert D.; Beekman, Izaak B.

    2018-01-01

    In this paper, applications of the modern Fortran programming language to the field of spacecraft trajectory optimization and design are examined. Modern object-oriented Fortran has many advantages for scientific programming, although many legacy Fortran aerospace codes have not been upgraded to use the newer standards (or have been rewritten in other languages perceived to be more modern). NASA's Copernicus spacecraft trajectory optimization program, originally a combination of Fortran 77 and Fortran 95, has attempted to keep up with modern standards and makes significant use of the new language features. Various algorithms and methods are presented from trajectory tools such as Copernicus, as well as modern Fortran open source libraries and other projects.

  8. Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models

    NASA Astrophysics Data System (ADS)

    Chu, A.

    2014-12-01

    Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software to implement two of the ETAS models described in Ogata (1998). To find the maximum-likelihood estimates (MLEs), my software provides estimates of the homogeneous background rate parameter and of the temporal and spatial parameters that govern triggering effects, by applying the Expectation-Maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data modeling purposes, the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial regions that are very long and narrow cause difficulties in optimization convergence, and flat or multi-modal log-likelihood functions raise similar issues; my program uses a robust method of presetting a parameter to overcome such non-convergence. In addition to model fitting, the software is equipped with useful tools for examining model-fitting results, for example, visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also provided, with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has the potential to be hosted online. The Java language is used for the software's core computing part, and an optional interface to the statistical package R is provided.
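
    For orientation, an ETAS conditional intensity has roughly the following form (a generic version in LaTeX, not necessarily the exact parameterization the software fits), where mu is the homogeneous background rate and the sum collects triggering contributions from past events i with times t_i, locations (x_i, y_i), and magnitudes m_i:

      \lambda(t, x, y) = \mu
        + \sum_{i :\; t_i < t} \frac{K \, e^{\alpha (m_i - m_0)}}{(t - t_i + c)^{p}}
          \; f(x - x_i,\; y - y_i)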

  9. Atomdroid: a computational chemistry tool for mobile platforms.

    PubMed

    Feldt, Jonas; Mata, Ricardo A; Dieterich, Johannes M

    2012-04-23

    We present the implementation of a new molecular mechanics program designed for use in mobile platforms, the first specifically built for these devices. The software is designed to run on Android operating systems and is compatible with several modern tablet-PCs and smartphones available in the market. It includes molecular viewer/builder capabilities with integrated routines for geometry optimizations and Monte Carlo simulations. These functionalities allow it to work as a stand-alone tool. We discuss some particular development aspects, as well as the overall feasibility of using computational chemistry software packages in mobile platforms. Benchmark calculations show that through efficient implementation techniques even hand-held devices can be used to simulate midsized systems using force fields.

  10. Applying Ada to Beech Starship avionics

    NASA Technical Reports Server (NTRS)

    Funk, David W.

    1986-01-01

    As Ada solidified in its development, it became evident that it offered advantages for avionics systems because of its support for modern software engineering principles and real-time applications. An Ada programming support environment was developed for two major avionics subsystems in the Beech Starship. The two subsystems are the electronic flight instrument displays and the flight management computer system. Both of these systems use multiple Intel 80186 microprocessors. The flight management computer provides flight planning, navigation displays, primary flight display of checklists, and other pilot advisory information. Together these systems represent nearly 80,000 lines of Ada source code and, to date, approximately 30 man-years of effort. The Beech Starship avionics systems are in flight testing.

  11. The influence of commenting validity, placement, and style on perceptions of computer code trustworthiness: A heuristic-systematic processing approach.

    PubMed

    Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August

    2018-07-01

    Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Optimizing Interactive Development of Data-Intensive Applications

    PubMed Central

    Interlandi, Matteo; Tetali, Sai Deep; Gulzar, Muhammad Ali; Noor, Joseph; Condie, Tyson; Kim, Miryung; Millstein, Todd

    2017-01-01

    Modern Data-Intensive Scalable Computing (DISC) systems are designed to process data through batch jobs that execute programs (e.g., queries) compiled from a high-level language. These programs are often developed interactively by posing ad-hoc queries over the base data until a desired result is generated. We observe that there can be significant overlap in the structure of these queries used to derive the final program. Yet, each successive execution of a slightly modified query is performed anew, which can significantly increase the development cycle. Vega is an Apache Spark framework that we have implemented for optimizing a series of similar Spark programs, likely originating from a development or exploratory data analysis session. Spark developers (e.g., data scientists) can leverage Vega to significantly reduce the amount of time it takes to re-execute a modified Spark program, reducing the overall time to market for their Big Data applications. PMID:28405637

  13. Evolution of a minimal parallel programming model

    DOE PAGES

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    2017-04-30

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
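
    The following is a minimal sketch of the self-scheduled task idea in portable C++: work units go into a shared pool, and idle workers pull the next one until the pool drains. It is a shared-memory toy for illustration only, not the ADLB library, whose actual interface is MPI-based and distributed across nodes.

      // Minimal self-scheduled task pool: workers pull work when idle.
      #include <functional>
      #include <mutex>
      #include <queue>
      #include <thread>
      #include <vector>

      class TaskPool {
          std::queue<std::function<void()>> tasks_;
          std::mutex m_;
      public:
          void put(std::function<void()> t) {        // producer adds work
              std::lock_guard<std::mutex> lk(m_);
              tasks_.push(std::move(t));
          }
          bool get(std::function<void()>& t) {       // workers self-schedule
              std::lock_guard<std::mutex> lk(m_);
              if (tasks_.empty()) return false;
              t = std::move(tasks_.front());
              tasks_.pop();
              return true;
          }
      };

      int main() {
          TaskPool pool;
          for (int i = 0; i < 100; ++i) pool.put([i] { /* work unit i */ });
          std::vector<std::thread> workers;
          for (int w = 0; w < 4; ++w)
              workers.emplace_back([&pool] {
                  std::function<void()> t;
                  while (pool.get(t)) t();   // pull until the pool is empty
              });
          for (auto& th : workers) th.join();
      }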

  14. Ah!Help: A generalized on-line help facility

    NASA Technical Reports Server (NTRS)

    Yu, Wong Nai; Mantooth, Charmiane; Soulahakil, Alex

    1986-01-01

    The idea behind the help facility discussed is relatively simple. It is made unique by the fact that it is written in Ada and uses aspects of the language that make information retrieval rapid and simple. Specifically, the DIRECT_IO facility allows for random access into the help files, so it is worth discussing the advantages of random access over sequential access. The mere fact that the program is written in Ada implies a saving in terms of lines of code. This introduces the possibility of eventually adapting the program to run at the microcomputer level, a major consideration. Additionally, since the program uses only standard Ada generics, it is portable to other systems; this is another aspect which must always be taken into consideration in writing any software package in the modern-day world of computer programming.
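
    The random-access pattern the paragraph credits to Ada's DIRECT_IO can be sketched as follows in C++ for illustration: with fixed-length records, help entry N is read by seeking directly to byte N times the record size, with no sequential scan. The file name and record size here are hypothetical.

      // Fixed-length-record random access, analogous to Ada's DIRECT_IO.
      #include <cstddef>
      #include <fstream>
      #include <iostream>
      #include <string>

      constexpr std::size_t kRecordSize = 128;   // one help entry per record

      std::string read_record(std::ifstream& f, std::size_t index) {
          std::string buf(kRecordSize, '\0');
          f.seekg(index * kRecordSize);          // jump directly to record N
          f.read(&buf[0], kRecordSize);          // no sequential scan needed
          return buf;
      }

      int main() {
          std::ifstream help("help.dat", std::ios::binary);  // hypothetical
          if (help) std::cout << read_record(help, 42) << '\n';
      }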

  15. A new state space model for the NASA/JPL 70-meter antenna servo controls

    NASA Technical Reports Server (NTRS)

    Hill, R. E.

    1987-01-01

    A control-axis-referenced model of the NASA/JPL 70-m antenna structure is combined with the dynamic equations of servo components to produce a comprehensive state variable (matrix) model of the coupled system. An interactive Fortran program for generating the linear system model and computing its salient parameters is described. Results are produced in state variable, block diagram, and factored transfer function forms to facilitate design and analysis by classical as well as modern control methods.
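
    For reference, a state variable (matrix) model of this kind has the standard linear form below, where x is the state vector, u the servo input, and y the measured outputs; the particular A, B, C, and D matrices are the ones the Fortran program assembles from the structural and servo dynamics.

      \dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t)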

  16. [Image guided and robotic treatment--the advance of cybernetics in clinical medicine].

    PubMed

    Fosse, E; Elle, O J; Samset, E; Johansen, M; Røtnes, J S; Tønnessen, T I; Edwin, B

    2000-01-10

    The introduction of advanced technology in hospitals has changed treatment practice towards more image-guided and minimally invasive procedures. Modern computer and communication technology opens the way for robot-aided and pre-programmed intervention. Several robotic systems are in clinical use today, both in microsurgery and in major cardiac and orthopedic operations. As this trend develops, professions that are new in this context, such as physicists, mathematicians, and cybernetic engineers, will become increasingly important in the treatment of patients.

  17. DoD HPC Insights Fall 2016: A publication of the Department of Defense High Performance Computing Modernization Program

    DTIC Science & Technology

    2016-09-01

    HPCMP will continue to be a key resource in solving challenging problems for the Department of Defense. Fall 2016: High-Fidelity Simulations of ... laser interactions. The group had studied plasma expansion experimentally, but this wasn’t sufficient to understand the problem. Feister adapted and ... focused on increasing the efficiency of jet turbine engines and extending aircraft flight ranges by changing the shape (articulation) of the turbine ...

  18. UI Review Results and NARAC Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher, J.; Eme, B.; Kim, S.

    2017-03-08

    This report describes the results of an inter-program design review completed February 16, 2017, during the second year of a FY16-FY18 NA-84 Technology Integration (TI) project to modernize the core software system used in DOE/NNSA's National Atmospheric Release Advisory Center (NARAC, narac.llnl.gov). This review focused on graphical user interface (GUI) frameworks. Reviewers (described in Appendix 2) were selected from multiple areas of the LLNL Computation directorate, based on their expertise in GUI and Web technologies.

  19. Department of Defense Healthcare Management System Modernization (DHMSM)

    DTIC Science & Technology

    2016-03-01

    2016 Major Automated Information System Annual Report: Department of Defense Healthcare Management System Modernization (DHMSM). Date assigned: November 16, 2015. Program name: Department of Defense Healthcare Management System Modernization (DHMSM). The acquiring DoD Component is Program Executive Office (PEO), Department of Defense (DoD) Healthcare Management Systems (DHMS ...

  20. SoAx: A generic C++ Structure of Arrays for handling particles in HPC codes

    NASA Astrophysics Data System (ADS)

    Homann, Holger; Laenen, Francois

    2018-03-01

    The numerical study of physical problems often requires integrating the dynamics of a large number of particles evolving according to a given set of equations. Particles are characterized by the information they carry, such as an identity, a position, and other properties. There are, generally speaking, two different possibilities for handling particles in high performance computing (HPC) codes. The concept of an Array of Structures (AoS) is in the spirit of the object-oriented programming (OOP) paradigm in that the particle information is implemented as a structure. Here, an object (realization of the structure) represents one particle, and a set of many particles is stored in an array. In contrast, using the concept of a Structure of Arrays (SoA), a single structure holds several arrays, each representing one property (such as the identity) of the whole set of particles. The AoS approach is often implemented in HPC codes due to its handiness and flexibility. For a class of problems, however, it is known that the performance of SoA is much better than that of AoS. We confirm this observation for our particle problem. Using a benchmark we show that on modern Intel Xeon processors the SoA implementation is typically several times faster than the AoS one. On Intel's MIC co-processors the performance gap even attains a factor of ten. The same is true for GPU computing, using both computational and multi-purpose GPUs. Combining performance and handiness, we present the library SoAx, which has optimal performance (on CPUs, MICs, and GPUs) while providing the same handiness as AoS. For this, SoAx uses modern C++ design techniques such as template metaprogramming, which allows code for user-defined heterogeneous data structures to be generated automatically.
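
    The contrast between the two layouts can be sketched in a few lines of C++ (field names are illustrative, not SoAx's interface). Summing a single property touches contiguous memory in the SoA version, which is what lets compilers vectorize the loop, while the AoS version strides over the unused fields.

      #include <cstdint>
      #include <vector>

      // Array of Structures: one object per particle.
      struct ParticleAoS { std::uint64_t id; double x, y, z; };
      using AoS = std::vector<ParticleAoS>;

      // Structure of Arrays: one contiguous array per property.
      struct SoA {
          std::vector<std::uint64_t> id;
          std::vector<double> x, y, z;
      };

      double sum_x_aos(const AoS& p) {
          double s = 0;
          for (const auto& q : p) s += q.x;   // strided: id, y, z skipped
          return s;
      }

      double sum_x_soa(const SoA& p) {
          double s = 0;
          for (double v : p.x) s += v;        // unit stride, cache-friendly
          return s;
      }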

  1. Tempest: GPU-CPU computing for high-throughput database spectral matching.

    PubMed

    Milloy, Jeffrey A; Faherty, Brendan K; Gerber, Scott A

    2012-07-06

    Modern mass spectrometers are now capable of producing hundreds of thousands of tandem (MS/MS) spectra per experiment, making the translation of these fragmentation spectra into peptide matches a common bottleneck in proteomics research. When coupled with experimental designs that enrich for post-translational modifications such as phosphorylation and/or include isotopically labeled amino acids for quantification, additional burdens are placed on this computational infrastructure by shotgun sequencing. To address this issue, we have developed a new database searching program that utilizes the massively parallel compute capabilities of a graphical processing unit (GPU) to produce peptide spectral matches in a very high throughput fashion. Our program, named Tempest, combines efficient database digestion and MS/MS spectral indexing on a CPU with fast similarity scoring on a GPU. In our implementation, the entire similarity score, including the generation of full theoretical peptide candidate fragmentation spectra and its comparison to experimental spectra, is conducted on the GPU. Although Tempest uses the classical SEQUEST XCorr score as a primary metric for evaluating similarity for spectra collected at unit resolution, we have developed a new "Accelerated Score" for MS/MS spectra collected at high resolution that is based on a computationally inexpensive dot product but exhibits scoring accuracy similar to that of the classical XCorr. In our experience, Tempest provides compute-cluster level performance in an affordable desktop computer.
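
    A binned dot product of the kind the "Accelerated Score" is described as building on can be sketched as follows in C++; this illustrates the scoring primitive only and is not Tempest's actual GPU implementation.

      // Dot-product similarity between binned intensity spectra.
      #include <algorithm>
      #include <cstddef>
      #include <vector>

      double dot_score(const std::vector<float>& experimental,
                       const std::vector<float>& theoretical) {
          double s = 0.0;
          const std::size_t n = std::min(experimental.size(),
                                         theoretical.size());
          for (std::size_t i = 0; i < n; ++i)
              s += static_cast<double>(experimental[i]) * theoretical[i];
          return s;   // higher score means greater spectral overlap
      }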

  2. SNAPSHOT: A MODERN, SUSTAINABLE HOLDUP MEASUREMENT SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowe, Nathan C; Younkin, James R; Smith, Steven E

    2016-01-01

    SNAPSHOT is a software platform designed to eventually replace Holdup Measurement System 4 (HMS 4), which is the current state of the art for acquisition and analysis of nondestructive assay measurement data for in situ nuclear materials (holdup), in support of criticality safety and material control and accounting. HMS 4 is over 10 years old and is currently unsustainable due to hardware and software incompatibilities that have arisen from advances in detector electronics, primarily updates to multi-channel analyzers (MCAs), and from changes in both computer and handheld operating systems. SNAPSHOT is a complete redesign of HMS 4 that addresses the issue of compatibility with modern MCAs and operating systems and that is designed with a flexible architecture to support long-term sustainability. It also provides an updated and more user-friendly interface and is being developed under an NQA-1 software quality assurance (SQA) program to facilitate site acceptance for safety-related applications. This paper provides an overview of the SNAPSHOT project, including details of the software development process, the SQA program, and the architecture designed to support sustainability.

  3. Basic energy sciences: Summary of accomplishments

    NASA Astrophysics Data System (ADS)

    1990-05-01

    For more than four decades, the Department of Energy, including its predecessor agencies, has supported a program of basic research in nuclear- and energy related sciences, known as Basic Energy Sciences. The purpose of the program is to explore fundamental phenomena, create scientific knowledge, and provide unique user facilities necessary for conducting basic research. Its technical interests span the range of scientific disciplines: physical and biological sciences, geological sciences, engineering, mathematics, and computer sciences. Its products and facilities are essential to technology development in many of the more applied areas of the Department's energy, science, and national defense missions. The accomplishments of Basic Energy Sciences research are numerous and significant. Not only have they contributed to Departmental missions, but have aided significantly the development of technologies which now serve modern society daily in business, industry, science, and medicine. In a series of stories, this report highlights 22 accomplishments, selected because of their particularly noteworthy contributions to modern society. A full accounting of all the accomplishments would be voluminous. Detailed documentation of the research results can be found in many thousands of articles published in peer-reviewed technical literature.

  4. Basic Energy Sciences: Summary of Accomplishments

    DOE R&D Accomplishments Database

    1990-05-01

    For more than four decades, the Department of Energy, including its predecessor agencies, has supported a program of basic research in nuclear- and energy-related sciences, known as Basic Energy Sciences. The purpose of the program is to explore fundamental phenomena, create scientific knowledge, and provide unique "user" facilities necessary for conducting basic research. Its technical interests span the range of scientific disciplines: physical and biological sciences, geological sciences, engineering, mathematics, and computer sciences. Its products and facilities are essential to technology development in many of the more applied areas of the Department's energy, science, and national defense missions. The accomplishments of Basic Energy Sciences research are numerous and significant. Not only have they contributed to Departmental missions, but have aided significantly the development of technologies which now serve modern society daily in business, industry, science, and medicine. In a series of stories, this report highlights 22 accomplishments, selected because of their particularly noteworthy contributions to modern society. A full accounting of all the accomplishments would be voluminous. Detailed documentation of the research results can be found in many thousands of articles published in peer-reviewed technical literature.

  5. Poster — Thur Eve — 74: Distributed, asynchronous, reactive dosimetric and outcomes analysis using DICOMautomaton

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Haley; BC Cancer Agency, Surrey, B.C.; BC Cancer Agency, Vancouver, B.C.

    2014-08-15

    Many have speculated about the future of computational technology in clinical radiation oncology. It has been advocated that the next generation of computational infrastructure will improve on the current generation by incorporating richer aspects of automation, more heavily and seamlessly featuring distributed and parallel computation, and providing more flexibility toward aggregate data analysis. In this report we describe how a recently created (but currently existing) analysis framework, DICOMautomaton, incorporates these aspects. DICOMautomaton supports a variety of use cases but is especially suited for dosimetric outcomes correlation analysis, investigation and comparison of radiotherapy treatment efficacy, and dose-volume computation. We describe: how it overcomes computational bottlenecks by distributing workload across a network of machines; how modern, asynchronous computational techniques are used to reduce blocking and avoid unnecessary computation; and how issues of out-of-date data are addressed using reactive programming techniques and data dependency chains. We describe the internal architecture of the software and give a detailed demonstration of how DICOMautomaton could be used to search for correlations between dosimetric and outcomes data.

  6. QMachine: commodity supercomputing in web browsers

    PubMed Central

    2014-01-01

    Background Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics’ “Big Data” from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. Results QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running “download and install” software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. Conclusions QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments. PMID:24913605

  7. Evolution of a minimal parallel programming model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.

  8. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 5: Unsteady counterrotation ducted propfan analysis. Computer program user's manual

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Delaney, Robert A.; Adamczyk, John J.; Miller, Christopher J.; Arnone, Andrea; Swanson, Charles

    1993-01-01

    The primary objective of this study was the development of a time-marching three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict steady and unsteady compressible transonic flows about ducted and unducted propfan propulsion systems employing multiple blade rows. The computer codes resulting from this study are referred to as ADPAC-AOACR (Advanced Ducted Propfan Analysis Codes-Angle of Attack Coupled Row). This report is intended to serve as a computer program user's manual for the ADPAC-AOACR codes developed under Task 5 of NASA Contract NAS3-25270, Unsteady Counterrotating Ducted Propfan Analysis. The ADPAC-AOACR program is based on a flexible multiple blocked grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. For convenience, several standard mesh block structures are described for turbomachinery applications. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Numerical calculations are compared with experimental data for several test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations employing multiple blade rows.
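
    For orientation, a four-stage Runge-Kutta time-marching scheme of the kind named above is commonly written in the low-storage form below, where R denotes the discrete residual of the finite volume scheme. The stage coefficients shown are typical values, assumed here for illustration since the excerpt does not list ADPAC's.

      U^{(0)} = U^{n}, \qquad
      U^{(k)} = U^{(0)} + \alpha_k \, \Delta t \, R\!\left(U^{(k-1)}\right),
      \quad k = 1, \dots, 4, \qquad U^{n+1} = U^{(4)},
      \qquad (\alpha_1, \alpha_2, \alpha_3, \alpha_4)
        = \left(\tfrac{1}{4}, \tfrac{1}{3}, \tfrac{1}{2}, 1\right)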

  9. Logistics Modernization Program System Procure-to-Pay Process Did Not Correct Material Weaknesses

    DTIC Science & Technology

    2012-05-29

    “Prevalidation of DOD Commercial Payments,” March 2, 2007; U.S. Army Audit Agency Report No. A-2007-0205-FFM, “Logistics Modernization Program ...”; U.S. Army Audit Agency Report No. A-2007-0163-FFM, “FY 03–FY 05 Obligations Recorded in the Logistics Modernization Program,” July 27, 2007; U.S. Army Audit Agency Report No. ...

  10. Computing at the speed limit (supercomputers)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhard, R.

    1982-07-01

    The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers — about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.

  11. Improvements to the adaptive maneuvering logic program

    NASA Technical Reports Server (NTRS)

    Burgin, George H.

    1986-01-01

    The Adaptive Maneuvering Logic (AML) computer program simulates close-in, one-on-one air-to-air combat between two fighter aircraft. Three important improvements are described. First, the previously available versions of AML were examined for their suitability as a baseline program. The selected program was then revised to eliminate some programming bugs which were uncovered over the years; a listing of this baseline program is included. Second, the equations governing the motion of the aircraft were completely revised. This resulted in a model with substantially higher fidelity than the original equations of motion provided. It also completely eliminated the over-the-top problem, which occurred in the older versions when the AML-driven aircraft attempted a vertical or near-vertical loop. Third, the requirements for a versatile, generic, yet realistic aircraft model were studied and implemented in the program. The report contains detailed tables which configure the generic aircraft as either a modern high-performance aircraft, an older high-performance aircraft, or a previous-generation jet fighter.

  12. The community FabLab platform: applications and implications in biomedical engineering.

    PubMed

    Stephenson, Makeda K; Dow, Douglas E

    2014-01-01

    Skill development in science, technology, engineering and math (STEM) education presents one of the most formidable challenges of modern society. The Community FabLab platform offers a viable solution. Each FabLab contains a suite of modern computer numerical control (CNC) equipment, electronics and computing hardware, and design, programming, computer aided design (CAD), and computer aided machining (CAM) software. FabLabs are community and educational resources and are open to the public. Development of STEM-based workforce skills such as digital fabrication and advanced manufacturing can be enhanced using this platform. Particularly notable is the potential of the FabLab platform in STEM education. The active learning environment engages and supports a diversity of learners, while the iterative learning that is supported by the FabLab rapid prototyping platform facilitates depth of understanding, creativity, innovation, and mastery. The product- and project-based learning that occurs in FabLabs develops in the student a personal sense of accomplishment, self-awareness, and command of the material and technology. This helps build the interest and confidence necessary to excel in STEM and throughout life. Finally, the introduction and use of relevant technologies at every stage of the education process ensures technical familiarity and the broad knowledge base needed for work in STEM-based fields. Biomedical engineering education strives to cultivate broad technical adeptness, creativity, interdisciplinary thought, and an ability to form deep conceptual understanding of complex systems. The FabLab platform is well designed to enhance biomedical engineering education.

  13. ECG R-R peak detection on mobile phones.

    PubMed

    Sufi, F; Fang, Q; Cosic, I

    2007-01-01

    Mobile phones have become an integral part of modern life. Due to ever-increasing processing power, the mobile phone is rapidly expanding its arena from a sole device of telecommunication to organizer, calculator, gaming device, web browser, music player, audio/video recording device, navigator, etc. The processing power of modern mobile phones has been put to many innovative uses. In this paper, we propose the utilization of mobile phones for monitoring and analysis of biosignals, exploiting the computation performed inside the mobile phone's processor for healthcare delivery. We performed a literature review on R-R interval detection from ECG and selected a few PC-based algorithms. Three of those existing R-R interval detection algorithms were then programmed on the Java platform. Performance monitoring and comparison studies were carried out on three different mobile devices to determine their applicability in a real-time telemonitoring scenario.
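
    The flavor of an R-R detector can be conveyed by the naive threshold sketch below, written in C++ rather than the paper's Java for consistency with the other examples here. Real algorithms of the kind the authors ported (e.g., Pan-Tompkins-style detectors) add band-pass filtering and adaptive thresholds; this toy version only reports local maxima above a fixed threshold.

      // Naive fixed-threshold R-peak detector (illustration only).
      #include <cstddef>
      #include <vector>

      std::vector<std::size_t> find_r_peaks(const std::vector<double>& ecg,
                                            double threshold,
                                            std::size_t refractory) {
          std::vector<std::size_t> peaks;
          std::size_t last = 0;
          for (std::size_t i = 1; i + 1 < ecg.size(); ++i) {
              bool local_max = ecg[i] > ecg[i - 1] && ecg[i] >= ecg[i + 1];
              bool armed = peaks.empty() || i - last >= refractory;
              if (local_max && ecg[i] > threshold && armed) {
                  peaks.push_back(i);   // R peak; R-R interval = index gaps
                  last = i;
              }
          }
          return peaks;
      }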

  14. The role of modern control theory in the design of controls for aircraft turbine engines

    NASA Technical Reports Server (NTRS)

    Zeller, J.; Lehtinen, B.; Merrill, W.

    1982-01-01

    The development, applications, and current research in modern control theory (MCT) are reviewed, noting the importance of MCT for fuel-efficient operation of turbines with variable inlet guide vanes, compressor stators, and exhaust nozzle area. The evolution of multivariable propulsion control design is examined, noting its basis in a matrix formulation of the differential equations defining the process, leading to state space formulations. Reports and papers which appeared from 1970-1982 and which dealt with problems in MCT applications to turbine engine control design are outlined, including works on linear quadratic regulator methods; frequency domain methods; identification, estimation, and model reduction; detection, isolation, and accommodation; and state space control, adaptive control, and optimization approaches. Finally, NASA programs in frequency domain design, sensor failure detection, computer-aided control design, and plant modeling are explored.

  15. Light Water Reactor Sustainability Program A Reference Plan for Control Room Modernization: Planning and Analysis Phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacques Hugo; Ronald Boring; Lew Hanes

    2013-09-01

    The U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) program is collaborating with a U.S. nuclear utility to bring about a systematic fleet-wide control room modernization. To facilitate this upgrade, a new distributed control system (DCS) is being introduced into the control rooms of these plants. The DCS will upgrade the legacy plant process computer and emergency response facility information system. In addition, the DCS will replace an existing analog turbine control system with a display-based system. With technology upgrades comes the opportunity to improve the overall human-system interaction between the operators and the control room. To optimize operator performance, the LWRS Control Room Modernization research team followed a human-centered approach published by the U.S. Nuclear Regulatory Commission. NUREG-0711, Rev. 3, Human Factors Engineering Program Review Model (O’Hara et al., 2012), prescribes four phases for human factors engineering. This report provides examples of the first phase, Planning and Analysis. The three elements of Planning and Analysis in NUREG-0711 that are most crucial to initiating control room upgrades are: • Operating Experience Review: Identifies opportunities for improvement in the existing system and provides lessons learned from implemented systems. • Function Analysis and Allocation: Identifies which functions at the plant may be optimally handled by the DCS vs. the operators. • Task Analysis: Identifies how tasks might be optimized for the operators. Each of these elements is covered in a separate chapter. Examples are drawn from workshops with reactor operators that were conducted at the LWRS Human System Simulation Laboratory (HSSL) and at the respective plants. The findings in this report represent generalized accounts of more detailed proprietary reports produced for the utility for each plant. The goal of this LWRS report is to disseminate the technique and provide examples sufficient to serve as a template for other utilities’ projects for control room modernization.

  16. Optimizing Engineering Tools Using Modern Ground Architectures

    DTIC Science & Technology

    2017-12-01

    ... scientific community. Traditional computing architectures were not capable of processing the data efficiently, or in some cases, could not process the ... This thesis investigates how these modern computing architectures could be leveraged by industry and academia to improve the performance and capabilities of ...

  17. Parametric (On-Design) Cycle Analysis for a Separate-Exhaust Turbofan Engine With Interstage Turbine Burner

    NASA Technical Reports Server (NTRS)

    Liew, K. H.; Urip, E.; Yang, S. L.; Siow, Y. K.; Marek, C. J.

    2005-01-01

    Today's modern aircraft are based on air-breathing jet propulsion systems, which use moving fluids to transform the energy carried by the fluid into power. Throughout aero-vehicle evolution, improvements have been made to engine efficiency and pollutant reduction. The major advantages associated with the addition of an interstage turbine burner (ITB) are an increase in thermal efficiency and a reduction in NOx emission: a lower temperature peak in the main combustor results in lower thermal NOx emission and a lower amount of cooling air required. This study focuses on a parametric (on-design) cycle analysis of a dual-spool, separate-flow turbofan engine with an ITB. The ITB, a relatively new concept in modern jet engine propulsion, serves as a secondary combustor and is located between the high- and the low-pressure turbine, i.e., in the transition duct. The objective of this study is to use design parameters, such as flight Mach number, compressor pressure ratio, fan pressure ratio, fan bypass ratio, and high-pressure turbine inlet temperature, to obtain engine performance parameters, such as specific thrust and thrust specific fuel consumption. Results of this study can provide guidance in identifying the performance characteristics of various engine components, which can then be used to develop, analyze, integrate, and optimize the system performance of turbofan engines with an ITB. A Visual Basic program, Microsoft Excel macrocode, and Microsoft Excel neuron code are used within Microsoft Excel to plot engine performance versus engine design parameters; the program computes and plots the data sequentially without forcing users to open other plotting programs. A user's manual on how to use the program is also included in this report. Furthermore, this stand-alone program is written in conjunction with an off-design program, which is an extension of this study; the computed result of a selected design-point engine is exported to an engine reference data file that is required in the off-design calculation.
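
    The two performance parameters named above have the standard textbook definitions below (shown in LaTeX); these are not equations quoted from the report. Here F is the uninstalled thrust, m_dot_0 the inlet mass flow, m_dot_f the total fuel flow, and f_overall the total fuel-air ratio supplied by the main burner and the ITB together.

      \text{specific thrust} = \frac{F}{\dot{m}_0}, \qquad
      \text{TSFC:}\quad S = \frac{\dot{m}_f}{F}
        = \frac{f_{\text{overall}}}{F / \dot{m}_0}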

  18. New technologies applied to family history: a particular case of southern Europe in the eighteenth century.

    PubMed

    García, Manuel Pérez

    2011-01-01

    In this article, the author explains how the support of new technologies has helped historians to develop their research over the last few decades. The author summarizes the application of both database and genealogical programs to southern European family studies as a methodological tool: first establishing the importance of creating databases with the File Maker program, and then explaining the value of genealogical programs such as Genopro and Heredis. The main aim of this article is to detail the use of these new technologies as applied to a particular study of southern Europe, specifically the Crown of Castile, during the late modern period. The use of these computer programs has helped to develop the field of social sciences and family history, in particular social history, during the last decade.

  19. Resources and Approaches for Teaching Quantitative and Computational Skills in the Geosciences and Allied Fields

    NASA Astrophysics Data System (ADS)

    Orr, C. H.; Mcfadden, R. R.; Manduca, C. A.; Kempler, L. A.

    2016-12-01

    Teaching with data, simulations, and models in the geosciences can increase many facets of student success in the classroom and in the workforce. Teaching undergraduates about programming and improving students' quantitative and computational skills expands their perception of Geoscience beyond field-based studies. Processing data and developing quantitative models are critically important for Geoscience students: they need to be able to perform calculations, analyze data, create numerical models and visualizations, and more deeply understand complex systems, all essential aspects of modern science. These skills require students to have comfort and skill with languages and tools such as MATLAB. To achieve comfort and skill, computational and quantitative thinking must build over a 4-year degree program across courses and disciplines. However, in courses focused on Geoscience content it can be challenging to get students comfortable with using computational methods to answer Geoscience questions. To help bridge this gap, we have partnered with MathWorks to develop two workshops focused on collecting and developing strategies and resources to help faculty teach students to incorporate data, simulations, and models into the curriculum at the course and program levels. We brought together faculty members from the sciences, including Geoscience and allied fields, who teach computation and quantitative thinking skills using MATLAB, to build a resource collection for teaching. These materials and the outcomes of the workshops are freely available on our website. The workshop outcomes include a collection of teaching activities, essays, and course descriptions that can help faculty incorporate computational skills at the course or program level. The teaching activities include in-class assignments, problem sets, labs, projects, and toolboxes, ranging from programming assignments to creating and using models. The outcomes also include workshop syntheses that highlight best practices, a set of webpages to support teaching with software such as MATLAB, and an interest group actively discussing these issues in Geoscience and allied fields. Learn more and view the resources at http://serc.carleton.edu/matlab_computation2016/index.html

  20. Concepts and algorithms for terminal-area traffic management

    NASA Technical Reports Server (NTRS)

    Erzberger, H.; Chapel, J. D.

    1984-01-01

    The nation's air-traffic-control system is the subject of an extensive modernization program, including the planned introduction of advanced automation techniques. This paper gives an overview of a concept for automating terminal-area traffic management. Four-dimensional (4D) guidance techniques, which play an essential role in the automated system, are reviewed. One technique, intended for on-board computer implementation, is based on application of optimal control theory. The second technique is a simplified approach to 4D guidance intended for ground computer implementation. It generates advisory messages to help the controller maintain scheduled landing times of aircraft not equipped with on-board 4D guidance systems. An operational system for the second technique, recently evaluated in a simulation, is also described.

  1. Visualization and Interaction in Research, Teaching, and Scientific Communication

    NASA Astrophysics Data System (ADS)

    Ammon, C. J.

    2017-12-01

    Modern computing provides many tools for exploring observations, numerical calculations, and theoretical relationships. The number of options is, in fact, almost overwhelming. But the choices give those with modest programming skills opportunities to create unique views of scientific information and to develop deeper insights into their data, their computations, and the underlying theoretical data-model relationships. I present simple examples of using animation and human-computer interaction to explore scientific data and scientific-analysis approaches, and illustrate how a little programming ability can free scientists from the constraints of existing tools and facilitate a deeper appreciation of data and models. I present examples from a suite of programming languages ranging from C to JavaScript, including the Wolfram Language. JavaScript is valuable for sharing tools and insight (hopefully) with others because it is integrated into one of the most powerful communication tools in human history, the web browser. Although too much of that power is often spent on distracting advertisements, the underlying computation and graphics engines are efficient, flexible, and almost universally available on desktop and mobile computing platforms. Many are working to fulfill the browser's potential to become the most effective tool for interactive study. Open-source frameworks for visualizing everything from algorithms to data are available, but advance rapidly. One strategy for dealing with swiftly changing tools is to adopt common, open data formats that are easily adapted (often by framework or tool developers). I illustrate the use of animation and interaction in research and teaching with examples from earthquake seismology.

  2. The Center for Computational Biology: resources, achievements, and challenges

    PubMed Central

    Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott

    2011-01-01

    The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators includes the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains. PMID:22081221

  3. The Center for Computational Biology: resources, achievements, and challenges.

    PubMed

    Toga, Arthur W; Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott

    2012-01-01

    The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators includes the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains.

  4. Industrial Technology Modernization Program. Project 28. Automation of Receiving, Receiving Inspection and Stores

    DTIC Science & Technology

    1987-06-15

    GENERAL DYNAMICS, FORT WORTH DIVISION, INDUSTRIAL TECHNOLOGY MODERNIZATION PROGRAM, Phase 2 Final Project Report. PROJECT 28: AUTOMATION OF RECEIVING, RECEIVING INSPECTION AND STORES. The report's contents include project assumptions, preliminary/final design and findings, system/equipment/machining specifications, and vendor/industry analysis.

  5. Discovering H-bonding rules in crystals with inductive logic programming.

    PubMed

    Ando, Howard Y; Dehaspe, Luc; Luyten, Walter; Van Craenenbroeck, Elke; Vandecasteele, Henk; Van Meervelt, Luc

    2006-01-01

    In the domain of crystal engineering, various schemes have been proposed for the classification of hydrogen bonding (H-bonding) patterns observed in 3D crystal structures. In this study, the aim is to complement these schemes with rules that predict H-bonding in crystals from 2D structural information only. Modern computational power and the advances in inductive logic programming (ILP) can now provide computational chemistry with the opportunity for extracting structure-specific rules from large databases that can be incorporated into expert systems. ILP technology is here applied to H-bonding in crystals to develop a self-extracting expert system utilizing data in the Cambridge Structural Database of small molecule crystal structures. A clear increase in performance was observed when the ILP system DMax was allowed to refer to the local structural environment of the possible H-bond donor/acceptor pairs. This ability distinguishes ILP from more traditional approaches that build rules on the basis of global molecular properties.

  6. A remote sensing computer-assisted learning tool developed using the unified modeling language

    NASA Astrophysics Data System (ADS)

    Friedrich, J.; Karslioglu, M. O.

    The goal of this work has been to create an easy-to-use and simple-to-make learning tool for remote sensing at an introductory level. Many students struggle to comprehend even the basics of digital images, image processing, and image arithmetic, for example. Because professional programs are generally too complex and overwhelming for beginners and often not tailored to the specific needs of a course regarding functionality, a computer-assisted learning (CAL) program was developed based on the unified modeling language (UML), the present standard for object-oriented (OO) system development. A major advantage of this approach is an easier transition from modeling to coding of such an application, if modern UML tools are being used. After introducing the constructed UML model, its implementation is briefly described, followed by a series of learning exercises. They illustrate how the resulting CAL tool supports students taking an introductory course in remote sensing at the author's institution.

  7. PandaEPL: a library for programming spatial navigation experiments.

    PubMed

    Solway, Alec; Miller, Jonathan F; Kahana, Michael J

    2013-12-01

    Recent advances in neuroimaging and neural recording techniques have enabled researchers to make significant progress in understanding the neural mechanisms underlying human spatial navigation. Because these techniques generally require participants to remain stationary, computer-generated virtual environments are used. We introduce PandaEPL, a programming library for the Python language designed to simplify the creation of computer-controlled spatial-navigation experiments. PandaEPL is built on top of Panda3D, a modern open-source game engine. It allows users to construct three-dimensional environments that participants can navigate from a first-person perspective. Sound playback and recording, as well as joystick support, are provided through additional optional libraries. PandaEPL also handles many tasks common to all cognitive experiments, including managing configuration files, logging all internal and participant-generated events, and keeping track of the experiment state. We describe how PandaEPL compares with other software for building spatial-navigation experiments and walk the reader through the process of creating a fully functional experiment.
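
    PandaEPL's own API is not reproduced here, but the Panda3D engine it builds on can be sketched in a few lines of Python; the model path and camera motion below are illustrative only, not part of PandaEPL.

        # Minimal Panda3D scene of the kind PandaEPL wraps for navigation
        # experiments; model path and camera motion are illustrative.
        from direct.showbase.ShowBase import ShowBase
        from direct.task import Task

        class MinimalEnvironment(ShowBase):
            def __init__(self):
                ShowBase.__init__(self)
                self.disableMouse()                    # take manual camera control
                scene = self.loader.loadModel("models/environment")
                scene.reparentTo(self.render)          # stock model from the SDK
                self.taskMgr.add(self.move_camera, "move-camera")

            def move_camera(self, task):
                # Dolly slowly forward, as a first-person navigation loop would.
                self.camera.setPos(0, -20 + task.time, 3)
                return Task.cont

        MinimalEnvironment().run()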

  8. PandaEPL: A library for programming spatial navigation experiments

    PubMed Central

    Solway, Alec; Miller, Jonathan F.

    2013-01-01

    Recent advances in neuroimaging and neural recording techniques have enabled researchers to make significant progress in understanding the neural mechanisms underlying human spatial navigation. Because these techniques generally require participants to remain stationary, computer-generated virtual environments are used. We introduce PandaEPL, a programming library for the Python language designed to simplify the creation of computer-controlled spatial-navigation experiments. PandaEPL is built on top of Panda3D, a modern open-source game engine. It allows users to construct three-dimensional environments that participants can navigate from a first-person perspective. Sound playback and recording, as well as joystick support, are provided through additional optional libraries. PandaEPL also handles many tasks common to all cognitive experiments, including managing configuration files, logging all internal and participant-generated events, and keeping track of the experiment state. We describe how PandaEPL compares with other software for building spatial-navigation experiments and walk the reader through the process of creating a fully functional experiment. PMID:23549683

  9. Design and implementation of a cloud based lithography illumination pupil processing application

    NASA Astrophysics Data System (ADS)

    Zhang, Youbao; Ma, Xinghua; Zhu, Jing; Zhang, Fang; Huang, Huijie

    2017-02-01

    Pupil parameters are important for evaluating the quality of a lithography illumination system. In this paper, a cloud-based, full-featured pupil processing application is implemented. A web browser is used for the UI (User Interface), the websocket protocol and JSON format are used for the communication between the client and the server, and the computing part is implemented on the server side, where the application integrates a variety of high-quality professional libraries, such as the image-processing libraries libvips and ImageMagick and the automatic reporting system LaTeX. The cloud-based framework takes advantage of the server's superior computing power and rich software collections, and the program can run anywhere there is a modern browser thanks to its web UI design. Compared with the traditional software model of purchased, licensed, shipped, downloaded, installed, maintained, and upgraded, the new cloud-based approach requires no installation and is easy to use and maintain, opening up a new way of working. Cloud-based applications may well be the future of software development.
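
    A hedged sketch of the client-server pattern described above, using Python's third-party websockets package; the message fields and the stand-in computation are invented, not the paper's.

        # Server-side sketch: accept a JSON request over a websocket and
        # return a JSON result computed on the server. Field names invented.
        import asyncio
        import json

        import websockets  # third-party package: pip install websockets

        async def handle(websocket):
            async for message in websocket:
                request = json.loads(message)
                samples = request.get("pupil_samples", [])
                # Stand-in for the real pupil-parameter computation.
                result = {"mean": sum(samples) / len(samples) if samples else None}
                await websocket.send(json.dumps(result))

        async def main():
            async with websockets.serve(handle, "localhost", 8765):
                await asyncio.Future()  # serve until cancelled

        asyncio.run(main())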

  10. Adapting bioinformatics curricula for big data.

    PubMed

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. © The Author 2015. Published by Oxford University Press.

  11. Adapting bioinformatics curricula for big data

    PubMed Central

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  12. SimpleITK Image-Analysis Notebooks: a Collaborative Environment for Education and Reproducible Research.

    PubMed

    Yaniv, Ziv; Lowekamp, Bradley C; Johnson, Hans J; Beare, Richard

    2018-06-01

    Modern scientific endeavors increasingly require team collaborations to construct and interpret complex computational workflows. This work describes an image-analysis environment that supports the use of computational tools that facilitate reproducible research and support scientists with varying levels of software development skills. The Jupyter notebook web application is the basis of an environment that enables flexible, well-documented, and reproducible workflows via literate programming. Image-analysis software development is made accessible to scientists with varying levels of programming experience via the use of the SimpleITK toolkit, a simplified interface to the Insight Segmentation and Registration Toolkit. Additional features of the development environment include user friendly data sharing using online data repositories and a testing framework that facilitates code maintenance. SimpleITK provides a large number of examples illustrating educational and research-oriented image analysis workflows for free download from GitHub under an Apache 2.0 license: github.com/InsightSoftwareConsortium/SimpleITK-Notebooks .
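
    A minimal notebook-style cell in the spirit of the environment described, assuming SimpleITK is installed; the file name is a placeholder.

        # Read an image, smooth it, and inspect it with numpy, as one
        # might in a SimpleITK notebook. "example.nii.gz" is a placeholder.
        import SimpleITK as sitk

        image = sitk.ReadImage("example.nii.gz")
        smoothed = sitk.SmoothingRecursiveGaussian(image, sigma=2.0)

        array = sitk.GetArrayFromImage(smoothed)   # hand off to numpy
        print(array.shape, array.mean())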

  13. dREL: a relational expression language for dictionary methods.

    PubMed

    Spadaccini, Nick; Castleden, Ian R; du Boulay, Doug; Hall, Sydney R

    2012-08-27

    The provision of precise metadata is an important but largely underrated challenge for modern science [Nature 2009, 461, 145]. We describe here a dictionary methods language, dREL, that has been designed to enable complex data relationships to be expressed as formulaic scripts in data dictionaries written in DDLm [Spadaccini and Hall J. Chem. Inf. Model. 2012 doi:10.1021/ci300075z]. dREL describes data relationships in a simple but powerful canonical form that is easy to read and understand and can be executed computationally to evaluate or validate data. The execution of dREL expressions is not a substitute for traditional scientific computation; its purpose is to provide precise data-dependency information for domain-specific definitions and a means for cross-validating data. Some scientific fields apply conventional programming languages to methods scripts, but these tend to inhibit both dictionary development and accessibility. dREL removes the programming barrier and encourages the production of the metadata needed for seamless data archiving and exchange in science.

  14. Parallel and Portable Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.

    1997-08-01

    We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
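
    The generation-based power iteration at the heart of a Monte Carlo k-eigenvalue solver can be sketched in a few lines of Python (illustrative only, not MC++); in the infinite homogeneous medium below, the estimate should settle near nu * sigma_f / sigma_a, and the cross sections are invented.

        # Toy k-eigenvalue power iteration; cross sections are invented.
        import random

        SIGMA_A, SIGMA_F, NU = 1.0, 0.4, 2.5   # absorption, fission, neutrons/fission

        def run_generation(n_neutrons):
            """Track one fission generation; return the number of offspring."""
            offspring = 0
            for _ in range(n_neutrons):
                # Each history ends in absorption; a fraction of absorptions
                # are fissions emitting NU new neutrons on average.
                if random.random() < SIGMA_F / SIGMA_A:
                    offspring += int(NU) + (random.random() < NU - int(NU))
            return offspring

        population = 10000
        for generation in range(20):
            births = run_generation(population)
            print(f"generation {generation:2d}: k = {births / population:.4f}")
            population = births    # no population control in this sketch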

  15. Knowledge-based computer systems for radiotherapy planning.

    PubMed

    Kalet, I J; Paluszynski, W

    1990-08-01

    Radiation therapy is one of the first areas of clinical medicine to utilize computers in support of routine clinical decision making. The role of the computer has evolved from simple dose calculations to elaborate interactive graphic three-dimensional simulations. These simulations can combine external irradiation from megavoltage photons, electrons, and particle beams with interstitial and intracavitary sources. With the flexibility and power of modern radiotherapy equipment and computer programs that can simulate anything the machinery can do, we now face the challenge of utilizing this capability to design more effective radiation treatments. How can we manage the increased complexity of sophisticated treatment planning? A promising approach will be to use artificial intelligence techniques to systematize our present knowledge about the design of treatment plans, and to provide a framework for developing new treatment strategies. Far from replacing the physician, physicist, or dosimetrist, artificial intelligence-based software tools can assist the treatment planning team in producing more powerful and effective treatment plans. Research in progress using knowledge-based (AI) programming in treatment planning has already indicated the usefulness of such concepts as rule-based reasoning, hierarchical organization of knowledge, and reasoning from prototypes. Problems to be solved include how to handle continuously varying parameters and how to evaluate plans in order to direct improvements.
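
    As a toy illustration of the rule-based reasoning mentioned above, the following Python forward-chaining sketch derives conclusions from facts; the rules and facts are invented for illustration and carry no clinical meaning.

        # Minimal forward-chaining rule engine; rules and facts are invented.
        RULES = [
            ({"tumor_is_superficial"}, "prefer_electron_beam"),
            ({"prefer_electron_beam", "organ_at_risk_behind_target"},
             "limit_beam_energy"),
        ]

        def forward_chain(facts):
            """Apply rules until no new conclusions can be derived."""
            derived = set(facts)
            changed = True
            while changed:
                changed = False
                for premises, conclusion in RULES:
                    if premises <= derived and conclusion not in derived:
                        derived.add(conclusion)
                        changed = True
            return derived

        print(forward_chain({"tumor_is_superficial", "organ_at_risk_behind_target"}))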

  16. Accessible Earth: Enhancing diversity in the Geosciences through accessible course design

    NASA Astrophysics Data System (ADS)

    Bennett, R. A.; Lamb, D. A.

    2017-12-01

    The tradition of field-based instruction in the geoscience curriculum, which culminates in a capstone geological field camp, presents an insurmountable barrier to many disabled students who might otherwise choose to pursue geoscience careers. There is a widespread perception that success as a practicing geoscientist requires direct access to outcrops and vantage points available only to those able to traverse inaccessible terrain. Yet many modern geoscience activities are based on remotely sensed geophysical data, data analysis, and computation that take place entirely from within the laboratory. To challenge the perception of geoscience as a career option only for the non-disabled, we have created the capstone Accessible Earth Study Abroad Program, an alternative to geologic field camp for all students, with a focus on modern geophysical observation systems, computational thinking, data science, and professional development. In this presentation, we will review common pedagogical approaches in geosciences and current efforts to make the field more inclusive. We will review curricular access and inclusivity relative to a wide range of learners and provide examples of accessible course design based on our experiences in teaching a study abroad course in central Italy, and our plans for ongoing assessment, refinement, and dissemination of the effectiveness of our efforts.

  17. Modern & Classical Languages: K-12 Program EValuation 1988-89.

    ERIC Educational Resources Information Center

    Martinez, Margaret Perea

    This evaluation of the modern and classical languages programs, K-12, in the Albuquerque (New Mexico) public school system provides general information on the program's history, philosophy, recognition, curriculum development, teachers, and activities. Specific information is offered on the different program components, namely, the elementary…

  18. Vectorization with SIMD extensions speeds up reconstruction in electron tomography.

    PubMed

    Agulleiro, J I; Garzón, E M; García, I; Fernández, J J

    2010-06-01

    Electron tomography allows structural studies of cellular structures at molecular detail. Large 3D reconstructions are needed to meet the resolution requirements. The processing time to compute these large volumes may be considerable, and so high performance computing techniques have traditionally been used. This work presents a vector approach to tomographic reconstruction that relies on the exploitation of the SIMD extensions available in modern processors, in combination with other single-processor optimization techniques. This approach succeeds in producing full resolution tomograms with an important reduction in processing time, as evaluated with the most common reconstruction algorithms, namely WBP and SIRT. The main advantage stems from the fact that this approach runs on standard computers without the need for specialized hardware, which facilitates the development, use and management of programs. Future trends in processor design open excellent opportunities for vector processing with processors' SIMD extensions in the field of 3D electron microscopy.
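
    The paper's point in miniature: the same weighted accumulation written as a scalar loop and as one vector expression. numpy stands in here for the hand-tuned SIMD intrinsics the authors use; the data are invented.

        # Scalar vs. vector form of a weighted accumulation; data invented.
        import numpy as np

        weights = np.random.rand(512)
        projection = np.random.rand(512)
        slice_row = np.zeros(512)

        # Scalar form: one multiply-accumulate per loop iteration.
        for i in range(512):
            slice_row[i] += weights[i] * projection[i]

        # Vector form: the whole row at once, letting the backend process
        # many elements per instruction.
        slice_row += weights * projection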

  19. Building a Data Science capability for USGS water research and communication

    NASA Astrophysics Data System (ADS)

    Appling, A.; Read, E. K.

    2015-12-01

    Interpreting and communicating water issues in an era of exponentially increasing information requires a blend of domain expertise, computational proficiency, and communication skills. The USGS Office of Water Information has established a Data Science team to meet these needs, providing challenging careers for diverse domain scientists and innovators in the fields of information technology and data visualization. Here, we detail the experience of building a Data Science capability as a bridging element between traditional water resources analyses and modern computing tools and data management techniques. This approach includes four major components: 1) building reusable research tools, 2) documenting data-intensive research approaches in peer reviewed journals, 3) communicating complex water resources issues with interactive web visualizations, and 4) offering training programs for our peers in scientific computing. These components collectively improve the efficiency, transparency, and reproducibility of USGS data analyses and scientific workflows.

  20. Current Lewis Turbomachinery Research: Building on our Legacy of Excellence

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A.

    1997-01-01

    This Wu Chang-Hua lecture is concerned with the development of analysis and computational capability for turbomachinery flows which is based on detailed flow field physics. A brief review of the work of Professor Wu is presented as well as a summary of the current NASA aeropropulsion programs. Two major areas of research are described in order to determine our predictive capabilities using modern day computational tools evolved from the work of Professor Wu. In one of these areas, namely transonic rotor flow, it is demonstrated that a high level of accuracy is obtainable provided sufficient geometric detail is simulated. In the second case, namely turbine heat transfer, our capability is lacking for rotating blade rows and experimental correlations will provide needed information in the near term. It is believed that continuing progress will allow us to realize the full computational potential and its impact on design time and cost.

  1. GPU accelerated implementation of NCI calculations using promolecular density.

    PubMed

    Rubez, Gaëtan; Etancelin, Jean-Matthieu; Vigouroux, Xavier; Krajecki, Michael; Boisson, Jean-Charles; Hénon, Eric

    2017-05-30

    The NCI approach is a modern tool for revealing noncovalent chemical interactions. It is particularly attractive for describing ligand-protein binding. A custom implementation of NCI using promolecular density is presented. It is designed to leverage the computational power of NVIDIA graphics processing unit (GPU) accelerators through the CUDA programming model. The performance of three code versions is examined on a test set of 144 systems. NCI calculations are particularly well suited to the GPU architecture, which drastically reduces the computational time. On a single compute node, the dual-GPU version leads to a 39-fold improvement for the biggest instance compared to the optimal OpenMP parallel run (C code, icc compiler) with 16 CPU cores. Energy consumption measurements carried out on both CPU and GPU NCI tests show that the GPU approach provides substantial energy savings. © 2017 Wiley Periodicals, Inc.
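
    The grid quantity at the heart of NCI is the reduced density gradient, s = |grad rho| / (2 (3 pi^2)^(1/3) rho^(4/3)). A numpy sketch on a toy single-Gaussian promolecular density (not the paper's CUDA kernels) looks like this:

        # Reduced density gradient on a toy promolecular density.
        import numpy as np

        axis = np.linspace(-3.0, 3.0, 64)
        x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
        rho = np.exp(-(x**2 + y**2 + z**2))        # toy one-Gaussian density

        h = axis[1] - axis[0]
        gx, gy, gz = np.gradient(rho, h)           # numerical gradient
        grad_norm = np.sqrt(gx**2 + gy**2 + gz**2)

        s = grad_norm / (2.0 * (3.0 * np.pi**2)**(1.0 / 3.0) * rho**(4.0 / 3.0))
        print("reduced gradient range:", float(s.min()), float(s.max()))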

  2. Modern Computational Techniques for the HMMER Sequence Analysis

    PubMed Central

    2013-01-01

    This paper focuses on the latest research and critical reviews of modern computing architectures and software- and hardware-accelerated algorithms for bioinformatics data analysis, with an emphasis on one of the most important sequence analysis applications—hidden Markov models (HMM). We show a detailed performance comparison of sequence analysis tools on various computing platforms recently developed in the bioinformatics community. The characteristics of sequence analysis, such as its data- and compute-intensive nature, make it very attractive to optimize and parallelize using both traditional software approaches and innovative hardware acceleration technologies. PMID:25937944
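
    At the core of HMM-based sequence analysis sits the Viterbi dynamic program; a compact plain-Python version with a toy two-state model (not HMMER's profile HMMs; all parameters invented) is sketched below.

        # Toy two-state Viterbi decoder; the model parameters are invented.
        import numpy as np

        states = ["match", "background"]
        start = np.log([0.5, 0.5])
        trans = np.log([[0.9, 0.1],
                        [0.1, 0.9]])
        emit = {"A": np.log([0.4, 0.25]), "C": np.log([0.1, 0.25]),
                "G": np.log([0.1, 0.25]), "T": np.log([0.4, 0.25])}

        def viterbi(sequence):
            n, k = len(sequence), len(states)
            score = np.zeros((n, k))
            back = np.zeros((n, k), dtype=int)
            score[0] = start + emit[sequence[0]]
            for t in range(1, n):
                for j in range(k):
                    cand = score[t - 1] + trans[:, j]
                    back[t, j] = int(np.argmax(cand))
                    score[t, j] = cand[back[t, j]] + emit[sequence[t]][j]
            # Trace the best path backwards from the final column.
            path = [int(np.argmax(score[-1]))]
            for t in range(n - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return [states[i] for i in reversed(path)]

        print(viterbi("AATTGC"))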

  3. Modelling decision-making by pilots

    NASA Technical Reports Server (NTRS)

    Patrick, Nicholas J. M.

    1993-01-01

    Our scientific goal is to understand the process of human decision-making. Specifically, we seek a model of human decision-making in piloting modern commercial aircraft that prescribes optimal behavior and against which we can measure human sub-optimality. This model should help us understand such diverse aspects of piloting as strategic decision-making and the implicit decisions involved in attention allocation. Our engineering goal is to provide design specifications for (1) better computer-based decision-aids, and (2) better training programs for the human pilot (or human decision-maker, DM).

  4. A Study of Venezuela’s Internal and External Threat and the United States Security Assistance Program in the Build-Up and Modernization of Her Forces

    DTIC Science & Technology

    1984-09-01

    Records/documents in the division are quantitative in nature and are contained in computer printouts. Other documents include daily transactions docu...1498-1830). Venezuela was discovered by Christopher Columbus during his third voyage to the New World in 1498 (25:2). He was intrigued by pearl...To maintain control, trade with other countries was forbidden, which resulted in the colony's isolation for most of its early existence. News of the new

  5. Methods of mathematical modeling using polynomials of algebra of sets

    NASA Astrophysics Data System (ADS)

    Kazanskiy, Alexandr; Kochetkov, Ivan

    2018-03-01

    The article deals with the construction of discrete mathematical models for solving applied problems arising from the operation of building structures. Security issues in modern high-rise buildings are extremely serious and relevant, and there is no doubt that interest in them will only increase. The territory of the building is divided into zones that must be kept under observation. Zones can overlap and have different priorities. Such situations can be described using formulas of the algebra of sets. The formulas can be programmed, which makes it possible to work with them using computer models.
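
    A small Python illustration of the set-algebra modeling described above: observation zones as sets of grid cells, combined with union, intersection, and difference (the zone contents are invented).

        # Zones as sets of grid cells; the coordinates are invented.
        zone_lobby = {(0, 0), (0, 1), (1, 0), (1, 1)}
        zone_exit = {(1, 1), (1, 2), (2, 2)}
        zone_restricted = {(2, 2), (3, 3)}

        overlap = zone_lobby & zone_exit                    # watched by both zones
        covered = zone_lobby | zone_exit | zone_restricted  # all observed cells
        gap = zone_restricted - zone_exit                   # high priority, one zone only

        print(overlap, len(covered), gap)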

  6. An Object-Oriented Computer Aided Design Program for Modern Control Systems Analysis

    DTIC Science & Technology

    1992-12-01

    Appendix excerpt: worked numerical examples of the program's transfer-function matrix arithmetic (addition, subtraction, and multiplication), showing operand pairs such as LUmat[2,1] and BMat[1] together with their computed results.

  7. A Case Study in Flight Computer Software Redesign

    NASA Astrophysics Data System (ADS)

    Shimoni, R.; Ben-Zur, Y.

    2004-06-01

    Historically, many real-time systems were developed using technologies that are now obsolete, and there is a need to upgrade these systems. A good development process is essential to achieving a well-designed software product. We, at MLM, a subsidiary of Israel Aircraft Industries, faced a similar situation in the Flight Mission Computer (Main Airborne Computer, MAC) of the SHAVIT launcher. It was necessary to upgrade the computer hardware, and we decided to update the software as well. During the last two years, we have designed and implemented a new version of the MAC software, to be run on a new and more powerful target platform. We undertook to create a new version of the MAC program using modern software development techniques. The process included object-oriented design using a CASE tool suitable for embedded real-time systems. We have partially implemented the ROPES development process. In this article we present the difficulties and challenges we faced in the software development process.

  8. Digital phonocardiographic experiments and signal processing in multidisciplinary fields of university education

    NASA Astrophysics Data System (ADS)

    Nagy, Tamás; Vadai, Gergely; Gingl, Zoltán

    2017-09-01

    Modern measurement of physical signals is based on the use of sensors, electronic signal conditioning, analog-to-digital conversion and digital signal processing carried out by dedicated software. The same signal chain is used in many devices such as home appliances, automotive electronics, medical instruments, and smartphones. Teaching the theoretical, experimental, and signal processing background must be an essential part of improving the standard of higher education, and it fits well with the increasingly multidisciplinary nature of physics and engineering too. In this paper, we show how digital phonocardiography can be used in university education as a universal, highly scalable, exciting, and inspiring laboratory practice and as a demonstration at various levels and degrees of complexity. We have developed open-source software templates in modern programming languages to support immediate use and to serve as a basis for further modifications using personal computers, tablets, and smartphones.
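
    A minimal sketch of the digital signal chain described above: band-pass filtering a synthetic heart-sound-like signal and extracting its envelope with scipy. The sampling rate and the 25-400 Hz band are typical choices, not values from the paper.

        # Synthetic phonocardiogram-style processing; parameters assumed.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 4000                                   # assumed sampling rate, Hz
        t = np.arange(0, 2.0, 1 / fs)
        bursts = (np.sin(2 * np.pi * 1.2 * t) > 0.95).astype(float)
        signal = bursts * np.sin(2 * np.pi * 60 * t)          # S1/S2-like thumps
        signal += 0.05 * np.random.randn(t.size)              # sensor noise

        b, a = butter(4, [25, 400], btype="bandpass", fs=fs)  # typical band
        filtered = filtfilt(b, a, signal)

        envelope = np.abs(hilbert(filtered))        # amplitude envelope
        print("peak envelope:", float(envelope.max()))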

  9. ASSET Queries: A Set-Oriented and Column-Wise Approach to Modern OLAP

    NASA Astrophysics Data System (ADS)

    Chatziantoniou, Damianos; Sotiropoulos, Yannis

    Modern data analysis has given birth to numerous grouping constructs and programming paradigms, way beyond the traditional group by. Applications such as data warehousing, web log analysis, streams monitoring and social networks understanding necessitated the use of data cubes, grouping variables, windows and MapReduce. In this paper we review the associated set (ASSET) concept and discuss its applicability in both continuous and traditional data settings. Given a set of values B, an associated set over B is just a collection of annotated data multisets, one for each b ∈ B. The goal is to efficiently compute aggregates over these data sets. An ASSET query consists of repeated definitions of associated sets and aggregates of these, possibly correlated, resembling a spreadsheet document. We review systems implementing ASSET queries both in continuous and persistent contexts and argue for associated sets' analytical abilities and optimization opportunities.
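
    Read in plain Python, the associated-set idea amounts to keeping one annotated data set per value b ∈ B and aggregating over each; the column names below are invented for illustration.

        # One associated (multi)set per grouping value, then aggregates.
        from collections import defaultdict

        rows = [
            {"page": "/home", "bytes": 512},
            {"page": "/home", "bytes": 256},
            {"page": "/docs", "bytes": 1024},
        ]

        associated = defaultdict(list)     # b -> its annotated data set
        for row in rows:
            associated[row["page"]].append(row["bytes"])

        # Spreadsheet-style aggregate over each associated set.
        totals = {b: sum(values) for b, values in associated.items()}
        print(totals)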

  10. Research-oriented teaching in optical design course and its function in education

    NASA Astrophysics Data System (ADS)

    Cen, Zhaofeng; Li, Xiaotong; Liu, Xiangdong; Deng, Shitao

    2008-03-01

    The principles and operation plans of research-oriented teaching in the course of computer-aided optical design are presented, especially the mode of research in the practice course. The program includes a contract definition phase, project organization and execution, and post-project evaluation and discussion. Modes of academic organization are used in the practice course of computer-aided optical design. In this course the students complete their design projects in research teams through an autonomous group approach and cooperative exploration. In this research process they experience the interpersonal relationships of modern society, the importance of cooperation in a team, the functions of each individual, the relationships between team members, and the competition and cooperation within one academic group and with other groups, and they come to know themselves objectively. In the design practice, knowledge from many academic fields is applied, including applied optics, computer programming, and engineering software. This interdisciplinary character is very useful for academic research and prepares the students for innovation by integrating knowledge across fields. Practice has shown that this teaching mode plays an important part in developing abilities in engineering, cooperation, assimilating knowledge at a high level, and analyzing and solving problems.

  11. cudaMap: a GPU accelerated program for gene expression connectivity mapping.

    PubMed

    McArt, Darragh G; Bankhead, Peter; Dunne, Philip D; Salto-Tellez, Manuel; Hamilton, Peter; Zhang, Shu-Dong

    2013-10-11

    Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU assisted performance and CPU executions as the computational load increases for high accuracy evaluation of statistical significance. Emerging 'omics' technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap.

  12. School Buildings: Remodeling; Rehabilitation; Modernization; Repair. Bulletin, 1950, No. 17

    ERIC Educational Resources Information Center

    Yiles, Nelson E.

    1950-01-01

    Adequate school plants are essential to a modern educational program. The school plant that is not properly maintained soon fails to provide the service for which it was intended. The total program of maintenance, including repairs, renovation, remodeling, rehabilitation, and modernization should be carefully planned. Some tasks will recur at…

  13. CCP4i2: the new graphical user interface to the CCP4 program suite.

    PubMed

    Potterton, Liz; Agirre, Jon; Ballard, Charles; Cowtan, Kevin; Dodson, Eleanor; Evans, Phil R; Jenkins, Huw T; Keegan, Ronan; Krissinel, Eugene; Stevenson, Kyle; Lebedev, Andrey; McNicholas, Stuart J; Nicholls, Robert A; Noble, Martin; Pannu, Navraj S; Roth, Christian; Sheldrick, George; Skubak, Pavol; Turkenburg, Johan; Uski, Ville; von Delft, Frank; Waterman, David; Wilson, Keith; Winn, Martyn; Wojdyr, Marcin

    2018-02-01

    The CCP4 (Collaborative Computational Project, Number 4) software suite for macromolecular structure determination by X-ray crystallography groups together many programs and libraries that, by means of well established conventions, interoperate effectively without adhering to strict design guidelines. Because of this inherent flexibility, users are often presented with diverse, even divergent, choices for solving every type of problem. Recently, CCP4 introduced CCP4i2, a modern graphical interface designed to help structural biologists to navigate the process of structure determination, with an emphasis on pipelining and the streamlined presentation of results. In addition, CCP4i2 provides a framework for writing structure-solution scripts that can be built up incrementally to create increasingly automatic procedures.

  14. Openlobby: an open game server for lobby and matchmaking

    NASA Astrophysics Data System (ADS)

    Zamzami, E. M.; Tarigan, J. T.; Jaya, I.; Hardi, S. M.

    2018-03-01

    Online multiplayer is one of the most essential features in modern games. However, while a multiplayer feature can be implemented with simple network programming, creating a balanced multiplayer session requires additional player-management components such as a game lobby and a matchmaking system. Our objective is to develop OpenLobby, a server available to other developers to support their multiplayer applications. The proposed system acts as a lobby and matchmaker, where queueing players are matched to other players according to criteria defined by the developer. The solution provides an application programming interface that developers can use to interact with the server. For testing purposes, we developed a game that uses the server as its multiplayer server.
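
    A toy Python version of the matchmaking rule described above: queueing players are paired when their ratings fall within a developer-defined tolerance. The data structures and the criterion are illustrative, not OpenLobby's API.

        # Rating-window matchmaking sketch; the constants are invented.
        from collections import deque

        TOLERANCE = 100                    # max rating gap for a match
        queue = deque()                    # waiting players as (name, rating)

        def enqueue(name, rating):
            """Queue a player, or return a match against someone waiting."""
            for waiting in list(queue):
                if abs(waiting[1] - rating) <= TOLERANCE:
                    queue.remove(waiting)
                    return (waiting[0], name)     # a match: start a session
            queue.append((name, rating))
            return None

        print(enqueue("alice", 1500))      # None: queue was empty
        print(enqueue("bob", 1450))        # ('alice', 'bob'): within tolerance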

  15. Advances in analytical chemistry

    NASA Technical Reports Server (NTRS)

    Arendale, W. F.; Congo, Richard T.; Nielsen, Bruce J.

    1991-01-01

    Implementation of computer programs based on multivariate statistical algorithms makes it possible to obtain reliable information from long data vectors that contain large amounts of extraneous information, for example, noise and/or analytes that we do not wish to control. Three examples are described. Each of these applications requires the use of techniques characteristic of modern analytical chemistry. The first example, using a quantitative or analytical model, describes the determination of the acid dissociation constant for 2,2'-pyridyl thiophene using archived data. The second example describes an investigation to determine the active biocidal species of iodine in aqueous solutions. The third example is taken from a research program directed toward advanced fiber-optic chemical sensors. The second and third examples require heuristic or empirical models.

  16. Bonsai: an event-based framework for processing and controlling data streams

    PubMed Central

    Lopes, Gonçalo; Bonacchi, Niccolò; Frazão, João; Neto, Joana P.; Atallah, Bassam V.; Soares, Sofia; Moreira, Luís; Matias, Sara; Itskov, Pavel M.; Correia, Patrícia A.; Medina, Roberto E.; Calcaterra, Lorenza; Dreosti, Elena; Paton, Joseph J.; Kampff, Adam R.

    2015-01-01

    The design of modern scientific experiments requires the control and monitoring of many different data streams. However, the serial execution of programming instructions in a computer makes it a challenge to develop software that can deal with the asynchronous, parallel nature of scientific data. Here we present Bonsai, a modular, high-performance, open-source visual programming framework for the acquisition and online processing of data streams. We describe Bonsai's core principles and architecture and demonstrate how it allows for the rapid and flexible prototyping of integrated experimental designs in neuroscience. We specifically highlight some applications that require the combination of many different hardware and software components, including video tracking of behavior, electrophysiology and closed-loop control of stimulation. PMID:25904861

  17. Securing Secrets and Managing Trust in Modern Computing Applications

    ERIC Educational Resources Information Center

    Sayler, Andy

    2016-01-01

    The amount of digital data generated and stored by users increases every day. In order to protect this data, modern computing systems employ numerous cryptographic and access control solutions. Almost all of such solutions, however, require the keeping of certain secrets as the basis of their security models. How best to securely store and control…

  18. Implementation and Evaluation of Flipped Classroom as IoT Element into Learning Process of Computer Network Education

    ERIC Educational Resources Information Center

    Zhamanov, Azamat; Yoo, Seong-Moo; Sakhiyeva, Zhulduz; Zhaparov, Meirambek

    2018-01-01

    Students nowadays are hard to motivate with traditional teaching methods. Computers, smartphones, tablets and other smart devices disturb students' attention. Nevertheless, those smart devices can be used as auxiliary tools for modern teaching methods. In this article, the authors review two popular modern teaching methods:…

  19. Insufficient Governance Over Logistics Modernization Program System Development

    DTIC Science & Technology

    2010-11-02

    "Controls Over the Prevalidation of DOD Commercial Payments," March 2, 2007; Army USAAA Report No. A-2007-0205-FFM, "Logistics Modernization Program...0163-FFM, "FY 03–FY 05 Obligations Recorded in the Logistics Modernization Program," July 27, 2007; USAAA Report No. A-2007-0154-ALR, "Follow up...Audit of Aged Accounts–U.S. Army Communications-Electronics Life Cycle Management Command," July 2, 2007; USAAA Report No. A-2006-0234-FFM

  20. [Computer graphic display of retinal examination results. Software improving the quality of documenting fundus changes].

    PubMed

    Jürgens, Clemens; Grossjohann, Rico; Czepita, Damian; Tost, Frank

    2009-01-01

    Graphic documentation of retinal examination results in clinical ophthalmological practice is often done using pictures or in handwritten form. Popular software products used to describe changes in the fundus do not differ much from simple graphics programs that allow the user to insert, scale and edit basic graphic elements such as a circle, rectangle, arrow or text. Displaying the results of retinal examinations in a unified way is difficult to achieve. Therefore, we devised and implemented modern software tools for this purpose. A computer program was created that enables fundus sketches to be drawn quickly and intuitively and then digitally archived or printed. Especially for the needs of ophthalmological clinics, a set of standard digital symbols used to document the results of retinal examinations was developed and installed in a library of graphic symbols. These symbols are divided into the following categories: preoperative, postoperative, neovascularization, retinopathy of prematurity. The appropriate symbol can be selected with a click of the mouse and dragged and dropped onto the canvas of the fundus. Current forms of documenting the results of retinal examinations are unsatisfactory because they are time-consuming and imprecise; unequivocal interpretation is difficult or in some cases impossible. Using the developed computer program, a sketch of the fundus can be created much more quickly than by hand drawing. Additionally, the quality of the medical documentation will be enhanced by a system of well-described and standardized symbols. (1) Graphic symbols used to document the results of retinal examinations are a part of everyday clinical practice. (2) The designed computer program allows quick and intuitive graphical creation of fundus sketches that can be either digitally archived or printed.

  1. Oscillatory threshold logic.

    PubMed

    Borresen, Jon; Lynch, Stephen

    2012-01-01

    In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower power computers continues, transistors are themselves approaching their theoretical limit and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory.

  2. Using Modeling and Simulation to Complement Testing for Increased Understanding of Weapon Subassembly Response.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Michael K.; Davidson, Megan

    As part of Sandia's nuclear deterrence mission, the B61-12 Life Extension Program (LEP) aims to modernize the aging weapon system. Modernization requires requalification and Sandia is using high performance computing to perform advanced computational simulations to better understand, evaluate, and verify weapon system performance in conjunction with limited physical testing. The Nose Bomb Subassembly (NBSA) of the B61-12 is responsible for producing a fuzing signal upon ground impact. The fuzing signal is dependent upon electromechanical impact sensors producing valid electrical fuzing signals at impact. Computer generated models were used to assess the timing between the impact sensor's response to the deceleration of impact and damage to major components and system subassemblies. The modeling and simulation team worked alongside the physical test team to design a large-scale reverse ballistic test to not only assess system performance, but to also validate their computational models. The reverse ballistic test conducted at Sandia's sled test facility sent a rocket sled with a representative target into a stationary B61-12 (NBSA) to characterize the nose crush and functional response of NBSA components. Data obtained from data recorders and high-speed photometrics were integrated with previously generated computer models in order to refine and validate the model's ability to reliably simulate real-world effects. Large-scale tests are impractical to conduct for every single impact scenario. By creating reliable computer models, we can perform simulations that identify trends and produce estimates of outcomes over the entire range of required impact conditions. Sandia's HPCs enable geometric resolution that was unachievable before, allowing for more fidelity and detail, and creating simulations that can provide insight to support evaluation of requirements and performance margins. As computing resources continue to improve, researchers at Sandia are hoping to improve these simulations so they provide increasingly credible analysis of the system response and performance over the full range of conditions.

  3. Managing the technological edge: the UNESCO International Computation Centre and the limits to the transfer of computer technology, 1946-61.

    PubMed

    Nofre, David

    2014-07-01

    The spread of the modern computer is assumed to have been a smooth process of technology transfer. This view relies on an assessment of the open circulation of knowledge ensured by the US and British governments in the early post-war years. This article presents new historical evidence that questions this view. At the centre of the article lies the ill-fated establishment of the UNESCO International Computation Centre. The project was initially conceived in 1946 to provide advanced computation capabilities to scientists of all nations. It soon became a prize sought by Western European countries like the Netherlands and Italy seeking to speed up their own national research programs. Nonetheless, as the article explains, the US government's limitations on the research function of the future centre resulted in the withdrawal of European support for the project. These limitations illustrate the extent to which US foreign science policy could operate as (stealth) industrial policy to secure a competitive technological advantage and the prospects of US manufacturers in a future European market.

  4. A paperless course on structural engineering programming: investing in educational technology in the times of the Greek financial recession

    NASA Astrophysics Data System (ADS)

    Sextos, Anastasios G.

    2014-01-01

    This paper presents the structure of an undergraduate course entitled 'programming techniques and the use of specialised software in structural engineering' which is offered to the fifth (final) year students of the Civil Engineering Department of Aristotle University Thessaloniki in Greece. The aim of this course is to demonstrate the use of new information technologies in the field of structural engineering and to teach modern programming and finite element simulation techniques that the students can in turn apply in both research and everyday design of structures. The course also focuses on the physical interpretation of structural engineering problems, so that the students become familiar with the concept of computational tools without losing perspective on the engineering problem studied. For this purpose, a wide variety of structural engineering problems are studied in class, involving structural statics, dynamics, earthquake engineering, design of reinforced concrete and steel structures as well as data and information management. The main novelty of the course is that it is taught and examined solely in the computer laboratory, ensuring that each student can accomplish the prescribed 'hands-on' training on a dedicated computer, strictly at a 1:1 student-to-computer ratio. Significant effort has also been made to ensure that modern educational techniques and tools are utilised to offer the course in an essentially paperless mode. This involves electronic educational material, video tutorials, student information in real time and exams given and assessed electronically through an ad hoc developed, personalised, electronic system. The positive feedback received from the students reveals that the concept of a paperless course is not only applicable in real academic conditions but is also a promising approach that significantly increases student productivity and engagement. The question, however, is whether such an investment in educational technology is indeed timely during an economic recession, when academic priorities are rapidly changing. In the light of this unfavourable and unstable financial environment, a critical overview of the strengths, the weaknesses, the opportunities and the threats of this effort is presented herein, hopefully contributing to the discussion on the future of higher education in the time of crisis.

  5. Computational mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raboin, P J

    1998-01-01

    The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable in driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

  6. InSAR Scientific Computing Environment

    NASA Astrophysics Data System (ADS)

    Gurrola, E. M.; Rosen, P. A.; Sacco, G.; Zebker, H. A.; Simons, M.; Sandwell, D. T.

    2010-12-01

    The InSAR Scientific Computing Environment (ISCE) is a software development effort in its second year within the NASA Advanced Information Systems and Technology program. The ISCE will provide a new computing environment for geodetic image processing for InSAR sensors that will enable scientists to reduce measurements directly from radar satellites and aircraft to new geophysical products without first requiring them to develop detailed expertise in radar processing methods. The environment can serve as the core of a centralized processing center to bring Level-0 raw radar data up to Level-3 data products, but is adaptable to alternative processing approaches for science users interested in new and different ways to exploit mission data. The NRC Decadal Survey-recommended DESDynI mission will deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystem. The InSAR Scientific Computing Environment is planned to become a key element in processing DESDynI data into higher level data products, and it is expected to enable a new class of analyses that take greater advantage of the long time and large spatial scales of these new data than current approaches do. At the core of ISCE are both legacy processing software from the JPL/Caltech ROI_PAC repeat-pass interferometry package and a new InSAR processing package containing more efficient and more accurate processing algorithms being developed at Stanford for this project, based on experience gained in developing processors for missions such as SRTM and UAVSAR. Around the core InSAR processing programs we are building object-oriented wrappers to enable their incorporation into a more modern, flexible, extensible software package that is informed by modern programming methods, including rigorous componentization of processing codes, abstraction and generalization of data models, and a robust, intuitive user interface with graduated exposure to the levels of sophistication, allowing novices to apply it readily for common tasks and experienced users to mine data with great facility and flexibility. The environment is designed to easily allow user contributions, enabling an open source community to extend the framework into the indefinite future. In this paper we briefly describe both the legacy and the new core processing algorithms and their integration into the new computing environment. We describe the ISCE component and application architecture and the features that permit the desired flexibility, extensibility and ease-of-use. We summarize the state of progress of the environment and the plans for completion of the environment and for its future introduction into the radar processing community.

  7. A systemic approach for modeling biological evolution using Parallel DEVS.

    PubMed

    Heredia, Daniel; Sanz, Victorino; Urquia, Alfonso; Sandín, Máximo

    2015-08-01

    A new model for studying the evolution of living organisms is proposed in this manuscript. The proposed model is based on a non-neodarwinian systemic approach and focuses on several controversies and open discussions in modern evolutionary biology. Additionally, a simplification of the proposed model, named EvoDEVS, has been mathematically described using the Parallel DEVS formalism and implemented as a computer program using the DEVSLib Modelica library. EvoDEVS serves as an experimental platform to study different conditions and scenarios by means of computer simulations. Two preliminary case studies are presented to illustrate the behavior of the model and validate its results. EvoDEVS is freely available at http://www.euclides.dia.uned.es. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. A Format for Phylogenetic Placements

    PubMed Central

    Matsen, Frederick A.; Hoffman, Noah G.; Gallagher, Aaron; Stamatakis, Alexandros

    2012-01-01

    We have developed a unified format for phylogenetic placements, that is, mappings of environmental sequence data (e.g., short reads) into a phylogenetic tree. We are motivated to do so by the growing number of tools for computing and post-processing phylogenetic placements, and the lack of an established standard for storing them. The format is lightweight, versatile, extensible, and is based on the JSON format, which can be parsed by most modern programming languages. Our format is already implemented in several tools for computing and post-processing parsimony- and likelihood-based phylogenetic placements and has worked well in practice. We believe that establishing a standard format for analyzing read placements at this early stage will lead to a more efficient development of powerful and portable post-analysis tools for the growing applications of phylogenetic placement. PMID:22383988

  9. A format for phylogenetic placements.

    PubMed

    Matsen, Frederick A; Hoffman, Noah G; Gallagher, Aaron; Stamatakis, Alexandros

    2012-01-01

    We have developed a unified format for phylogenetic placements, that is, mappings of environmental sequence data (e.g., short reads) into a phylogenetic tree. We are motivated to do so by the growing number of tools for computing and post-processing phylogenetic placements, and the lack of an established standard for storing them. The format is lightweight, versatile, extensible, and is based on the JSON format, which can be parsed by most modern programming languages. Our format is already implemented in several tools for computing and post-processing parsimony- and likelihood-based phylogenetic placements and has worked well in practice. We believe that establishing a standard format for analyzing read placements at this early stage will lead to a more efficient development of powerful and portable post-analysis tools for the growing applications of phylogenetic placement.
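
    Because the format is JSON-based, a placement file can be consumed in a few lines in most languages. The Python sketch below is a hedged illustration: the key and field names follow our reading of the published "jplace" convention and should be checked against the specification.

        import json

        # A tiny inline example in the spirit of the format; keys such as
        # "fields", "placements", "p", and "n" are our reading of the
        # convention, and the values are invented for illustration.
        text = """{
          "version": 3,
          "tree": "((A:0.2{0},B:0.3{1}):0.1{2});",
          "fields": ["edge_num", "likelihood", "like_weight_ratio"],
          "placements": [{"p": [[0, -1234.5, 0.9], [1, -1236.7, 0.1]],
                          "n": ["read_42"]}],
          "metadata": {"invocation": "example"}
        }"""

        doc = json.loads(text)
        fields = doc["fields"]
        for placement in doc["placements"]:
            for row in placement["p"]:              # one row per candidate edge
                print(placement["n"], dict(zip(fields, row)))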

  10. A revision of the subtract-with-borrow random number generators

    NASA Astrophysics Data System (ADS)

    Sibidanov, Alexei

    2017-12-01

    The most popular and widely used subtract-with-borrow generator, also known as RANLUX, is reimplemented as a linear congruential generator using large-integer arithmetic with a modulus size of 576 bits. Modern computers, as well as the specific structure of the modulus inferred from RANLUX, allow for the development of a fast modular multiplication, the core of the procedure, which was previously believed to be slow and too costly in terms of computing resources. Our tests show a significant gain in generation speed, comparable with other fast, high-quality random number generators. An additional feature is fast skipping of generator states, leading to a seeding scheme that guarantees the uniqueness of random number sequences. Licensing provisions: GPLv3. Programming language: C++, C, Assembler.
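
    For context, the classic subtract-with-borrow recurrence behind RANLUX, and the large modulus of the equivalent congruential generator, can be written down in a few lines of Python. This is a sketch under the standard RANLUX parameters b = 2^24, r = 24, s = 10; the variable names and the toy seed are ours:

        B, R, S = 2 ** 24, 24, 10
        M = B ** R - B ** S + 1          # modulus of the equivalent LCG
        print(M.bit_length())            # -> 576, as stated in the abstract

        def swb_step(state, carry):
            """One step of x_n = x_{n-s} - x_{n-r} - c (mod b);
            `state` holds the last R outputs, oldest first."""
            t = state[-S] - state[-R] - carry
            carry = 1 if t < 0 else 0
            return state[1:] + [t % B], carry

        state, carry = list(range(1, R + 1)), 0   # any not-all-zero seed
        for _ in range(5):
            state, carry = swb_step(state, carry)
        print(state[-1])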

  11. Structural Weight Estimation for Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Cerro, Jeff; Martinovic, Zoran; Su, Philip; Eldred, Lloyd

    2002-01-01

    This paper describes some of the work in progress to develop automated structural weight estimation procedures within the Vehicle Analysis Branch (VAB) of the NASA Langley Research Center. One task of the VAB is to perform system studies at the conceptual and early preliminary design stages on launch vehicles and in-space transportation systems. Some examples of these studies for Earth-to-Orbit (ETO) systems are the Future Space Transportation System [1], Orbit On Demand Vehicle [2], Venture Star [3], and the Personnel Rescue Vehicle [4]. Structural weight calculation for launch vehicle studies can exist at several levels of fidelity. Typically, historically based weight equations are used in a vehicle sizing program. Many of the studies in the Vehicle Analysis Branch have been enhanced in terms of structural weight fraction prediction by utilizing some level of off-line structural analysis to incorporate material property, load intensity, and configuration effects that may not be captured by the historical weight equations. Modification of Mass Estimating Relationships (MERs) to assess design and technology impacts on vehicle performance is necessary to prioritize design and technology development decisions. Modern CAD/CAE software, ever-increasing computational power, and platform-independent programming languages such as Java provide new means to create analysis tools of greater depth that can be included in the conceptual design phase of launch vehicle development. Commercial framework computing environments provide easy-to-program techniques that coordinate and implement the flow of data in a distributed heterogeneous computing environment. It is the intent of this paper to present a process in development at NASA LaRC for enhanced structural weight estimation using this state-of-the-art computational power.

  12. The Impact of Modernization Programs on Academic Teachers' Work: A Mexican Case Study

    ERIC Educational Resources Information Center

    Zavala, Blanca Arciga

    2006-01-01

    For more than ten years, academics of public universities in Mexico have endured modernization programs that promote individual productivity and operate as a mechanism of selection and assessment. The implementation of the programs has exposed a tension between the values implicit in the programs and the values of the academic teachers. There is a…

  13. On the Emergence of Modern Humans

    ERIC Educational Resources Information Center

    Amati, Daniele; Shallice, Tim

    2007-01-01

    The emergence of modern humans with their extraordinary cognitive capacities is ascribed to a novel type of cognitive computational process (sustained non-routine multi-level operations) required for abstract projectuality, held to be the common denominator of the cognitive capacities specific to modern humans. A brain operation (latching) that…

  14. Fullrmc, a rigid body Reverse Monte Carlo modeling package enabled with machine learning and artificial intelligence.

    PubMed

    Aoun, Bachir

    2016-05-05

    A new Reverse Monte Carlo (RMC) package, "fullrmc", for atomic or rigid-body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide fully modular, fast, and flexible software that is thoroughly documented, handles complex molecules, is written in a modern programming language (Python, Cython, C, and C++ where performance is needed), and complies with modern programming practices. fullrmc's approach to solving an atomic or molecular structure differs from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort and to apply smart, more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a way, at almost no additional computational cost, to repeat a group's selection, allowing the system to escape local minima by refining a group's position or by exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group. © 2016 Wiley Periodicals, Inc.

  15. Fullrmc, a rigid body reverse monte carlo modeling package enabled with machine learning and artificial intelligence

    DOE PAGES

    Aoun, Bachir

    2016-01-22

    Here, a new Reverse Monte Carlo (RMC) package, "fullrmc", for atomic or rigid-body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide fully modular, fast, and flexible software that is thoroughly documented, handles complex molecules, is written in a modern programming language (Python, Cython, C, and C++ where performance is needed), and complies with modern programming practices. fullrmc's approach to solving an atomic or molecular structure differs from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modelling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort and to apply smart, more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a way, at almost no additional computational cost, to repeat a group's selection, allowing the system to escape local minima by refining a group's position or by exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group.
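
    The traditional move/accept loop that fullrmc generalizes can be summarized in a short Python sketch. This is a minimal illustration of the random single-atom adjustment described above, not fullrmc's API; the function names, the toy one-dimensional "pattern", and the acceptance temperature are all hypothetical:

        import math
        import random

        def chi_squared(model, experiment):
            return sum((m - e) ** 2 for m, e in zip(model, experiment))

        def rmc(positions, experiment, compute_pattern, steps=1000, sigma=1.0):
            cost = chi_squared(compute_pattern(positions), experiment)
            for _ in range(steps):
                i = random.randrange(len(positions))
                old = positions[i]
                positions[i] = old + random.uniform(-0.1, 0.1)  # random atom move
                new = chi_squared(compute_pattern(positions), experiment)
                if new <= cost or random.random() < math.exp(-(new - cost) / sigma):
                    cost = new                                  # accept the move
                else:
                    positions[i] = old                          # reject, restore
            return positions, cost

        # toy "experimental pattern": the sorted positions themselves
        target = [0.0, 1.0, 2.0]
        fitted, final_cost = rmc([0.5, 0.5, 0.5], target, sorted)
        print(final_cost)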

  16. Non-Formal Education in Ethiopia: The Modern Sector. Program of Studies in Non-Formal Education. Discussion Papers. No. 6.

    ERIC Educational Resources Information Center

    Niehoff, Richard O.; Wilder, Bernard

    Nonformal education programs operating in the modern sector in Ethiopia are described in a perspective relevant to the Ethiopian context. The modern sector is defined as those activities concerned with the manufacture of goods, extraction of raw materials, the processing of raw materials, the provision of services, and the creation and maintenance…

  17. Through Kazan ASPERA to Modern Projects

    NASA Astrophysics Data System (ADS)

    Gusev, Alexander; Kitiashvili, Irina; Petrova, Natasha

    The European Union is now forming the Sixth Framework Programme. One of the objectives of the EU Programme is to open up national research and training programmes. Russian PhD students and young astronomers face administrative and financial difficulties in gaining access to modern databases and astronomical projects, and so they have not been included in the European overview of priorities. Modern requirements for organizing observing projects on powerful telescopes assume painstaking scientific computer preparation of the application; stiff competition for observation time requires preliminary computer modeling of the target object if the application is to succeed. Kazan AstroGeoPhysics Partnership

  18. Oscillatory Threshold Logic

    PubMed Central

    Borresen, Jon; Lynch, Stephen

    2012-01-01

    In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower-power computers continues, transistors are themselves approaching their theoretical limit, and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory. PMID:23173034

  19. RECENT ADVANCES IN MACROMOLECULAR HYDRODYNAMIC MODELING

    PubMed Central

    Aragon, Sergio R.

    2010-01-01

    The modern implementation of the boundary element method (S.R. Aragon, J. Comput. Chem. 25 (2004) 1191–1205) has ushered in unprecedented accuracy and precision for the solution of the Stokes equations of hydrodynamics with stick boundary conditions. This article begins by reviewing computations with the program BEST of smooth surface objects such as ellipsoids, the dumbbell, and cylinders that demonstrate that the numerical solution of the integral equation formulation of hydrodynamics yields very high precision and accuracy. When BEST is used for macromolecular computations, the limiting factor becomes the definition of the molecular hydrodynamic surface and the implied effective solvation of the molecular surface. Studies on 49 different proteins, ranging in molecular weight from 9 to over 400 kDa, have shown that a model using a 1.1 Å thick hydration layer describes all protein transport properties very well for the overwhelming majority of them. In addition, these data imply that the crystal structure is an excellent representation of the average solution structure for most of them. In order to investigate the origin of a handful of significant discrepancies in some multimeric proteins (over −20% observed in the intrinsic viscosity), the technique of Molecular Dynamics (MD) simulation has been incorporated into the research program. A preliminary study of dimeric α-chymotrypsin using approximate implicit-water MD is presented. In addition, I describe the successful validation of modern protein force fields, ff03 and ff99SB, for the accurate computation of solution structure in explicit-water simulation by comparison of trajectory ensemble-average computed transport properties with experimental measurements. This work includes small proteins such as lysozyme, ribonuclease, and ubiquitin, using trajectories of around 10 ns duration. We have also studied a 150 kDa flexible monoclonal IgG antibody, trastuzumab, with multiple independent trajectories encompassing over 320 ns of simulation. The close agreement, within experimental error, of the computed and measured properties allows us to conclude that MD does produce structures typical of those in solution, and that flexible molecules can be properly described using the method of ensemble averaging over a trajectory. We review similar work on a transfer RNA molecule and DNA oligomers demonstrating that, to within 3%, a simple uniform hydration model 1.1 Å thick provides agreement with experiment for these nucleic acids. In the case of linear oligomers, the precision can be improved to close to 1% by a non-uniform hydration model that hydrates mainly in the DNA grooves, in agreement with high-resolution X-ray diffraction. We conclude with a vista on planned improvements for the BEST program to decrease its memory requirements and increase its speed without sacrificing accuracy. PMID:21073955

  1. Modern drug discovery technologies: opportunities and challenges in lead discovery.

    PubMed

    Guido, Rafael V C; Oliva, Glaucius; Andricopulo, Adriano D

    2011-12-01

    The identification of promising hits and the generation of high-quality leads are crucial steps in the early stages of drug discovery projects. The definition and assessment of both chemical and biological space have revitalized the screening process model and emphasized the importance of exploring the intrinsic complementary nature of classical and modern methods in drug research. In this context, the widespread use of combinatorial chemistry and sophisticated screening methods for the discovery of lead compounds has created a large demand for small organic molecules that act on specific drug targets. Modern drug discovery involves the employment of a wide variety of technologies and expertise in multidisciplinary research teams. The synergistic effects between experimental and computational approaches to the selection and optimization of bioactive compounds emphasize the importance of integrating advanced technologies in drug discovery programs. These technologies (VS, HTS, SBDD, LBDD, QSAR, and so on) are complementary in the sense that they have mutual goals, and thus the combination of empirical and in silico efforts is feasible at many different levels of lead optimization and new chemical entity (NCE) discovery. This paper provides a brief perspective on the evolution and use of key drug design technologies, highlighting opportunities and challenges.

  2. The emergence of mind and brain: an evolutionary, computational, and philosophical approach.

    PubMed

    Mainzer, Klaus

    2008-01-01

    Modern philosophy of mind cannot be understood without recent developments in computer science, artificial intelligence (AI), robotics, neuroscience, biology, linguistics, and psychology. Classical philosophy of formal languages as well as symbolic AI assume that all kinds of knowledge must explicitly be represented by formal or programming languages. This assumption is limited by recent insights into the biology of evolution and developmental psychology of the human organism. Most of our knowledge is implicit and unconscious. It is not formally represented, but embodied knowledge, which is learnt by doing and understood by bodily interacting with changing environments. That is true not only for low-level skills, but even for high-level domains of categorization, language, and abstract thinking. The embodied mind is considered an emergent capacity of the brain as a self-organizing complex system. Actually, self-organization has been a successful strategy of evolution to handle the increasing complexity of the world. Genetic programs are not sufficient and cannot prepare the organism for all kinds of complex situations in the future. Self-organization and emergence are fundamental concepts in the theory of complex dynamical systems. They are also applied in organic computing as a recent research field of computer science. Therefore, cognitive science, AI, and robotics try to model the embodied mind in an artificial evolution. The paper analyzes these approaches in the interdisciplinary framework of complex dynamical systems and discusses their philosophical impact.

  3. Evaluating Modern Defenses Against Control Flow Hijacking

    DTIC Science & Technology

    2015-09-01

    This record is an indexing excerpt from a Master of Science thesis by Ulziibayar Otgonbaatar, submitted to the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology in partial fulfillment of the requirements for the degree of Master of Science in Computer Science and Engineering. The excerpted passages evaluate the precision of control-flow graph (CFG) construction using DSA and note that unsound construction could introduce false negatives, opening up another possible set of attacks.

  4. Translations on Eastern Europe, Scientific Affairs, Number 542.

    DTIC Science & Technology

    1977-04-18

    transplanting human tissue has not as yet been given a final juridical approval like euthanasia, artificial insemination, abortion, birth control, and others... and data teleprocessing. This computer may also be used as a satellite computer for complex systems. The IZOT 310 has a large instruction... a well-known truth that modern science is using the most modern and leading technical facilities—from bathyscaphes to satellites, from gigantic

  5. Resiliency in Future Cyber Combat

    DTIC Science & Technology

    2016-04-04

    including the Internet, telecommunications networks, computer systems, and embedded processors and controllers." One important point emerging from the definition is that while the Internet is part of cyberspace, it is not all of cyberspace. Any computer processor capable of communicating with a... central processor on a modern car are all part of cyberspace, although only some of them are routinely connected to the Internet. Most modern

  6. How to get students to love (or not hate) MATLAB and programming

    NASA Astrophysics Data System (ADS)

    Reckinger, Shanon; Reckinger, Scott

    2014-11-01

    An effective programming course geared toward engineering students requires the utilization of modern teaching philosophies. A newly designed course that focuses on programming in MATLAB involves flipping the classroom and integrating various active teaching techniques. Vital aspects of the new course design include: lengthened in-class contact hours, Process-Oriented Guided Inquiry Learning (POGIL) worksheets (self-guided instruction), student-created video content posted on YouTube, clicker questions (used in class to practice reading and debugging code), programming exams that don't require computers, oral exams integrated into the classroom, an environment that fosters formal and informal peer learning, and a broader theme designed in to tie assignments together. Possibly the most important piece of this programming course puzzle, however, is that the instructor needs to be able to find programming mistakes very quickly and then lead individuals and groups through the steps to find their mistakes themselves. The effectiveness of the new course design is demonstrated through pre- and post-course concept exam results and student evaluation feedback. Students reported that the course was challenging and required a lot of effort, but left largely positive feedback.

  7. ESIF 2016: Modernizing Our Grid and Energy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Becelaere, Kimberly

    This 2016 annual report highlights work conducted at the Energy Systems Integration Facility (ESIF) in FY 2016, including grid modernization, high-performance computing and visualization, and INTEGRATE projects.

  8. Implications of the Turing machine model of computation for processor and programming language design

    NASA Astrophysics Data System (ADS)

    Hunter, Geoffrey

    2004-01-01

    A computational process is classified according to the theoretical model that is capable of executing it; computational processes that require a non-predeterminable amount of intermediate storage for their execution are Turing-machine (TM) processes, while those whose storage is predeterminable are Finite Automaton (FA) processes. Simple processes (such as a traffic light controller) are executable by a Finite Automaton, whereas the most general kind of computation requires a Turing Machine for its execution. This implies that a TM process must have a non-predeterminable amount of memory allocated to it at intermediate instants of its execution, i.e., dynamic memory allocation. Many processes encountered in practice are TM processes. The implication for computational practice is that the hardware (CPU) architecture and its operating system must facilitate dynamic memory allocation, and that the programming language used to specify TM processes must have statements with the semantic attribute of dynamic memory allocation, for in Alan Turing's thesis on computation (1936) the "standard description" of a process is invariant over the most general data that the process is designed to process; i.e., the program describing the process should never have to be modified to allow for differences in the data that is to be processed in different instantiations; i.e., data-invariant programming. Any non-trivial program is partitioned into sub-programs (procedures, subroutines, functions, modules, etc.). Examination of the calls/returns between the subprograms reveals that they are nodes in a tree structure; this tree structure is independent of the programming language used to encode (define) the process. Each sub-program typically needs some memory for its own use (to store values intermediate between its received data and its computed results); this locally required memory is not needed before the subprogram commences execution, and it is not needed after its execution terminates; it may be allocated as its execution commences and deallocated as its execution terminates, and if the amount of this local memory is not known until just before execution commences, then it is essential that it be allocated dynamically as the first action of the subprogram's execution. This dynamically allocated and deallocated storage of each subprogram's intermediate values conforms to the stack discipline, i.e., last allocated = first deallocated, an incidental benefit of which is automatic overlaying of variables. This stack-based dynamic memory allocation was a semantic implication of the nested block structure that originated in the ALGOL-60 programming language. ALGOL-60 was a TM language, because the amount of memory allocated on subprogram (block/procedure) entry (for arrays, etc.) was computable at execution time. A more general requirement of a Turing machine process is code generation at run time; this mandates access to the source-language processor (compiler/interpreter) during execution of the process. This fundamental aspect of computer science is important to the future of system design, because it has been overlooked throughout the 55 years since modern computing began in 1948. The popular computer systems of this first half-century of computing were constrained by compile-time (or even operating-system boot-time) memory allocation, and were thus limited to executing FA processes.
    The practical effect was that the distinction between the data-invariant program and its variable data was blurred; programmers had to make trial-and-error executions, modifying the program's compile-time constants (array dimensions) to iterate towards the values required at run time by the data being processed. This era of trial-and-error computing still persists; it pervades the culture of current (2003) computing practice.
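
    The stack-discipline allocation the author describes is exactly what modern block-structured languages perform implicitly. A trivial Python sketch of the contrast (illustrative only; the function names and data are ours): each activation allocates local storage sized by its data at run time, so the program text never has to be edited to match the data it processes:

        def smooth(samples):
            window = [0.0] * len(samples)     # local memory sized at run time,
            for i, x in enumerate(samples):   # allocated on entry to this call
                window[i] = x / 2.0
            return window                     # ...and reclaimed on return

        def process(dataset):
            # each call creates a fresh activation record with its own locals
            return [smooth(block) for block in dataset]

        # no "array dimension" constants to edit between runs of different size
        print(process([[1.0, 2.0], [3.0, 4.0, 5.0]]))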

  9. Computer-based teaching module design: principles derived from learning theories.

    PubMed

    Lau, K H Vincent

    2014-03-01

    The computer-based teaching module (CBTM), which has recently gained prominence in medical education, is a teaching format in which a multimedia program serves as a single source for knowledge acquisition rather than playing an adjunctive role as it does in computer-assisted learning (CAL). Despite empirical validation in the past decade, there is limited research into the optimisation of CBTM design. This review aims to summarise research in classic and modern multimedia-specific learning theories applied to computer learning, and to collapse the findings into a set of design principles to guide the development of CBTMs. Scopus was searched for: (i) studies of classic cognitivism, constructivism and behaviourism theories (search terms: 'cognitive theory' OR 'constructivism theory' OR 'behaviourism theory' AND 'e-learning' OR 'web-based learning') and their sub-theories applied to computer learning, and (ii) recent studies of modern learning theories applied to computer learning (search terms: 'learning theory' AND 'e-learning' OR 'web-based learning') for articles published between 1990 and 2012. The first search identified 29 studies, dominated in topic by the cognitive load, elaboration and scaffolding theories. The second search identified 139 studies, with diverse topics in connectivism, discovery and technical scaffolding. Based on their relative representation in the literature, the applications of these theories were collapsed into a list of CBTM design principles. Ten principles were identified and categorised into three levels of design: the global level (managing objectives, framing, minimising technical load); the rhetoric level (optimising modality, making modality explicit, scaffolding, elaboration, spaced repeating), and the detail level (managing text, managing devices). This review examined the literature in the application of learning theories to CAL to develop a set of principles that guide CBTM design. Further research will enable educators to take advantage of this unique teaching format as it gains increasing importance in medical education. © 2014 John Wiley & Sons Ltd.

  10. Numerical and experimental analyses of lighting columns in terms of passive safety

    NASA Astrophysics Data System (ADS)

    Jedliński, Tomasz Ireneusz; Buśkiewicz, Jacek

    2018-01-01

    Modern lighting columns have a very beneficial influence on road safety. Currently, columns are designed to keep the driver safe in the event of a collision. The following work compares experimental results of vehicle impact on a lighting column with FEM simulations performed using the Ansys LS-DYNA program. Given the high cost of experiments and the time-consuming research process, such software is a very useful tool in the development of pole structures, which must absorb the kinetic energy of the vehicle in a precisely prescribed way.

  11. Remote sensing program

    NASA Technical Reports Server (NTRS)

    Whitmore, R. A., Jr. (Principal Investigator)

    1980-01-01

    A syllabus and training materials prepared and used in a series of one-day workshops to introduce modern remote sensing technology to selected groups of professional personnel in Vermont are described. Success in using computer compatible tapes, LANDSAT imagery and aerial photographs is reported for the following applications: (1) mapping defoliation of hardwood forests by tent caterpillar and gypsy moth; (2) differentiating conifer species; (3) mapping ground cover of major lake and pond watersheds; (4) inventorying and locating artificially regenerated conifer forest stands; (5) mapping water quality; (6) ascertaining the boat population to quantify recreational activity on lakes and waterways; and (7) identifying potential aquaculture sites.

  12. Aerothermal modeling program. Phase 2, element B: Flow interaction experiment

    NASA Technical Reports Server (NTRS)

    Nikjooy, M.; Mongia, H. C.; Murthy, S. N. B.; Sullivan, J. P.

    1987-01-01

    NASA has instituted an extensive effort to improve the design process and data base for the hot section components of gas turbine engines. The purpose of element B is to establish a benchmark quality data set that consists of measurements of the interaction of circular jets with swirling flow. Such flows are typical of those that occur in the primary zone of modern annular combustion liners. Extensive computations of the swirling flows are to be compared with the measurements for the purpose of assessing the accuracy of current physical models used to predict such flows.

  13. On critical exponents without Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Sen, Kallol; Sinha, Aninda

    2016-11-01

    In order to achieve a better analytic handle on the modern conformal bootstrap program, we re-examine and extend Polyakov's pioneering 1974 work, which was based on consistency between the operator product expansion and unitarity. As in the bootstrap approach, this method does not depend on evaluating Feynman diagrams. We show how this approach can be used to compute the anomalous dimensions of certain operators in the O(n) model at the Wilson-Fisher fixed point in 4−ε dimensions up to O(ε²). AS dedicates this work to the loving memory of his mother.
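
    For orientation, the kind of quantity in question is the anomalous dimension of the fundamental field of the O(n) model at the Wilson-Fisher fixed point; the standard epsilon-expansion value (a textbook result quoted here for context, not a formula taken from this paper) is:

        \eta = \frac{n+2}{2(n+8)^{2}}\,\epsilon^{2} + O(\epsilon^{3})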

  14. Parallel Implementation of Numerical Solution of Few-Body Problem Using Feynman's Continual Integrals

    NASA Astrophysics Data System (ADS)

    Naumenko, Mikhail; Samarin, Viacheslav

    2018-02-01

    A modern parallel computing algorithm has been applied to the solution of the few-body problem. The approach is based on Feynman's continual integrals method, implemented in the C++ programming language using NVIDIA CUDA technology. A wide range of 3-body and 4-body bound systems has been considered, including nuclei described as consisting of protons and neutrons (e.g., 3,4He) and nuclei described as consisting of clusters and nucleons (e.g., 6He). The correctness of the results was checked by comparison with an exactly solvable 4-body oscillatory system and with experimental data.

  15. CCP4i2: the new graphical user interface to the CCP4 program suite

    PubMed Central

    Potterton, Liz; Ballard, Charles; Dodson, Eleanor; Evans, Phil R.; Keegan, Ronan; Krissinel, Eugene; Stevenson, Kyle; Lebedev, Andrey; McNicholas, Stuart J.; Noble, Martin; Pannu, Navraj S.; Roth, Christian; Sheldrick, George; Skubak, Pavol; Uski, Ville

    2018-01-01

    The CCP4 (Collaborative Computational Project, Number 4) software suite for macromolecular structure determination by X-ray crystallography groups together many programs and libraries that, by means of well-established conventions, interoperate effectively without adhering to strict design guidelines. Because of this inherent flexibility, users are often presented with diverse, even divergent, choices for solving every type of problem. Recently, CCP4 introduced CCP4i2, a modern graphical interface designed to help structural biologists to navigate the process of structure determination, with an emphasis on pipelining and the streamlined presentation of results. In addition, CCP4i2 provides a framework for writing structure-solution scripts that can be built up incrementally to create increasingly automatic procedures. PMID:29533233

  16. An Advanced simulation Code for Modeling Inductive Output Tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thuc Bui; R. Lawrence Ives

    2012-04-27

    During the Phase I program, CCR completed several major building blocks for a 3D large-signal inductive output tube (IOT) code using modern computer language and programming techniques. These included a 3D, Helmholtz, time-harmonic field solver with a fully functional graphical user interface (GUI), automeshing, and adaptivity. Other building blocks included an improved electrostatic Poisson solver with temporal boundary conditions to provide temporal fields for the time-stepping particle pusher, as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by time-changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and optimization methodologies were investigated.

  17. Teacher's Guide to SERAPHIM Software III. Modern Chemistry.

    ERIC Educational Resources Information Center

    Bogner, Donna J.

    Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the third in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Modern Chemistry." Program suggestions are arranged in the same…

  18. Teacher's Guide to SERAPHIM Software IV Chemistry: A Modern Course.

    ERIC Educational Resources Information Center

    Bogner, Donna J.

    Designed to assist chemistry teachers in selecting appropriate software programs, this publication is the fourth in a series of six teacher's guides from Project SERAPHIM, a program sponsored by the National Science Foundation. This guide is keyed to the chapters of the text "Chemistry: A Modern Course." Program suggestions are arranged…

  19. Teaching Strategies and Methods in Modern Environments for Learning of Programming

    ERIC Educational Resources Information Center

    Djenic, Slobodanka; Mitic, Jelena

    2017-01-01

    This paper presents teaching strategies and methods, applicable in modern blended environments for learning of programming. Given the fact that the manner of applying teaching strategies always depends on the specific requirements of a certain area of learning, the paper outlines the basic principles of teaching in programming courses, as well as…

  20. The Helicopter Antenna Radiation Prediction Code (HARP)

    NASA Technical Reports Server (NTRS)

    Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.

    1990-01-01

    The first nine months' effort in the development of a user-oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user-friendly interface, employing modern computer graphics, to aid the user in describing the helicopter geometry, selecting the method of computation, constructing the desired high- or low-frequency model, and displaying the results.

  1. Computing in Hydraulic Engineering Education

    NASA Astrophysics Data System (ADS)

    Duan, J. G.

    2011-12-01

    Civil engineers, pioneers of our civilization, are rarely perceived as leaders and innovators in modern society because of lags in technology innovation. This crisis has resulted in the decline of the prestige of the civil engineering profession, reduction of federal funding for deteriorating infrastructure, and problems with attracting the most talented high-school students. Infusing cutting-edge computer technology and stimulating creativity and innovation are therefore critical challenges for civil engineering education. To better prepare our graduates to innovate, this paper discusses the adoption of problem-based collaborative learning techniques and the integration of civil engineering computing into a traditional civil engineering curriculum. Three interconnected courses, Open Channel Flow, Computational Hydraulics, and Sedimentation Engineering, were developed with emphasis on computational simulations. In Open Channel Flow, the focus is on principles of free-surface flow and the application of computational models. This prepares students for the second course, Computational Hydraulics, which introduces the fundamental principles of computational hydraulics, including finite difference and finite element methods. This course complements Open Channel Flow by providing students with an in-depth understanding of computational methods. The third course, Sedimentation Engineering, covers the fundamentals of sediment transport and river engineering, so students can apply the knowledge and programming skills gained from previous courses to develop computational models for simulating sediment transport. These courses effectively equipped students with important skills and knowledge to complete thesis and dissertation research.

  2. Parallelization strategies for continuum-generalized method of moments on the multi-thread systems

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Handhika, T.; Ernastuti, Kerami, D.

    2017-07-01

    The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum set of moment conditions in a GMM framework. However, this computation takes a very long time because it requires optimizing the regularization parameter. Moreover, these calculations are usually processed sequentially, even though all modern computers are now supported by hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the calculation of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. There are two parallel regions in the original C-GMM algorithm that contribute significantly to the reduction of computational time: the outer loop and the inner loop. This parallel algorithm is then implemented with a standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
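
    The outer-loop strategy generalizes beyond OpenMP. As a language-neutral illustration (a sketch, not the authors' code: the objective below is a stand-in for one C-GMM fit at a candidate regularization parameter, and all names are ours), the same structure in Python distributes independent evaluations of the regularization grid across workers:

        from concurrent.futures import ProcessPoolExecutor

        def cgmm_objective(alpha):
            # stand-in for one C-GMM fit evaluated at regularization
            # parameter alpha; a real objective would solve the regularized
            # moment problem here
            return (alpha - 0.3) ** 2

        def best_alpha(grid):
            # parallelize the outer loop over candidate regularization
            # parameters, the strategy the experiment above found best
            with ProcessPoolExecutor() as pool:
                scores = list(pool.map(cgmm_objective, grid))
            return min(zip(scores, grid))[1]

        if __name__ == "__main__":
            print(best_alpha([i / 100 for i in range(1, 100)]))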

  3. Feasibility study for the implementation of NASTRAN on the ILLIAC 4 parallel processor

    NASA Technical Reports Server (NTRS)

    Field, E. I.

    1975-01-01

    The ILLIAC IV, a fourth-generation multiprocessor using parallel processing hardware concepts, is operational at Moffett Field, California. Its capability to excel at matrix manipulation makes the ILLIAC well suited to performing structural analyses using the finite element displacement method. The feasibility of modifying the NASTRAN (NASA structural analysis) computer program to make effective use of the ILLIAC IV was investigated. The characteristics are summarized of the ILLIAC and of the ARPANET, a telecommunications network which spans the continent, making the ILLIAC accessible from nearly all major industrial centers in the United States. Two distinct approaches are studied: retaining NASTRAN as it now operates on many of the host computers of the ARPANET to process the input and output while using the ILLIAC only for the major computational tasks, and installing NASTRAN to operate entirely in the ILLIAC environment. Though both alternatives offer similar and significant increases in computational speed over modern third-generation processors, the full installation of NASTRAN on the ILLIAC is recommended. Specifications are presented for performing that task, with manpower estimates and schedules to correspond.

  4. LARCRIM user's guide, version 1.0

    NASA Technical Reports Server (NTRS)

    Davis, John S.; Heaphy, William J.

    1993-01-01

    LARCRIM is a relational database management system (RDBMS) which performs the conventional duties of an RDBMS with the added feature that it can store attributes which consist of arrays or matrices. This makes it particularly valuable for scientific data management. It is accessible as a stand-alone system and through an application program interface. The stand-alone system may be executed in two modes: menu or command. The menu mode prompts the user for the input required to create, update, and/or query the database. The command mode requires the direct input of LARCRIM commands. Although LARCRIM is an update of an old database family, its performance on modern computers is quite satisfactory. LARCRIM is written in FORTRAN 77 and runs under the UNIX operating system. Versions have been released for the following computers: SUN (3 & 4), Convex, IRIS, Hewlett-Packard, CRAY 2 & Y-MP.

  5. UC Merced Center for Computational Biology Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colvin, Michael; Watanabe, Masakatsu

    Final report for the UC Merced Center for Computational Biology. The Center for Computational Biology (CCB) was established to support multidisciplinary scientific research and academic programs in computational biology at the new University of California campus in Merced. In 2003, the growing gap between biology research and education was documented in a report from the National Academy of Sciences, Bio2010: Transforming Undergraduate Education for Future Research Biologists. We believed that a new type of undergraduate and graduate program in the biological sciences, one that emphasized biological concepts and treated biology as an information science, would have a dramatic impact in enabling the transformation of biology. UC Merced, the newest UC campus and the first new U.S. research university of the 21st century, was ideally suited to adopt an alternate strategy: to create new Biological Sciences majors and a graduate group that incorporated the strong computational and mathematical vision articulated in the Bio2010 report. CCB aimed to leverage this strong commitment at UC Merced to develop a new educational program based on the principle of biology as a quantitative, model-driven science. We also expected that the center would enable the dissemination of computational biology course materials to other universities and feeder institutions, and foster research projects that exemplify a mathematical and computation-based approach to the life sciences. As this report describes, the CCB has been successful in achieving these goals, and multidisciplinary computational biology is now an integral part of UC Merced undergraduate, graduate, and research programs in the life sciences. The CCB began in fall 2004 with the aid of an award from the U.S. Department of Energy (DOE), under its Genomes to Life program of support for the development of research and educational infrastructure in the modern biological sciences. This report to DOE describes the research and academic programs made possible by the CCB from its inception until August 2010, the end of the final extension. Although DOE support for the center ended in August 2010, the CCB will continue to exist and support its original objectives. The research and academic programs fostered by the CCB have led to additional extramural funding from other agencies, and we anticipate that CCB will continue to provide support for the quantitative and computational biology program at UC Merced for many years to come. Since its inception in fall 2004, CCB research projects have involved continuous multi-institutional collaboration with Lawrence Livermore National Laboratory (LLNL) and the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, as well as individual collaborators at other sites. CCB-affiliated faculty cover a broad range of computational and mathematical research, including molecular modeling, cell biology, applied mathematics, evolutionary biology, bioinformatics, etc. The CCB sponsored the first distinguished speaker series at UC Merced, which had an important role in spreading the word about the computational biology emphasis at this new campus. One of CCB's original goals was to help train a new generation of biologists who bridge the gap between the computational and life sciences. To achieve this goal, by summer 2006 a new summer undergraduate internship program had been established under CCB to train researchers in highly mathematical and computationally intensive biological science.
    By the end of summer 2010, 44 undergraduate students had gone through this program. Of those participants, 11 students have been admitted to graduate schools and 10 more are interested in pursuing graduate studies in the sciences. The center is also continuing to facilitate the development and dissemination of undergraduate and graduate course materials based on the latest research in computational biology.

  6. chemf: A purely functional chemistry toolkit.

    PubMed

    Höck, Stefan; Riedl, Rainer

    2012-12-20

    Although programming in a type-safe and referentially transparent style offers several advantages over working with mutable data structures and side effects, this style of programming has not seen much use in chemistry-related software. Since functional programming languages were designed with referential transparency in mind, these languages offer a lot of support for writing immutable data structures and side-effect-free code. We therefore started implementing our own toolkit, based on the above programming paradigms, in a modern, versatile programming language. We present our initial results with functional programming in chemistry by first describing an immutable data structure for molecular graphs together with a couple of simple algorithms to calculate basic molecular properties, before writing a complete SMILES parser in accordance with the OpenSMILES specification. Along the way we show how to deal with input validation, error handling, bulk operations, and parallelization in a purely functional way. At the end we also analyze and improve our algorithms and data structures in terms of performance and compare them to existing toolkits, both object-oriented and purely functional. All code was written in Scala, a modern multi-paradigm programming language with strong support for functional programming and a highly sophisticated type system. We have successfully made the first important steps towards a purely functional chemistry toolkit. The data structures and algorithms presented in this article perform well, while at the same time they can be safely used in parallelized applications, such as computer-aided drug design experiments, without further adjustment. This stands in contrast to existing object-oriented toolkits, where thread safety of data structures and algorithms is a deliberate design decision that can be hard to implement. Finally, the level of type safety achieved by Scala greatly increased the reliability of our code as well as the productivity of the programmers involved in this project.

  7. chemf: A purely functional chemistry toolkit

    PubMed Central

    2012-01-01

    Background Although programming in a type-safe and referentially transparent style offers several advantages over working with mutable data structures and side effects, this style of programming has not seen much use in chemistry-related software. Since functional programming languages were designed with referential transparency in mind, these languages offer a lot of support for writing immutable data structures and side-effect-free code. We therefore started implementing our own toolkit, based on the above programming paradigms, in a modern, versatile programming language. Results We present our initial results with functional programming in chemistry by first describing an immutable data structure for molecular graphs together with a couple of simple algorithms to calculate basic molecular properties, before writing a complete SMILES parser in accordance with the OpenSMILES specification. Along the way we show how to deal with input validation, error handling, bulk operations, and parallelization in a purely functional way. At the end we also analyze and improve our algorithms and data structures in terms of performance and compare them to existing toolkits, both object-oriented and purely functional. All code was written in Scala, a modern multi-paradigm programming language with strong support for functional programming and a highly sophisticated type system. Conclusions We have successfully made the first important steps towards a purely functional chemistry toolkit. The data structures and algorithms presented in this article perform well, while at the same time they can be safely used in parallelized applications, such as computer-aided drug design experiments, without further adjustment. This stands in contrast to existing object-oriented toolkits, where thread safety of data structures and algorithms is a deliberate design decision that can be hard to implement. Finally, the level of type safety achieved by Scala greatly increased the reliability of our code as well as the productivity of the programmers involved in this project. PMID:23253942
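
    chemf itself is written in Scala, but the core idea, immutable values whose "edits" return new structures, can be sketched in any language. A minimal Python analogue (illustrative only, not the chemf API; the types and bond encoding are invented for this sketch):

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Atom:
            symbol: str
            charge: int = 0

        @dataclass(frozen=True)
        class Molecule:
            atoms: tuple          # tuple of Atom values
            bonds: frozenset      # frozenset of (i, j, order) triples

            def add_bond(self, i, j, order=1):
                # "mutation" returns a new value; the original is untouched,
                # which is what makes concurrent use safe
                return Molecule(self.atoms, self.bonds | {(i, j, order)})

        water = Molecule((Atom("O"), Atom("H"), Atom("H")), frozenset())
        bonded = water.add_bond(0, 1).add_bond(0, 2)
        print(len(water.bonds), len(bonded.bonds))   # -> 0 2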

  8. SIMD Optimization of Linear Expressions for Programmable Graphics Hardware

    PubMed Central

    Bajaj, Chandrajit; Ihm, Insung; Min, Jungki; Oh, Jinsang

    2009-01-01

    The increased programmability of graphics hardware allows efficient graphics processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. PMID:19946569
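
    The four-wide packing idea can be mimicked outside shader code. The NumPy sketch below (our illustration, not the paper's shader-generation technique) processes rows of y = Ax + b four at a time, so each loop iteration stands in for one 4-wide SIMD instruction:

        import numpy as np

        def linear_simd(A, x, b):
            # process four rows per iteration, mimicking one 4-wide register
            y = np.empty_like(b)
            for r in range(0, len(b), 4):
                y[r:r + 4] = A[r:r + 4] @ x + b[r:r + 4]
            return y

        A = np.arange(32.0).reshape(8, 4)   # 8 equations, 4 unknowns
        x = np.ones(4)
        b = np.zeros(8)
        print(np.allclose(linear_simd(A, x, b), A @ x + b))   # -> True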

  9. Computational Methods to Work as First-Pass Filter in Deleterious SNP Analysis of Alkaptonuria

    PubMed Central

    Magesh, R.; George Priya Doss, C.

    2012-01-01

    A major challenge in the analysis of human genetic variation is to distinguish functional from nonfunctional SNPs. Discovering these functional SNPs is one of the main goals of modern genetics and genomics studies. There is a need to identify, effectively and efficiently, the functionally important nsSNPs that may be deleterious or disease-causing and to characterize their molecular effects. Predicting the phenotype of nsSNPs by computational analysis may provide a good way to explore the function of nsSNPs and their relationship to susceptibility to disease. In this context, we surveyed and compared variation databases along with in silico prediction programs to assess the effects of deleterious functional variants on protein functions. We also applied these methods as a first-pass filter to identify the deleterious substitutions worth pursuing in further experimental research. In this analysis, we used existing computational methods to explore the mutation-structure-function relationship in the HGD gene, whose mutations cause alkaptonuria. PMID:22606059

  10. Exploiting current-generation graphics hardware for synthetic-scene generation

    NASA Astrophysics Data System (ADS)

    Tanner, Michael A.; Keen, Wayne A.

    2010-04-01

    Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (240 shader cores on current NVIDIA GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. Taking advantage of this potential requires algorithm implementations structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented. Also included are various language tradeoffs among conventional shader programming, the Compute Unified Device Architecture (CUDA), and the Open Computing Language (OpenCL), including performance trades and possible pathways for future tool development.

  11. High performance in silico virtual drug screening on many-core processors.

    PubMed

    McIntosh-Smith, Simon; Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-05-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel's Xeon Phi and multi-core CPUs with SIMD instruction sets.

  12. High performance in silico virtual drug screening on many-core processors

    PubMed Central

    Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-01-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel’s Xeon Phi and multi-core CPUs with SIMD instruction sets. PMID:25972727

  13. Artificial Intelligence in Medical Practice: The Question to the Answer?

    PubMed

    Miller, D Douglas; Brown, Eric W

    2018-02-01

    Computer science advances and ultra-fast computing speeds find artificial intelligence (AI) broadly benefitting modern society: forecasting weather, recognizing faces, detecting fraud, and deciphering genomics. AI's future role in medical practice remains an unanswered question. Machines (computers) learn to detect patterns not decipherable using biostatistics by processing massive datasets (big data) through layered mathematical models (algorithms). Correcting algorithm mistakes (training) adds to AI predictive model confidence. AI is being successfully applied for image analysis in radiology, pathology, and dermatology, with diagnostic speed exceeding, and accuracy paralleling, medical experts. While diagnostic confidence never reaches 100%, combining machines plus physicians reliably enhances system performance. Cognitive programs are impacting medical practice by applying natural language processing to read the rapidly expanding scientific literature and collate years of diverse electronic medical records. In this and other ways, AI may optimize the care trajectory of chronic disease patients, suggest precision therapies for complex illnesses, reduce medical errors, and improve subject enrollment into clinical trials. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. The Computational Ecologist’s Toolbox

    EPA Science Inventory

    Computational ecology, nestled in the broader field of data science, is an interdisciplinary field that attempts to improve our understanding of complex ecological systems through the use of modern computational methods. Computational ecology is based on a union of competence in...

  15. Accelerating EPI distortion correction by utilizing a modern GPU-based parallel computation.

    PubMed

    Yang, Yao-Hao; Huang, Teng-Yi; Wang, Fu-Nien; Chuang, Tzu-Chao; Chen, Nan-Kuei

    2013-04-01

    The combination of phase demodulation and field mapping is a practical method to correct echo planar imaging (EPI) geometric distortion. However, since phase dispersion accumulates in each phase-encoding step, the calculation complexity of phase demodulation is Ny-fold higher than that of conventional image reconstruction. Thus, correcting EPI images via phase demodulation is generally a time-consuming task. Parallel computing employing general-purpose calculations on graphics processing units (GPUs) can accelerate scientific computing if the algorithm is parallelized. This study proposes a method that incorporates the GPU-based technique into phase demodulation calculations to reduce computation time. The proposed parallel algorithm was applied to a PROPELLER-EPI diffusion tensor data set. The GPU-based phase demodulation method correctly reduced the EPI distortion and accelerated the computation. The total reconstruction time for the 16-slice PROPELLER-EPI diffusion tensor images with a matrix size of 128 × 128 was reduced from 1,754 seconds to 101 seconds by utilizing the parallelized 4-GPU program. GPU computing is a promising method to accelerate EPI geometric correction. The resulting reduction in the computation time of phase demodulation should accelerate postprocessing for studies performed with EPI and should make the PROPELLER-EPI technique practical for clinical use. Copyright © 2011 by the American Society of Neuroimaging.

  16. A LTA flight research vehicle. [technology assessment, airships

    NASA Technical Reports Server (NTRS)

    Nebiker, F. R.

    1975-01-01

    An Airship Flight Research Program is proposed. Major program objectives are summarized, and a modernized Navy ZPG3W airship is recommended as the flight test vehicle. The origin of the current interest in modern airship vehicles is briefly discussed, and the major benefits resulting from the flight research program are described. Airship configurations and specifications are included.

  17. Increasing family planning in Myanmar: the role of the private sector and social franchise programs.

    PubMed

    Aung, Tin; Hom, Nang Mo; Sudhinaraset, May

    2017-07-01

    This study examines the influence of a clinical social franchise program on modern contraceptive use. This was a cross-sectional survey of contraceptive use among 2390 currently married women across 25 townships in Myanmar in 2014. Social franchise program measures were drawn from programmatic records. Multivariable models show that women who lived in communities with at least 1-5 years of a clinical social franchise intrauterine device (IUD) program had 4.770 times the odds of using a modern contraceptive method compared to women living in communities with no IUD program [CI: 3.739-6.084]. In townships where the reproductive health program had existed for at least 10 years, women had 1.428 times the odds of reporting modern method use compared to women living in townships where the programs had existed for less than 10 years [CI: 1.016-2.008]. This study found consistent and robust evidence for an increase in the use of modern family planning methods with both the duration and the intensity of social franchise programs.

  18. Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language

    NASA Astrophysics Data System (ADS)

    Heaphy, R. T.; Burke, M. P.; Love, J. T.

    2015-12-01

    Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems have limitations in their ability to integrate with modern software and hardware and to leverage parallel computing, which have left voids in optimization, pre-, and post-processing tools. Advances in technology and in our scientific understanding of environmental processes over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premier tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert the code to a modern, widely accepted, open-source, high-performance computing (HPC) language; and (2) convert the model input and output files to a modern, widely accepted, open-source data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and has shown good agreement and similar execution times when using the Numba compiler. Continued verification of the accuracy of the converted code against more complex legacy applications, and improvement of execution times by incorporating an intelligent network change detection tool, is currently underway; preliminary results will be presented.
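
    As a hedged illustration of the conversion strategy described above, the sketch below shows how a legacy FORTRAN-style time-stepping loop might look in Python accelerated with Numba's just-in-time compiler. The routine simple_storage_routing is hypothetical (a toy linear reservoir, not actual HSPF code) and assumes NumPy and Numba are installed.

```python
import numpy as np
from numba import njit

@njit  # compiled to machine code on first call
def simple_storage_routing(inflow, k, dt):
    """Hypothetical linear-reservoir loop in the style of a legacy
    FORTRAN time-stepping routine (dS/dt = inflow - k*S). The
    sequential dependence between time steps is what makes plain
    NumPy vectorization inapplicable and Numba attractive."""
    storage = np.zeros(inflow.size)
    outflow = np.zeros(inflow.size)
    s = 0.0
    for t in range(inflow.size):
        s += (inflow[t] - k * s) * dt
        storage[t] = s
        outflow[t] = k * s
    return storage, outflow

inflow = np.random.rand(100_000)
storage, outflow = simple_storage_routing(inflow, k=0.1, dt=1.0)
```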

  19. Modern Art as Public Care: Alzheimer's and the Aesthetics of Universal Personhood.

    PubMed

    Selberg, Scott

    2015-12-01

    This article is based on ethnographic research of the New York Museum of Modern Art's influential Alzheimer's access program, Meet Me at MoMA. The program belongs to an increasingly popular model of psychosocial treatment that promotes art as potentially therapeutic or beneficial to people experiencing symptoms of dementia as well as to their caregivers. Participant observation of the sessions and a series of interviews with museum staff and educators reveal broader assumptions about the relationship between modern art, dementia, and personhood. These assumptions indicate a museological investment in the capacity and perceived interiority of all participants. Ultimately, the program authorizes a narrative of universal personhood that harmonizes with the museum's longstanding focus on temporal and aesthetic modernism. © 2015 by the American Anthropological Association.

  20. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture (CUDA)-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the simulated diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, with results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup in processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision floating-point arithmetic provides higher accuracy.

  1. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics.

    PubMed

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture (CUDA)-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the simulated diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, with results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup in processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision floating-point arithmetic provides higher accuracy.
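
    The core of any such MC photon-migration code is a random-walk loop over photon packets. The toy sketch below (hypothetical, single-threaded, with isotropic scattering and implicit-capture weighting) estimates diffuse reflectance from a semi-infinite medium; the authors' unified, GPU-accelerated model is far more detailed.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_reflectance(n_photons, mu_a=0.1, mu_s=10.0):
    """Toy Monte Carlo photon migration in a semi-infinite medium
    with isotropic scattering and implicit-capture weighting; a
    sketch of the general technique, not the paper's full model."""
    albedo = mu_s / (mu_a + mu_s)
    reflected = 0.0
    for _ in range(n_photons):
        z, w, uz = 0.0, 1.0, 1.0          # launch straight down
        while w > 1e-3:
            step = -np.log(rng.random()) / (mu_a + mu_s)
            z += uz * step
            if z < 0.0:                   # escaped through the surface
                reflected += w
                break
            w *= albedo                   # deposit part of the weight
            uz = 2.0 * rng.random() - 1.0 # isotropic new direction cosine
    return reflected / n_photons

print(f"diffuse reflectance ~ {mc_reflectance(20_000):.3f}")
```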

  2. Computational Software to Fit Seismic Data Using Epidemic-Type Aftershock Sequence Models and Modeling Performance Comparisons

    NASA Astrophysics Data System (ADS)

    Chu, A.

    2016-12-01

    Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work implements three of the homogeneous ETAS models described in Ogata (1998). With a model's log-likelihood function, my software finds the Maximum-Likelihood Estimates (MLEs) of the model's parameters to estimate the homogeneous background rate and the temporal and spatial parameters that govern triggering effects. The EM algorithm is employed for its advantages of stability and robustness (Veen and Schoenberg, 2008). My work also presents comparisons among the three models in robustness, convergence speed, and implementation from theory to computing practice. Up-to-date regional seismic data from seismically active areas such as Southern California and Japan are used to demonstrate the comparisons. Data analysis has been done using the computer languages Java and R. Java has the advantages of strong typing and easier control of memory resources, while R has the advantage of numerous available functions for statistical computing. Comparisons are also made between the two programming languages in convergence and stability, computational speed, and ease of implementation. Issues that may affect convergence, such as spatial shapes, are discussed.
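
    The branching-structure EM scheme referenced above (Veen and Schoenberg, 2008) can be sketched compactly. The toy Python/NumPy code below applies it to a simplified temporal Hawkes process with an exponential kernel rather than Ogata's full space-time ETAS model with an Omori kernel; all names and the synthetic event times are illustrative only.

```python
import numpy as np

def em_hawkes(t, T, mu=0.5, alpha=0.5, beta=1.0, iters=50):
    """EM estimation for a temporal Hawkes process with intensity
    lambda(s) = mu + sum_{t_j < s} alpha * beta * exp(-beta (s - t_j)),
    a simplified exponential-kernel stand-in for ETAS. The E-step
    computes branching probabilities; the M-step has closed forms."""
    n = len(t)
    dt = t[:, None] - t[None, :]                      # t_i - t_j
    for _ in range(iters):
        trig = np.where(dt > 0, alpha * beta * np.exp(-beta * dt), 0.0)
        lam = mu + trig.sum(axis=1)                   # intensity at each event
        p_bg = mu / lam                               # P(event i is background)
        p_trig = trig / lam[:, None]                  # P(i triggered by j)
        mu = p_bg.sum() / T                           # background rate
        alpha = p_trig.sum() / n                      # branching ratio
        beta = p_trig.sum() / (p_trig * dt.clip(min=0)).sum()
    return mu, alpha, beta

t = np.sort(np.random.default_rng(0).uniform(0.0, 100.0, 200))  # synthetic times
print(em_hawkes(t, T=100.0))
```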

  3. The application of virtual reality systems as a support of digital manufacturing and logistics

    NASA Astrophysics Data System (ADS)

    Golda, G.; Kampa, A.; Paprocka, I.

    2016-08-01

    Modern trends in the development of computer-aided techniques are heading toward the integration of designing competitive products with so-called "digital manufacturing and logistics," supported by computer simulation software. All phases of the product lifecycle, starting from the design of a new product, through planning and control of manufacturing, assembly, internal logistics and repairs, quality control, distribution to customers, and after-sale service, up to its recycling or utilization, should be aided and managed by advanced product lifecycle management software packages. This paper describes important problems in providing an efficient flow of materials in supply chain management across the whole product lifecycle, using computer simulation. The authors pay attention to the processes of acquiring relevant information and correct data, necessary for virtual modeling and computer simulation of integrated manufacturing and logistics systems. The article describes possible uses of virtual reality software applications for modeling and simulating production and logistics processes in the enterprise across different aspects of product lifecycle management. The authors demonstrate an effective method of creating computer simulations for digital manufacturing and logistics, show modeled and programmed examples and solutions, and pay attention to development trends and to options of the applications that go beyond the enterprise.

  4. Optimal Jet Finder (v1.0 C++)

    NASA Astrophysics Data System (ADS)

    Chumakov, S.; Jankowski, E.; Tkachov, F. V.

    2006-10-01

    We describe a C++ implementation of the Optimal Jet Definition for identification of jets in hadronic final states of particle collisions. We explain the interface subroutines and provide a usage example. The source code is available from http://www.inr.ac.ru/~ftkachov/projects/jets/. Program summary: Title of program: Optimal Jet Finder (v1.0 C++). Catalogue identifier: ADSB_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSB_v2_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: any computer with a standard C++ compiler. Tested with: GNU gcc 3.4.2, Linux Fedora Core 3, Intel i686; Forte Developer 7 C++ 5.4, SunOS 5.9, UltraSPARC III+; Microsoft Visual C++ Toolkit 2003 (compiler 13.10.3077, linker 7.10.30777, option /EHsc), Windows XP, Intel i686. Programming language used: C++. Memory required: ~1 MB (or more, depending on the settings). No. of lines in distributed program, including test data, etc.: 3047. No. of bytes in distributed program, including test data, etc.: 17 884. Distribution format: tar.gz. Nature of physical problem: analysis of hadronic final states in high-energy particle collision experiments often involves identification of hadronic jets. A large number of hadrons detected in the calorimeter is reduced to a few jets by means of a jet finding algorithm. The jets are used in further analysis which would be difficult or impossible when applied directly to the hadrons. Grigoriev et al. [D.Yu. Grigoriev, E. Jankowski, F.V. Tkachov, Phys. Rev. Lett. 91 (2003) 061801] provide a brief introduction to the subject of jet finding algorithms, and a general review of the physics of jets can be found in [R. Barlow, Rep. Prog. Phys. 36 (1993) 1067]. Method of solution: the software we provide is an implementation of the so-called Optimal Jet Definition (OJD). The theory of OJD was developed in [F.V. Tkachov, Phys. Rev. Lett. 73 (1994) 2405; Erratum, Phys. Rev. Lett. 74 (1995) 2618; F.V. Tkachov, Int. J. Modern Phys. A 12 (1997) 5411; F.V. Tkachov, Int. J. Modern Phys. A 17 (2002) 2783]. The desired jet configuration is obtained as the one that minimizes Ω, a certain function of the input particles and jet configuration. A FORTRAN 77 implementation of OJD is described in [D.Yu. Grigoriev, E. Jankowski, F.V. Tkachov, Comput. Phys. Comm. 155 (2003) 42]. Restrictions on the complexity of the program: memory required by the program is proportional to the number of particles in the input × the number of jets in the output; for example, for 650 particles and 20 jets, ~300 KB of memory is required. Typical running time: the running time (in the running mode with a fixed number of jets) is proportional to the number of particles in the input × the number of jets in the output × the number of different random initial configurations tried (ntries). For example, for 65 particles in the input and 4 jets in the output, the running time is ~4·10 s per try (Pentium 4, 2.8 GHz).

  5. How Computer Graphics Work.

    ERIC Educational Resources Information Center

    Prosise, Jeff

    This document presents the principles behind modern computer graphics without straying into the arcane languages of mathematics and computer science. Illustrations accompany the clear, step-by-step explanations that describe how computers draw pictures. The 22 chapters of the book are organized into 5 sections. "Part 1: Computer Graphics in…

  6. Effectiveness of a Computer-Based Training Program of Attention and Memory in Patients with Acquired Brain Damage

    PubMed Central

    Fernandez, Elizabeth; Bergado Rosado, Jorge A.; Rodriguez Perez, Daymi; Salazar Santana, Sonia; Torres Aguilar, Maydane; Bringas, Maria Luisa

    2017-01-01

    Many training programs have been designed using modern software to restore impaired cognitive functions in patients with acquired brain damage (ABD). The objective of this study was to evaluate the effectiveness of a computer-based training program of attention and memory in patients with ABD, using a two-armed parallel group design, where the experimental group (n = 50) received cognitive stimulation using RehaCom software, and the control group (n = 30) received standard (non-computerized) cognitive stimulation for eight weeks. In order to assess possible cognitive changes after the treatment, a pre-post experimental design was employed using the following neuropsychological tests: the Wechsler Memory Scale (WMS) and Trail Making Test A and B. The effectiveness of the training procedure was statistically significant (p < 0.05) when comparing performance on these scales before and after the training period, both within each patient and between the two groups. The training group had statistically significant (p < 0.001) changes in focused attention (Trail A), two subtests (digit span and logical memory), and the overall WMS score. Finally, we discuss the advantages of computerized training rehabilitation and further directions of this line of work. PMID:29301194

  7. Analysis of Defenses Against Code Reuse Attacks on Modern and New Architectures

    DTIC Science & Technology

    2015-09-01

    Analysis of Defenses Against Code Reuse Attacks on Modern and New Architectures, by Isaac Noah Evans; submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Engineering in Electrical Engineering and Computer Science. On control-flow graph (CFG) analyses, the thesis notes a trade-off between soundness and completeness: an incomplete analysis will produce extra edges in the CFG that might allow an attacker to slip through, while an unsound analysis...

  8. Body metaphors--reading the body in contemporary culture.

    PubMed

    Skara, Danica

    2004-01-01

    This paper addresses the linguistic reframing of the human body in contemporary culture. Our aim is to provide a linguistic description of the ways in which the body is represented in the modern English language. First, we focus on body metaphors in general. We have collected a sample of 300 words and phrases functioning as body metaphors in the modern English language. Reading the symbolism of the body, we are witnessing changes in the basic metaphorical structuring of the human body. The results show that new vocabulary binds different fields of knowledge associated with machines and human beings according to a shared textual frame: the "human as computer" and "computer as human" metaphors. Humans are almost blended with computers and vice versa. This metaphorical use of the human body and its parts reveals not only currents of unconscious thought but also the structures of modern society and culture.

  9. 75 FR 33651 - Meetings of Humanities Panel

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-14

    ... Comparative Literature and Literary Theory in Fellowships, submitted to the Division of Research Programs at... meeting will review applications for Early Modern European History in Fellowships, submitted to the.... Room: 415. Program: This meeting will review applications for Modern European History I in Fellowships...

  10. Federal Aviation Administration : challenges in modernizing the agency

    DOT National Transportation Integrated Search

    2000-02-01

    FAA's efforts to implement initiatives in five key areas-air traffic control modernization, procurement and personnel reform, aviation safety, aviation and computer security, and financial management-have met with limited success. For example, FAA ha...

  11. Preparing Dairy Technologists

    ERIC Educational Resources Information Center

    Sliva, William R.

    1977-01-01

    The size of modern dairy plant operations has led to extreme specialization in product manufacturing, milk processing, microbiological analysis, chemical and mathematical computations. Morrisville Agricultural and Technical College, New York, has modernized its curricula to meet these changes. (HD)

  12. Sri Lanka--Canada School Library & Information Services Programme Components: A School Library Study Tour. Final Report.

    ERIC Educational Resources Information Center

    Brown, Gerald R.

    This document reports on a study tour of Canadian schools conducted by the Sri Lanka Ministry of Education. The purposes of the tour were to: develop an awareness of the scope of modern school library programming; investigate the aspects of implementation of a modern school library program including staffing, facilities, educational programming,…

  13. Modernization and unification: Strategic goals for NASA STI program

    NASA Technical Reports Server (NTRS)

    Blados, W.; Cotter, Gladys A.

    1993-01-01

    Information is increasingly becoming a strategic resource in all societies and economies. The NASA Scientific and Technical Information (STI) Program has initiated a modernization program to address the strategic importance and changing characteristics of information. This modernization effort applies new technology to current processes to provide near-term benefits to the user. At the same time, we are developing a long-term modernization strategy designed to transition the program to a multimedia, global 'library without walls.' Notwithstanding this modernization program, it is recognized that no one information center can hope to collect all the relevant data. We see information and information systems changing and becoming more international in scope. We are finding that many nations are expending resources on national systems which duplicate each other. At the same time that this duplication exists, many useful sources of aerospace information are not being collected because of resource limitations. If nations cooperate to develop an international aerospace information system, resources can be used efficiently to cover expanded sources of information. We must consider forming a coalition to collect and provide access to disparate, multidisciplinary sources of information, and to develop standardized tools for documenting and manipulating this data and information. In view of recent technological developments in information science and technology, as well as the reality of scarce resources in all nations, it is time to explore the mutually beneficial possibilities offered by cooperation and international resource sharing. International resources need to be mobilized in a coordinated manner to move us towards this goal. This paper reviews the NASA modernization program and raises for consideration new possibilities for unification of the various aerospace database efforts toward a cooperative international aerospace database initiative that can optimize the cost/benefit equation for all participants.

  14. Student Perceived Importance and Correlations of Selected Computer Literacy Course Topics

    ERIC Educational Resources Information Center

    Ciampa, Mark

    2013-01-01

    Traditional college-level courses designed to teach computer literacy are in a state of flux. Today's students have high rates of access to computing technology and computer ownership, leading many policy decision makers to conclude that students already are computer literate and thus computer literacy courses are dinosaurs in a modern digital…

  15. China.

    PubMed

    1981-01-01

    China's 3rd national census will belong to the era of modern census taking. Over 6 million enumerators will be involved along with 29 computers for data processing. The 3-year budget exceeds the equivalent of $135 million. A pilot census was taken in the city and county of Wuxi in Jiangsu province south of Shanghai during June 1980. Additional pilot censuses are to be conducted in the provinces beginning early in 1981. The full count is scheduled to be 1 year later on July 1, 1982. Results will be processed and made available by 1984 so that planners can utilize them in drafting the 5-year development plan for 1985-1990. The censuses of 1953 and 1964 yielded little data by modern standards. The longterm objective of the Population Census Office is to build up a modern census taking capability. This will provide data for the formulation of population and development policies, programs to implement those policies, and family planning activities. Another longterm objective is to extend the new data processing system to 399 prefectures and 2168 counties in China. The equipment will be subsequently used in related research activities. For the current census, a complete organization of census offices, census working teams, and census working groups will be established at successive administrative levels down to neighborhood (urban) and brigade (rural) levels, beginning early in 1981. The full census will cover 29 provinces of China. Approximately 6 million enumerators will each cover about 30-40 households. 2 models of computer and corresponding data entry systems are being used: 8 Wang VS 2200 systems and 21 IBM 4300 series systems from the U.S. The United Nations Fund for Population Activities is supplying equipment and technical assistance for the entire census amounting to more than $15 million. The Population Census Office will analyze and publish the census data.

  16. Discovering epistasis in large scale genetic association studies by exploiting graphics cards.

    PubMed

    Chen, Gary K; Guo, Yunfei

    2013-12-03

    Despite the enormous investments made in collecting DNA samples and generating germline variation data across thousands of individuals in modern genome-wide association studies (GWAS), progress has been frustratingly slow in explaining much of the heritability in common disease. Today's paradigm of testing independent hypotheses on each single nucleotide polymorphism (SNP) marker is unlikely to adequately reflect the complex biological processes in disease risk. Alternatively, modeling risk as an ensemble of SNPs that act in concert in a pathway, and/or interact non-additively on log risk, for example, may be a more sensible way to approach gene mapping in modern studies. Implementing such analyses genome-wide can quickly become intractable, because even modest-size SNP panels on modern genotype arrays (500k markers) pose a combinatorial nightmare, requiring tens of billions of models to be tested for evidence of interaction. In this article, we provide an in-depth analysis of programs that have been developed to explicitly overcome these enormous computational barriers through the use of processors on graphics cards known as Graphics Processing Units (GPUs). We include tutorials on GPU technology, which convey why they are growing in appeal with today's numerical scientists. One obvious advantage is the impressive density of microprocessor cores that are available on a single GPU. Whereas high-end servers feature up to 24 Intel or AMD CPU cores, the latest GPU offerings from nVidia feature over 2600 cores. Each compute node may be outfitted with up to 4 GPU devices. Success on GPUs varies across problems. However, epistasis screens fare well due to the high degree of parallelism exposed in these problems. Papers that we review routinely report GPU speedups of over two orders of magnitude (>100x) over standard CPU implementations.
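
    The combinatorial barrier described above is easy to make concrete: with m SNPs there are m(m-1)/2 pairs to test, and each test is independent, which is exactly the structure GPU epistasis tools exploit. Below is a hedged CPU-side sketch of a naive exhaustive two-locus contingency-table scan (hypothetical function, using NumPy and SciPy); a GPU implementation would run many such tests in parallel.

```python
import numpy as np
from itertools import combinations
from scipy.stats import chi2_contingency

def pairwise_epistasis_scan(genotypes, phenotype):
    """Naive exhaustive two-locus scan: one contingency-table test
    per SNP pair. Each test is independent, so the scan is trivially
    parallel. With m = 500k SNPs there are ~1.25e11 pairs, hence the
    need for hardware acceleration; this sketch is CPU-only and tiny."""
    m = genotypes.shape[1]
    results = {}
    for i, j in combinations(range(m), 2):
        # 9 two-locus genotype classes (0/1/2 x 0/1/2) vs. case status.
        joint = genotypes[:, i] * 3 + genotypes[:, j]
        table = np.zeros((9, 2))
        for g, y in zip(joint, phenotype):
            table[g, y] += 1
        keep = table.sum(axis=1) > 0              # drop empty classes
        _, p, _, _ = chi2_contingency(table[keep])
        results[(i, j)] = p
    return results

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(200, 10))            # 200 subjects, 10 SNPs
y = rng.integers(0, 2, size=200)                  # case/control labels
print(min(pairwise_epistasis_scan(G, y).values()))
```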

  17. Discovering epistasis in large scale genetic association studies by exploiting graphics cards

    PubMed Central

    Chen, Gary K.; Guo, Yunfei

    2013-01-01

    Despite the enormous investments made in collecting DNA samples and generating germline variation data across thousands of individuals in modern genome-wide association studies (GWAS), progress has been frustratingly slow in explaining much of the heritability in common disease. Today's paradigm of testing independent hypotheses on each single nucleotide polymorphism (SNP) marker is unlikely to adequately reflect the complex biological processes in disease risk. Alternatively, modeling risk as an ensemble of SNPs that act in concert in a pathway, and/or interact non-additively on log risk, for example, may be a more sensible way to approach gene mapping in modern studies. Implementing such analyses genome-wide can quickly become intractable, because even modest-size SNP panels on modern genotype arrays (500k markers) pose a combinatorial nightmare, requiring tens of billions of models to be tested for evidence of interaction. In this article, we provide an in-depth analysis of programs that have been developed to explicitly overcome these enormous computational barriers through the use of processors on graphics cards known as Graphics Processing Units (GPUs). We include tutorials on GPU technology, which convey why they are growing in appeal with today's numerical scientists. One obvious advantage is the impressive density of microprocessor cores that are available on a single GPU. Whereas high-end servers feature up to 24 Intel or AMD CPU cores, the latest GPU offerings from nVidia feature over 2600 cores. Each compute node may be outfitted with up to 4 GPU devices. Success on GPUs varies across problems. However, epistasis screens fare well due to the high degree of parallelism exposed in these problems. Papers that we review routinely report GPU speedups of over two orders of magnitude (>100x) over standard CPU implementations. PMID:24348518

  18. Hands-on approach to teaching Earth system sciences using a information-computational web-GIS portal "Climate"

    NASA Astrophysics Data System (ADS)

    Gordova, Yulia; Gorbatenko, Valentina; Martynova, Yulia; Shulgina, Tamara

    2014-05-01

    Making education relevant to workplace tasks is a key problem of higher education, because old-school training programs are not keeping pace with the rapidly changing situation in the professional field of environmental sciences. A joint group of specialists from Tomsk State University and the Siberian Center for Environmental Research and Training/IMCES SB RAS developed several new courses for students of the "Climatology" and "Meteorology" specialties, which combine theoretical knowledge from up-to-date environmental sciences with practical tasks. To organize the educational process we use the open-source course management system Moodle (www.moodle.org), which gave us an opportunity to combine text and multimedia in the theoretical part of the educational courses. The hands-on approach is realized through innovative trainings performed within the information-computational platform "Climate" (http://climate.scert.ru/) using web-GIS tools. These trainings contain practical tasks on climate modeling and climate change assessment and analysis, and they are performed using the typical tools that scientists use in this kind of research. Thus, students are engaged in the use of modern tools of geophysical data analysis, which supports the dynamics of their professional learning. The hands-on approach can help fill the gap between education and practice because it is the only approach that offers experience, increases student involvement, and advances the use of modern information and communication tools. The courses are implemented at Tomsk State University and help form a modern curriculum in the Earth system science area. This work is partially supported by SB RAS project VIII.80.2.1 and RFBR grants 13-05-12034 and 14-05-00502.

  19. The China Factor in Japanese Military Modernization for the 21st Century

    DTIC Science & Technology

    1997-06-01

    Japan's ongoing modernization of its forces, which is directed under its National Defense Program Outline and Midterm Defense Program, does not, however, seem to be in reaction to any overt perception of a... It did so with the economic motive of acquiring a captive market for Japanese consumer goods, the strategic consideration of preempting Russian

  20. Computational Ecology and Open Science: Tools to Help Manage Lakes for Cyanobacteria in Lakes

    EPA Science Inventory

    Computational ecology is an interdisciplinary field that takes advantage of modern computation abilities to expand our ecological understanding. As computational ecologists, we use large data sets, which often cover large spatial extents, and advanced statistical/mathematical co...

  1. Introducing difference recurrence relations for faster semi-global alignment of long sequences.

    PubMed

    Suzuki, Hajime; Kasahara, Masahiro

    2018-02-19

    The read length of single-molecule DNA sequencers is reaching 1 Mb. Popular alignment software tools widely used for analyzing such long reads often take advantage of single-instruction multiple-data (SIMD) operations to accelerate calculation of dynamic programming (DP) matrices in the Smith-Waterman-Gotoh (SWG) algorithm with a fixed alignment start position at the origin. Nonetheless, 16-bit or 32-bit integers are necessary for storing the values in a DP matrix when the sequences to be aligned are long; this situation hampers the use of the full SIMD width of modern processors. We propose a faster semi-global alignment algorithm, "difference recurrence relations," that runs more rapidly than the state-of-the-art algorithm by a factor of 2.1. Instead of calculating and storing all the values in a DP matrix directly, our algorithm computes and stores mainly the differences between the values of adjacent cells in the matrix. Although the SWG algorithm and our algorithm produce exactly the same result, our algorithm mainly involves 8-bit integer operations, enabling us to exploit the full width of SIMD operations (e.g., 32) on modern processors. We also developed a library, libgaba, so that developers can easily integrate our algorithm into alignment programs. Our novel algorithm and optimized library implementation will facilitate accelerating nucleotide long-read analysis algorithms that use pairwise alignment stages. The library is implemented in the C programming language and available at https://github.com/ocxtal/libgaba .
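
    The essence of a difference recurrence is that differences between adjacent DP cells are bounded by the scoring parameters even when the absolute scores are not, so they fit in 8-bit integers. The sketch below demonstrates the idea on plain Needleman-Wunsch with linear gaps (a simplification of the paper's semi-global, affine-gap, SIMD-vectorized scheme; the function name is hypothetical).

```python
import numpy as np

MATCH, MISMATCH, GAP = 1, -1, -2

def nw_row_differences(a, b):
    """Needleman-Wunsch with linear gaps that stores horizontal
    differences H[i][j] - H[i][j-1] instead of absolute scores.
    The differences are bounded by the score parameters, so int8
    holds them no matter how long the sequences are -- the key idea
    behind difference recurrences (libgaba's real scheme also
    handles affine gaps and SIMD vectors)."""
    n = len(b)
    diff = np.full(n, GAP, dtype=np.int8)  # row 0: H[0][j] = j*GAP
    h_left = 0                             # absolute score H[i][0]
    for ai in a:
        prev_h = h_left                    # H[i-1][j-1], kept as a full int
        h_left += GAP                      # H[i][0] = i*GAP
        h = h_left
        for j, bj in enumerate(b):
            up = prev_h + int(diff[j])     # reconstruct H[i-1][j]
            diag = prev_h + (MATCH if ai == bj else MISMATCH)
            new_h = max(diag, up + GAP, h + GAP)
            diff[j] = new_h - h            # store only the bounded difference
            prev_h, h = up, new_h
    return h                               # score of the full alignment

assert nw_row_differences("GATTACA", "GATTACA") == 7 * MATCH
```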

  2. Modern Electronic Devices: An Increasingly Common Cause of Skin Disorders in Consumers.

    PubMed

    Corazza, Monica; Minghetti, Sara; Bertoldi, Alberto Maria; Martina, Emanuela; Virgili, Annarosa; Borghi, Alessandro

    2016-01-01

    The modern conveniences and enjoyment brought about by electronic devices bring with them some health concerns. In particular, personal electronic devices are responsible for rising cases of several skin disorders, including pressure and friction injuries, contact dermatitis, and other forms of physical dermatitis. The universal use of such devices, either for work or recreational purposes, will probably increase the occurrence of polymorphous skin manifestations over time. It is important for clinicians to consider electronics as potential sources of dermatological ailments, for proper patient management. We performed a literature review on skin disorders associated with the personal use of modern technology, including personal computers and laptops, personal computer accessories, mobile phones, tablets, video games, and consoles.

  3. Power-plant modernization program in Latvia. Desk Study Report No. 1. Export trade information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-08-01

    The Government of Latvia has requested the U.S. Trade and Development Program's (TDP's) assistance in financing the cost of a feasibility study to develop a modernization program for its thermal power stations aimed at improving their performance and efficiency. The consultant will work with engineers and managers of Latvenergo, Latvia's power utility, to review the performance of the country's two thermal power stations and carry out a detailed study for the rehabilitation and modernization of the TEC-2 thermal power station in Riga. The overall goal of the program will be to maximize the output capacity of the country's two power stations through the implementation of economically efficient rehabilitation projects.

  4. Bringing Legacy Visualization Software to Modern Computing Devices via Application Streaming

    NASA Astrophysics Data System (ADS)

    Fisher, Ward

    2014-05-01

    Planning software compatibility across forthcoming generations of computing platforms is a problem commonly encountered in software engineering and development. While this problem can affect any class of software, data analysis and visualization programs are particularly vulnerable. This is due in part to their inherent dependency on specialized hardware and computing environments. A number of strategies and tools have been designed to aid software engineers with this task. While generally embraced by developers at 'traditional' software companies, these methodologies are often dismissed by the scientific software community as unwieldy, inefficient and unnecessary. As a result, many important and storied scientific software packages can struggle to adapt to a new computing environment; for example, one in which much work is carried out on sub-laptop devices (such as tablets and smartphones). Rewriting these packages for a new platform often requires significant investment in terms of development time and developer expertise. In many cases, porting older software to modern devices is neither practical nor possible. As a result, replacement software must be developed from scratch, wasting resources better spent on other projects. Enabled largely by the rapid rise and adoption of cloud computing platforms, 'Application Streaming' technologies allow legacy visualization and analysis software to be operated wholly from a client device (be it laptop, tablet or smartphone) while retaining full functionality and interactivity. It mitigates much of the developer effort required by other more traditional methods while simultaneously reducing the time it takes to bring the software to a new platform. This work will provide an overview of Application Streaming and how it compares against other technologies which allow scientific visualization software to be executed from a remote computer. We will discuss the functionality and limitations of existing application streaming frameworks and how a developer might prepare their software for application streaming. We will also examine the secondary benefits realized by moving legacy software to the cloud. Finally, we will examine the process by which a legacy Java application, the Integrated Data Viewer (IDV), is to be adapted for tablet computing via Application Streaming.

  5. cudaMap: a GPU accelerated program for gene expression connectivity mapping

    PubMed Central

    2013-01-01

    Background Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. Results cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU assisted performance and CPU executions as the computational load increases for high accuracy evaluation of statistical significance. Conclusion Emerging ‘omics’ technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap. PMID:24112435
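
    Connectivity mapping itself reduces to scoring a query gene signature against ranked reference expression profiles, and each profile can be scored independently, which is what makes the task GPU-friendly. The sketch below shows a simple signed-rank connection score in the spirit of sscMap (illustrative only; connection_score is hypothetical and not cudaMap's exact statistic).

```python
def connection_score(ranked_genes, up_genes, down_genes):
    """Signed-rank connection score in the spirit of sscMap: the
    reference profile lists genes from most up-regulated to most
    down-regulated; signature genes expected to go up score
    positively near the top, genes expected to go down score
    positively near the bottom. Illustrative sketch only."""
    n = len(ranked_genes)
    rank = {g: n - i for i, g in enumerate(ranked_genes)}  # top gene -> n
    centre = (n + 1) / 2
    score = (sum(rank[g] - centre for g in up_genes)
             - sum(rank[g] - centre for g in down_genes))
    # Maximum: up genes occupy the top ranks, down genes the bottom ranks.
    best_up = sum(n - i - centre for i in range(len(up_genes)))
    best_down = sum(centre - (i + 1) for i in range(len(down_genes)))
    return score / (best_up + best_down)      # normalized to [-1, 1]

profile = [f"g{i}" for i in range(100)]       # g0 = most up-regulated
print(connection_score(profile, up_genes=["g0", "g3"], down_genes=["g99"]))
```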

  6. Vids: Version 2.0 Alpha Visualization Engine

    DTIC Science & Technology

    2018-04-25

    fidelity than existing efforts. Vids is a project aimed at producing more dynamic and interactive visualization tools using modern computer game ...move through and interact with the data to improve informational understanding. The Vids software leverages off-the-shelf modern game development...analysis and correlations. Recently, an ARL-pioneered project named Virtual Reality Data Analysis Environment (VRDAE) used VR and a modern game engine

  7. Computing and Visualizing the Complex Dynamics of Earthquake Fault Systems: Towards Ensemble Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Rundle, J.; Rundle, P.; Donnellan, A.; Li, P.

    2003-12-01

    We consider the problem of the complex dynamics of earthquake fault systems, and whether numerical simulations can be used to define an ensemble forecasting technology similar to that used in weather and climate research. To effectively carry out such a program, we need 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001), in which we model all of the major strike-slip faults extending throughout California, from the Mexico-California border to the Mendocino Triple Junction. We use the historic data set of earthquakes larger than magnitude M > 6 to define the frictional properties of all 654 fault segments (degrees of freedom) in the model. Previous versions of Virtual California had used only 215 fault segments to model the strike-slip faults in southern California. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a small Beowulf cluster consisting of 10 CPUs. We are also planning to run the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution and to assess the scaling properties of the code. We present results of simulations both as static images and as MPEG movies, so that the dynamical aspects of the computation can be assessed by the viewer. We also compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems.
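
    The message-passing structure mentioned above can be sketched in a few lines. Below is a hedged mpi4py illustration of the pattern (hypothetical; the actual simulator uses MPI via MPICH from compiled code): each rank owns a block of the 654 fault segments and collective operations combine results. Run it under mpirun with several processes.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Hypothetical sketch: each rank owns a block of fault segments and
# updates their stress state; a global reduction finds the largest
# stress, mirroring the message-passing pattern of a Beowulf run.
n_segments = 654
local = np.array_split(np.arange(n_segments), size)[rank]
stress = np.random.default_rng(rank).random(local.size)

local_max = stress.max()
global_max = comm.allreduce(local_max, op=MPI.MAX)
if rank == 0:
    print(f"{size} ranks, global max stress = {global_max:.3f}")
```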

  8. Demand generation activities and modern contraceptive use in urban areas of four countries: a longitudinal evaluation

    PubMed Central

    Speizer, Ilene S; Corroon, Meghan; Calhoun, Lisa; Lance, Peter; Montana, Livia; Nanda, Priya; Guilkey, David

    2014-01-01

    Family planning is crucial for preventing unintended pregnancies and for improving maternal and child health and well-being. In urban areas where there are large inequities in family planning use, particularly among the urban poor, programs are needed to increase access to and use of contraception among those most in need. This paper presents the midterm evaluation findings of the Urban Reproductive Health Initiative (Urban RH Initiative) programs, funded by the Bill & Melinda Gates Foundation, that are being implemented in 4 countries: India (Uttar Pradesh), Kenya, Nigeria, and Senegal. Between 2010 and 2013, the Measurement, Learning & Evaluation (MLE) project collected baseline and 2-year longitudinal follow-up data from women in target study cities to examine the role of demand generation activities undertaken as part of the Urban RH Initiative programs. Evaluation results demonstrate that, in each country where it was measured, outreach by community health or family planning workers as well as local radio programs were significantly associated with increased use of modern contraceptive methods. In addition, in India and Nigeria, television programs had a significant effect on modern contraceptive use, and in Kenya and Nigeria, the program slogans and materials that were blanketed across the cities (eg, leaflets/brochures distributed at health clinics and the program logo placed on all forms of materials, from market umbrellas to health facility signs and television programs) were also significantly associated with modern method use. Our results show that targeted, multilevel demand generation activities can make an important contribution to increasing modern contraceptive use in urban areas and could impact Millennium Development Goals for improved maternal and child health and access to reproductive health for all. PMID:25611476

  9. Case for a field-programmable gate array multicore hybrid machine for an image-processing application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos

    2011-01-01

    General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.
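
    The iris-matching kernel that dominates such workloads is typically a masked fractional Hamming distance between binary iris codes: XOR the codes, AND with the valid-bit masks, and count set bits. The hedged sketch below (a standard Daugman-style measure with a hypothetical function name) shows why the comparison parallelizes so well: each probe-gallery pair is independent bitwise work.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Masked fractional Hamming distance between two binary iris
    codes (Daugman-style). Each comparison is independent bitwise
    work, which is why iris matching maps well onto multicore CPUs,
    the Cell, and FPGA fabric alike."""
    valid = mask_a & mask_b                  # bits usable in both codes
    disagree = (code_a ^ code_b) & valid     # XOR marks differing bits
    return np.count_nonzero(disagree) / max(np.count_nonzero(valid), 1)

rng = np.random.default_rng(0)
probe = rng.integers(0, 2, 2048, dtype=np.uint8).astype(bool)
gallery = probe.copy(); gallery[:100] ^= True          # flip 100 bits
mask = np.ones(2048, dtype=bool)
print(hamming_distance(probe, gallery, mask, mask))    # ~ 100/2048
```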

  10. CalcHEP 3.4 for collider physics within and beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Belyaev, Alexander; Christensen, Neil D.; Pukhov, Alexander

    2013-07-01

    We present version 3.4 of the CalcHEP software package, which is designed for effective evaluation and simulation of high-energy physics collider processes at parton level. The main features of CalcHEP are the computation of Feynman diagrams, integration over multi-particle phase space, and event simulation at parton level. The principal attractive key points along these lines are that it has: (a) an easy startup and usage even for those who are not familiar with CalcHEP and programming; (b) a friendly and convenient graphical user interface (GUI); (c) the option for the user to easily modify a model or introduce a new model by either using the graphical interface or by using an external package, with the possibility of cross-checking the results in different gauges; (d) a batch interface which allows one to perform very complicated and tedious calculations connecting production and decay modes for processes with many particles in the final state. With this feature set, CalcHEP can efficiently perform calculations with a high level of automation, from a theory in the form of a Lagrangian down to phenomenology in the form of cross sections, parton-level event simulation, and various kinematical distributions. In this paper we report on the new features of CalcHEP 3.4, which improve the power of our package as an effective tool for the study of modern collider phenomenology. Program summary: Program title: CalcHEP. Catalogue identifier: AEOV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 78535. No. of bytes in distributed program, including test data, etc.: 818061. Distribution format: tar.gz. Programming language: C. Computer: PC, MAC, Unix workstations. Operating system: Unix. RAM: depends on process under study. Classification: 4.4, 5. External routines: X11. Nature of problem: implement new models of particle interactions; generate Feynman diagrams for a physical process in any implemented theoretical model; integrate phase space for Feynman diagrams to obtain cross sections or particle widths taking into account kinematical cuts; simulate collisions at modern colliders and generate respective unweighted events; mix events for different subprocesses and connect them with the decays of unstable particles. Solution method: symbolic calculations; squared Feynman diagram approach; Vegas Monte Carlo algorithm. Restrictions: up to 2→4 production (1→5 decay) processes are realistic on typical computers; higher multiplicities are sometimes possible for specific 2→5 and 2→6 processes. Unusual features: graphical user interface, symbolic algebra calculation of the squared matrix element, parallelization on a PBS cluster. Running time: depends strongly on the process. For a typical 2→2 process it takes seconds; for 2→3 processes the typical running time is of the order of minutes; for higher multiplicities it could take much longer.

  11. Air Force Intercontinental Ballistic Missile Fuze Modernization (ICBM Fuze Mod)

    DTIC Science & Technology

    2015-12-01

    Selected Acquisition Report (SAR), RCS: DD-A&T(Q&A)823-498: Air Force Intercontinental Ballistic Missile Fuze Modernization (ICBM Fuze Mod), as of FY 2015. December 2015 SAR, March 17, 2016. Table of contents: Common Acronyms and Abbreviations for MDAP Programs; Program Information; Acquisition Unit Cost. PB - President's Budget; PE - Program Element; PEO - Program

  12. The Role of Modern Control Theory in the Design of Controls for Aircraft Turbine Engines

    NASA Technical Reports Server (NTRS)

    Zeller, J.; Lehtinen, B.; Merrill, W.

    1982-01-01

    Accomplishments in applying Modern Control Theory to the design of controls for advanced aircraft turbine engines were reviewed. The results of successful research programs are discussed. Ongoing programs as well as planned or recommended future thrusts are also discussed.

  13. Teaching Modern Dance: A Conceptual Approach

    ERIC Educational Resources Information Center

    Enghauser, Rebecca Gose

    2008-01-01

    A conceptual approach to teaching modern dance can broaden the awareness and deepen the understanding of modern dance in the educational arena in general, and in dance education specifically. This article describes a unique program that dance teachers can use to introduce modern dance to novice dancers, as well as more experienced dancers,…

  14. Modelling the influence of noise of the image sensor for blood cells recognition in computer microscopy

    NASA Astrophysics Data System (ADS)

    Nikitaev, V. G.; Nagornov, O. V.; Pronichev, A. N.; Polyakov, E. V.; Dmitrieva, V. V.

    2017-12-01

    The first stage in diagnosing blood cancer is the analysis of blood smears. The application of decision-making support systems would reduce the subjectivity of the diagnostic process and help avoid errors that can lead to often irreversible changes in the patient's condition. Solving this problem therefore requires modern technology. Texture features are one of the tools for the programmatic classification of blood cells, and the task of finding the informative ones among them is promising. The paper investigates the effect of image sensor noise on informative texture features using methods of mathematical modelling.

  15. Eigensystem realization algorithm user's guide for VAX/VMS computers: Version 931216

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.

    1994-01-01

    The eigensystem realization algorithm (ERA) is a multiple-input, multiple-output, time domain technique for structural modal identification and minimum-order system realization. Modal identification is the process of calculating structural eigenvalues and eigenvectors (natural vibration frequencies, damping, mode shapes, and modal masses) from experimental data. System realization is the process of constructing state-space dynamic models for modern control design. This user's guide documents VAX/VMS-based FORTRAN software developed by the author since 1984 in conjunction with many applications. It consists of a main ERA program and 66 pre- and post-processors. The software provides complete modal identification capabilities and most system realization capabilities.
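
    Abstracting away the FORTRAN implementation, the core ERA computation is compact. The sketch below is ours, not Pappa's code: a minimal Python/NumPy version that builds the block Hankel matrices from impulse-response (Markov) parameters, truncates the SVD at the requested order, and recovers a discrete-time state-space realization plus modal frequencies and damping ratios.

```python
import numpy as np

def era(Y, n, rows, cols, dt):
    """Minimal Eigensystem Realization Algorithm sketch.

    Y          : (nsamp, p, m) impulse-response (Markov) parameters Y_k = C A^k B
    n          : desired model order; rows, cols: Hankel block counts
    dt         : sample period; requires rows + cols <= nsamp
    """
    p, m = Y.shape[1], Y.shape[2]
    H0 = np.block([[Y[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.block([[Y[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    Ur, sq, Vr = U[:, :n], np.sqrt(s[:n]), Vt[:n].T
    A = (Ur / sq).T @ H1 @ (Vr / sq)      # Sigma^-1/2 Ur^T H1 Vr Sigma^-1/2
    B = (sq[:, None] * Vr.T)[:, :m]       # first m columns of Sigma^1/2 Vr^T
    C = (Ur * sq)[:p, :]                  # first p rows of Ur Sigma^1/2
    s_cont = np.log(np.linalg.eigvals(A)) / dt   # discrete -> continuous poles
    freq_hz = np.abs(s_cont) / (2.0 * np.pi)
    damping = -s_cont.real / np.abs(s_cont)
    return A, B, C, freq_hz, damping

# Demo: Markov parameters of a known 2-state system, recovered at order 2.
A0 = np.array([[0.9, 0.1], [-0.1, 0.9]])
B0 = np.array([[1.0], [0.0]]); C0 = np.array([[1.0, 0.0]])
Y = np.array([C0 @ np.linalg.matrix_power(A0, k) @ B0 for k in range(40)])
print(era(Y, n=2, rows=10, cols=10, dt=0.1)[3])   # identified frequencies (Hz)
```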

  16. Extended precision data types for the development of the original computer aided engineering applications

    NASA Astrophysics Data System (ADS)

    Pescaru, A.; Oanta, E.; Axinte, T.; Dascalescu, A.-D.

    2015-11-01

    Computer-aided engineering is based on models of phenomena expressed as algorithms. The implementations of these algorithms are usually software applications that process a large volume of numerical data, regardless of the size of the input data. For example, finite element method applications used to have an input data generator that created the entire volume of geometrical data, starting from the initial geometrical information and the parameters stored in the input data file. Moreover, there were several data-processing stages, such as: renumbering the nodes to minimize the bandwidth of the system of equations to be solved, computing the equivalent nodal forces, computing the element stiffness matrices, assembling the system of equations, solving the system of equations, and computing the secondary variables. Modern software applications use pre-processing and post-processing programs to handle the information easily. Beyond this example, CAE applications involve various stages of complex computation, where the accuracy of the final results is of particular interest. The development of CAE applications has been a constant concern of the authors, and the accuracy of the results a very important target. The paper presents the various computing techniques that were devised and implemented in the resulting applications: finite element method programs, finite difference element method programs, applied general numerical methods applications, data generators, graphical applications, and experimental data reduction programs. In this context, the use of extended precision data types was one of the solutions, with limitations imposed by the size of the memory that may be allocated. To avoid memory-related problems, the data was stored in files; to minimize execution time, parts of the files were accessed using dynamic memory allocation facilities. One of the most important outcomes of the paper is the design of a library that includes the optimized solutions previously tested, which may be used for the easy development of original cross-platform CAE applications. Last but not least, beside the generality of the data type solutions, the work targets a software library for the easy development of node-based CAE applications, where each node has several known or unknown parameters and the system of equations is automatically generated and solved.
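
    As a minimal illustration of why the working precision matters in the long accumulation chains typical of CAE computations (our example, not the authors' library; the actual width of np.longdouble is platform dependent, 80-bit extended precision on most x86 systems):

```python
import numpy as np

# Summing 10^7 copies of 0.1: the exact total is 10^6, but rounding of the
# partial sums accumulates differently at each working precision.
terms = np.full(10_000_000, 0.1)
for dtype in (np.float32, np.float64, np.longdouble):
    total = terms.astype(dtype).sum()
    print(dtype.__name__, total, "abs. error:", abs(float(total) - 1.0e6))
```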

  17. Introduction to Cosmology, Proceedings of the Polish Astronomical Society volume 4

    NASA Astrophysics Data System (ADS)

    Biernacka, Monika; Bajan, Katarzyna; Stachowski, Grzegorz; Pollo, Agnieszka

    2016-07-01

    On 11-23 July 2016, Jan Kochanowski University in Kielce hosted the Second Cosmological School, "Introduction to Cosmology". The main purpose of the School was to provide an introduction to a selection of the most interesting topics in modern cosmology, both in theory and observations. The program included a series of mini-workshops on cosmological simulations, Virtual Observatory databases and tools, and Spectral Energy Distribution fitting. The School was intended for undergraduate, MSc and PhD students, as well as young postdoctoral researchers. The School was co-organized by the Polish Astronomical Society, the Jan Kochanowski University in Kielce, the Jagiellonian University in Cracow, the National Centre for Nuclear Research and the N. Copernicus Astronomical Center in Warsaw. The Interdisciplinary Centre for Mathematical and Computational Modeling kindly provided us with the possibility to use their computing facilities remotely.

  19. Nonlinear Aerodynamics and the Design of Wing Tips

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan

    1991-01-01

    The analysis and design of wing tips for fixed wing and rotary wing aircraft still remains part art, part science. Although the design of airfoil sections and basic planform geometry is well developed, the tip regions require more detailed consideration. This is important because of the strong impact of wing tip flow on wing drag; although the tip region constitutes a small portion of the wing, its effect on the drag can be significant. The induced drag of a wing is, for a given lift and speed, inversely proportional to the square of the wing span. Concepts are proposed as a means of reducing drag. Modern computational methods provide a tool for studying these issues in greater detail. The purpose of the current research program is to improve the understanding of the fundamental issues involved in the design of wing tips and to develop the range of computational and experimental tools needed for further study of these ideas.
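
    For reference, the span-squared scaling invoked above is the classical lifting-line result (standard aerodynamics, not a formula specific to this program):

```latex
% Induced drag of a planar wing: L = lift, b = span, e = span-efficiency
% factor, q = (1/2) rho V^2 = dynamic pressure.
D_i = \frac{L^2}{q \, \pi \, b^2 \, e}
\qquad\Longrightarrow\qquad
D_i \propto \frac{1}{b^2} \quad \text{at fixed lift and speed.}
```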

  20. NORTICA—a new code for cyclotron analysis

    NASA Astrophysics Data System (ADS)

    Gorelov, D.; Johnson, D.; Marti, F.

    2001-12-01

    The new package NORTICA (Numerical ORbit Tracking In Cyclotrons with Analysis) of computer codes for beam dynamics simulations is under development at NSCL. The package was started as a replacement for the code MONSTER [1] developed in the laboratory in the past. The new codes are capable of beam dynamics simulations in both CCF (Coupled Cyclotron Facility) accelerators, the K500 and K1200 superconducting cyclotrons. The general purpose of the package is to assist in setting up and tuning the cyclotrons, taking into account imperfections in the main field and the extraction channel. The package runs on an Alpha Station under the UNIX operating system with an X Window graphical interface. A multiple-programming-language approach was used in order to combine the reliability of the numerical algorithms developed in the laboratory over a long period of time with the friendliness of a modern-style user interface. This paper describes the capabilities and features of the codes in their present state.

  1. Coalescent: an open-science framework for importance sampling in coalescent theory.

    PubMed

    Tewari, Susanta; Spouge, John L

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve the statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
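
    The framework itself is Java, but the estimator it builds on is easy to sketch generically. The toy Python example below is ours and has nothing coalescent-specific; it estimates a small tail probability by importance sampling and reports the effective sample size that the conclusions weigh against running time.

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate E_p[f(X)] for p = N(0,1) and f = indicator(X > 3), a rare event
# under p, by sampling from the shifted proposal q = N(3,1).
n = 100_000
x = rng.normal(3.0, 1.0, n)                      # draws from q
log_w = -0.5 * x**2 + 0.5 * (x - 3.0)**2         # log p(x) - log q(x)
w = np.exp(log_w)                                # importance weights
estimate = np.mean(w * (x > 3.0))
ess = w.sum()**2 / (w**2).sum()                  # effective sample size
print(estimate, ess)   # exact tail probability is about 1.35e-3
```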

  2. Laboratory Mathematics

    ERIC Educational Resources Information Center

    de Mestre, Neville

    2004-01-01

    Computers were invented to help mathematicians perform long and complicated calculations more efficiently. By the time that a computing area became a familiar space in primary and secondary schools, the initial motivation for computer use had been submerged in the many other functions that modern computers now accomplish. Not only the mathematics…

  3. Computing Legacy Software Behavior to Understand Functionality and Security Properties: An IBM/370 Demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J

    Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.

  4. Re-Computation of Numerical Results Contained in NACA Report No. 496

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III

    2015-01-01

    An extensive examination of NACA Report No. 496 (NACA 496), "General Theory of Aerodynamic Instability and the Mechanism of Flutter," by Theodore Theodorsen, is described. The examination included checking equations and solution methods and re-computing interim quantities and all numerical examples in NACA 496. The checks revealed that NACA 496 contains computational shortcuts (time- and effort-saving devices for engineers of the time) and clever artifices (employed in its solution methods), but, unfortunately, also contains numerous tripping points (aspects of NACA 496 that have the potential to cause confusion) and some errors. The re-computations were performed employing the methods and procedures described in NACA 496, but using modern computational tools. With some exceptions, the magnitudes and trends of the original results were in fair-to-very-good agreement with the re-computed results. The exceptions included what are speculated to be computational errors in the original in some instances and transcription errors in the original in others. Independent flutter calculations were performed and, in all cases, including those where the original and re-computed results differed significantly, were in excellent agreement with the re-computed results. Appendix A contains NACA 496; Appendix B contains a MATLAB® program that performs the re-computation of results; Appendix C presents three alternate solution methods, with examples, for the two-degree-of-freedom solution method of NACA 496; Appendix D contains the three-degree-of-freedom solution method (outlined in NACA 496 but never implemented), with examples.

  5. Large-scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU).

    PubMed

    Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin

    2015-01-15

    Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.
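
    The study used MATLAB's GPU-enabled functions; as a rough Python analogue of the same GPU-CPU co-processing pattern (assuming the third-party CuPy library and an NVIDIA GPU, with a CPU fallback so the sketch stays runnable anywhere):

```python
import numpy as np
try:
    import cupy as xp          # GPU path: NumPy-compatible API on NVIDIA GPUs
except ImportError:
    xp = np                    # CPU fallback keeps the sketch runnable

# A batched FFT workload, roughly the shape of post hoc imaging analysis.
frames = xp.asarray(np.random.rand(64, 512, 512).astype(np.float32))
spectra = xp.fft.rfft2(frames)                   # per-frame 2D FFTs, batched
mean_power = (spectra * spectra.conj()).real.mean(axis=0)
result = xp.asnumpy(mean_power) if xp is not np else mean_power
print(result.shape)
```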

  6. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  7. 78 FR 49126 - Modernizing the FCC Form 477 Data Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-13

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Parts 0, 1, and 43 [WC Docket No. 11-10; FCC 13-87] Modernizing the FCC Form 477 Data Program AGENCY: Federal Communications Commission. ACTION: Final rule. SUMMARY: The Report and Order revises the Federal Communications Commission's Form 477 collection to...

  8. Personal and Family Survival. Civil Defense Adult Education; Teacher's Manual.

    ERIC Educational Resources Information Center

    Office of Civil Defense (DOD), Washington, DC.

    A manual intended as an instructor's aid in presenting a Civil Defense Adult Education Course is presented. It contains 10 lesson plans: Course Introduction, Modern Weapons and Radioactive Fallout (Effects), Modern Weapons and Radioactive Fallout (Protection), National Civil Defense Program, National Shelter Program (Community Shelters), National…

  9. 76 FR 10827 - Modernizing the FCC Form 477 Data Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-28

    ... Congress, the Commission, other policy makers, and consumers.'' The Commission designed the program as a... Initiative, designed to modernize and streamline how the Commission collects, uses, and disseminates data. As... burden for filers, for example, in the design of the collection or in tools the Commission can provide...

  10. Army/NASA small turboshaft engine digital controls research program

    NASA Technical Reports Server (NTRS)

    Sellers, J. F.; Baez, A. N.

    1981-01-01

    The emphasis of a program of digital controls research for small turboshaft engines is on engine test evaluation of advanced control logic, using a flexible, microprocessor-based digital control system designed specifically for such research. Control software is stored in programmable memory. New control algorithms may be stored on a floppy disk and loaded directly into memory. This feature facilitates comparative evaluation of different advanced control modes. The central processor in the digital control is an Intel 8086 16-bit microprocessor. Control software is programmed in assembly language. Software checkout is accomplished prior to engine test by connecting the digital control to a real-time hybrid computer simulation of the engine. The engine currently installed in the facility has a hydromechanical control modified to allow electrohydraulic fuel metering and variable-geometry (VG) actuation by the digital control. Simulation results are presented which show that the modern control reduces the transient rotor speed droop caused by unanticipated load changes such as cyclic pitch or wind gust transients.
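
    As a toy illustration of the droop problem (not the program's actual control laws; the plant, gains and load profile below are made up), a discrete-time proportional-integral governor rejecting an unanticipated load step looks like this:

```python
import numpy as np

dt, T = 0.01, 10.0
J, N_ref = 1.0, 100.0            # rotor inertia, speed setpoint (arbitrary units)
kp, ki = 2.0, 4.0                # PI gains
N, integ, history = N_ref, 0.0, []
for tk in np.arange(0.0, T, dt):
    load = 5.0 if tk >= 2.0 else 0.0     # unanticipated load step at t = 2 s
    err = N_ref - N
    integ += err * dt
    fuel_torque = kp * err + ki * integ  # PI control law
    N += (fuel_torque - load) / J * dt   # first-order rotor dynamics
    history.append(N)
print("worst transient speed droop:", N_ref - min(history))
```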

  11. CAN-DO, CFD-based Aerodynamic Nozzle Design and Optimization program for supersonic/hypersonic wind tunnels

    NASA Technical Reports Server (NTRS)

    Korte, John J.; Kumar, Ajay; Singh, D. J.; White, J. A.

    1992-01-01

    A design program is developed which incorporates a modern approach to the design of supersonic/hypersonic wind-tunnel nozzles. The approach is obtained by coupling computational fluid dynamics (CFD) with design optimization. The program can be used to design 2D or axisymmetric, supersonic or hypersonic wind-tunnel nozzles that can be modeled with a calorically perfect gas. The nozzle design is obtained by solving a nonlinear least-squares optimization problem (LSOP). The LSOP is solved using an iterative procedure which requires intermediate flowfield solutions. The nozzle flowfield is simulated by solving the Navier-Stokes equations for the subsonic and transonic flow regions and the parabolized Navier-Stokes equations for the supersonic flow regions. The advantages of this method are that the design is based on the solution of the viscous equations, eliminating the need to make separate corrections to a design contour, and that the procedure can be applied flexibly to different types of nozzle design problems.
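
    The structure of such a nonlinear least-squares design loop can be sketched in a few lines. In the sketch below (ours), a cheap polynomial surrogate stands in for the flow solver; in CAN-DO each residual evaluation instead requires a Navier-Stokes or parabolized Navier-Stokes flowfield solution.

```python
import numpy as np
from scipy.optimize import least_squares

y = np.linspace(-1.0, 1.0, 41)      # exit-plane coordinate
target = np.ones_like(y)            # uniform target exit profile (scaled)

def exit_profile(a):
    # Hypothetical surrogate mapping contour parameters to the exit profile;
    # the real design loop would run a CFD solution here.
    return 1.0 + a[0] * y**2 + a[1] * y**4

res = least_squares(lambda a: exit_profile(a) - target, x0=[0.1, 0.1])
print("optimal contour parameters:", res.x, "residual cost:", res.cost)
```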

  12. Image analysis in modern ophthalmology: from acquisition to computer assisted diagnosis and telemedicine

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Cristóbal, Gabriel; Gabarda, Salvador; Sorel, Michal; Sroubek, Filip

    2012-06-01

    Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation and a permanent record for the patients, and most important the ability to extract information about many diseases. Modern ophthalmology thrives and develops on the advances in digital imaging and computing power. In this work we present an overview of recent image processing techniques proposed by the authors in the area of digital eye fundus photography. Our applications range from retinal image quality assessment to image restoration via blind deconvolution and visualization of structural changes in time between patient visits. All proposed within a framework for improving and assisting the medical practice and the forthcoming scenario of the information chain in telemedicine.

  13. LCFM - LIVING COLOR FRAME MAKER: PC GRAPHICS GENERATION AND MANAGEMENT TOOL FOR REAL-TIME APPLICATIONS

    NASA Technical Reports Server (NTRS)

    Truong, L. V.

    1994-01-01

    Computer graphics are often applied for better understanding and interpretation of data under observation. These graphics become more complicated when animation is required during "run-time", as found in many typical modern artificial intelligence and expert systems. Living Color Frame Maker is a solution to many of these real-time graphics problems. Living Color Frame Maker (LCFM) is a graphics generation and management tool for IBM or IBM compatible personal computers. To eliminate graphics programming, the graphic designer can use LCFM to generate computer graphics frames. The graphical frames are then saved as text files, in a readable and disclosed format, which can be easily accessed and manipulated by user programs for a wide range of "real-time" visual information applications. For example, LCFM can be implemented in a frame-based expert system for visual aids in management of systems. For monitoring, diagnosis, and/or controlling purposes, circuit or systems diagrams can be brought to "life" by using designated video colors and intensities to symbolize the status of hardware components (via real-time feedback from sensors). Thus status of the system itself can be displayed. The Living Color Frame Maker is user friendly with graphical interfaces, and provides on-line help instructions. All options are executed using mouse commands and are displayed on a single menu for fast and easy operation. LCFM is written in C++ using the Borland C++ 2.0 compiler for IBM PC series computers and compatible computers running MS-DOS. The program requires a mouse and an EGA/VGA display. A minimum of 77K of RAM is also required for execution. The documentation is provided in electronic form on the distribution medium in WordPerfect format. A sample MS-DOS executable is provided on the distribution medium. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The Living Color Frame Maker tool was developed in 1992.

  14. Alliance for Excellent Education's Focused Comments on Issues Raised in Notice of Proposed Rulemaking for Modernizing the E-Rate Program

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2014

    2014-01-01

    In comments submitted April 7, 2014, the Alliance for Excellent Education called on the Federal Communications Commission (FCC) to modernize the federal E-rate program in order to lay the foundation for expanding the program through increased funding. These comments are in response to an E-rate Public Notice issued by the FCC on March 6, 2014…

  15. STORM WATER MANAGEMENT MODEL (SWMM) MODERNIZATION

    EPA Science Inventory

    The U.S. Environmental Protection Agency's Water Supply and Water Resources Division has partnered with the consulting firm CDM to redevelop and modernize the Storm Water Management Model (SWMM). In the initial phase of this project EPA rewrote SWMM's computational engine usi...

  16. A History of Computer Numerical Control.

    ERIC Educational Resources Information Center

    Haggen, Gilbert L.

    Computer numerical control (CNC) has evolved from the first significant counting method--the abacus. Babbage had perhaps the greatest impact on the development of modern day computers with his analytical engine. Hollerith's functioning machine with punched cards was used in tabulating the 1890 U.S. Census. In order for computers to become a…

  17. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not too difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, they have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, fitting of data, 3D image processing, vector image processing, precision device control (rotary stages, PZT stages, etc.), point cloud to surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.

  18. Accelerating numerical solution of stochastic differential equations with CUDA

    NASA Astrophysics Data System (ADS)

    Januszewski, M.; Kostur, M.

    2010-01-01

    Numerical integration of stochastic differential equations is commonly used in many branches of science. In this paper we present how to accelerate this kind of numerical calculation with popular NVIDIA Graphics Processing Units using the CUDA programming environment. We address general aspects of numerical programming on stream processors and illustrate them with two examples: the noisy phase dynamics in a Josephson junction and the noisy Kuramoto model. In the presented cases the measured speedup can be as high as 675× compared to a typical CPU, which corresponds to several billion integration steps per second. This means that calculations which took weeks can now be completed in less than one hour. This brings stochastic simulation to a completely new level, opening up for research a whole new range of problems which can now be solved interactively.
    Program summary. Program title: SDE. Catalogue identifier: AEFG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU GPL v3. No. of lines in distributed program, including test data, etc.: 978. No. of bytes in distributed program, including test data, etc.: 5905. Distribution format: tar.gz. Programming language: CUDA C. Computer: any system with a CUDA-compatible GPU. Operating system: Linux. RAM: 64 MB of GPU memory. Classification: 4.3. External routines: the program requires the NVIDIA CUDA Toolkit version 2.0 or newer and the GNU Scientific Library v1.0 or newer; optionally, gnuplot is recommended for quick visualization of the results. Nature of problem: direct numerical integration of stochastic differential equations is a computationally intensive problem, due to the necessity of calculating multiple independent realizations of the system; we exploit the inherent parallelism of this problem and perform the calculations on GPUs using the CUDA programming environment; the GPU's ability to execute hundreds of threads simultaneously makes it possible to speed up the computation by over two orders of magnitude compared to a typical modern CPU. Solution method: the stochastic Runge-Kutta method of the second order is applied to integrate the equation of motion; ensemble-averaged quantities of interest are obtained through averaging over multiple independent realizations of the system. Unusual features: the numerical solution of the stochastic differential equations in question is performed on a GPU using the CUDA environment. Running time: < 1 minute.
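
    A CPU reference version of the integrator is short; the GPU version parallelizes the loop over realizations. The Python/NumPy sketch below is ours and implements a stochastic Heun step, one common form of the second-order stochastic Runge-Kutta named in the summary (the authors' exact scheme may differ); the drift and parameter values are illustrative.

```python
import numpy as np

def heun_sde(f, x0, t_end, dt, D, n_paths, rng):
    """Stochastic Heun scheme for dx = f(x) dt + sqrt(2 D) dW (additive noise)."""
    x = np.full(n_paths, x0, dtype=float)
    g = np.sqrt(2.0 * D)
    for _ in range(int(t_end / dt)):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        x_pred = x + f(x) * dt + g * dW                    # Euler predictor
        x = x + 0.5 * (f(x) + f(x_pred)) * dt + g * dW     # trapezoidal corrector
    return x

# Noisy phase dynamics in a driven Josephson junction: f(phi) = i0 - sin(phi).
rng = np.random.default_rng(0)
phi = heun_sde(lambda p: 0.9 - np.sin(p), 0.0, 50.0, 1e-3, 0.1, 10_000, rng)
print("ensemble mean:", phi.mean(), "ensemble variance:", phi.var())
```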

  19. WinTRAX: A raytracing software package for the design of multipole focusing systems

    NASA Astrophysics Data System (ADS)

    Grime, G. W.

    2013-07-01

    The software package TRAX was a simulation tool for modelling the path of charged particles through linear cylindrical multipole fields described by analytical expressions and was a development of the earlier OXRAY program (Grime and Watt, 1983; Grime et al., 1982) [1,2]. In a 2005 comparison of raytracing software packages (Incerti et al., 2005) [3], TRAX/OXRAY was compared with Geant4 and Zgoubi and was found to give close agreement with the more modern codes. TRAX was a text-based program which was only available for operation in a now rare VMS workstation environment, so a new program, WinTRAX, has been developed for the Windows operating system. This implements the same basic computing strategy as TRAX, and key sections of the code are direct translations from FORTRAN to C++, but the Windows environment is exploited to make an intuitive graphical user interface which simplifies and enhances many operations including system definition and storage, optimisation, beam simulation (including with misaligned elements) and aberration coefficient determination. This paper describes the program and presents comparisons with other software and real installations.

  20. Maximizing the U.S. Army’s Future Contribution to Global Security Using the Capability Portfolio Analysis Tool (CPAT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Scott J.; Edwards, Shatiel B.; Teper, Gerald E.

    We report that recent budget reductions have posed tremendous challenges to the U.S. Army in managing its portfolio of ground combat systems (tanks and other fighting vehicles), thus placing many important programs at risk. To address these challenges, the Army and a supporting team developed and applied the Capability Portfolio Analysis Tool (CPAT) to optimally invest in ground combat modernization over the next 25–35 years. CPAT provides the Army with the analytical rigor needed to help senior Army decision makers allocate scarce modernization dollars to protect soldiers and maintain capability overmatch. CPAT delivers unparalleled insight into multiple-decade modernization planning using a novel multiphase mixed-integer linear programming technique and illustrates a cultural shift toward analytics in the Army’s acquisition thinking and processes. CPAT analysis helped shape decisions to continue modernization of the $10 billion Stryker family of vehicles (originally slated for cancellation) and to strategically reallocate over $20 billion to existing modernization programs by not pursuing the Ground Combat Vehicle program as originally envisioned. Ultimately, more than 40 studies have been completed using CPAT, applying operations research methods to optimally prioritize billions of taxpayer dollars and allowing Army acquisition executives to base investment decisions on analytically rigorous evaluations of portfolio trade-offs.

  1. Maximizing the U.S. Army’s Future Contribution to Global Security Using the Capability Portfolio Analysis Tool (CPAT)

    DOE PAGES

    Davis, Scott J.; Edwards, Shatiel B.; Teper, Gerald E.; ...

    2016-02-01

    We report that recent budget reductions have posed tremendous challenges to the U.S. Army in managing its portfolio of ground combat systems (tanks and other fighting vehicles), thus placing many important programs at risk. To address these challenges, the Army and a supporting team developed and applied the Capability Portfolio Analysis Tool (CPAT) to optimally invest in ground combat modernization over the next 25–35 years. CPAT provides the Army with the analytical rigor needed to help senior Army decision makers allocate scarce modernization dollars to protect soldiers and maintain capability overmatch. CPAT delivers unparalleled insight into multiple-decade modernization planning using a novel multiphase mixed-integer linear programming technique and illustrates a cultural shift toward analytics in the Army’s acquisition thinking and processes. CPAT analysis helped shape decisions to continue modernization of the $10 billion Stryker family of vehicles (originally slated for cancellation) and to strategically reallocate over $20 billion to existing modernization programs by not pursuing the Ground Combat Vehicle program as originally envisioned. Ultimately, more than 40 studies have been completed using CPAT, applying operations research methods to optimally prioritize billions of taxpayer dollars and allowing Army acquisition executives to base investment decisions on analytically rigorous evaluations of portfolio trade-offs.
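
    CPAT's multiphase formulation is far richer, but the flavor of a portfolio MILP is easy to show. The toy one-period selection problem below uses made-up numbers and assumes SciPy >= 1.9 for scipy.optimize.milp.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

value = np.array([10.0, 7.0, 9.0, 4.0])   # capability value of each program
cost = np.array([6.0, 3.0, 5.0, 2.0])     # modernization cost ($B, invented)
budget = 9.0

res = milp(c=-value,                                   # maximize total value
           constraints=LinearConstraint(cost, ub=budget),
           integrality=np.ones_like(value),            # integer decisions
           bounds=Bounds(0, 1))                        # 0/1: fund or not
print("fund programs:", res.x, "total value:", -res.fun)
```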

  2. Novel 3D/VR interactive environment for MD simulations, visualization and analysis.

    PubMed

    Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P

    2014-12-18

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.

  3. Novel 3D/VR Interactive Environment for MD Simulations, Visualization and Analysis

    PubMed Central

    Doblack, Benjamin N.; Allis, Tim; Dávila, Lilian P.

    2014-01-01

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced. PMID:25549300

  4. New generation of docking programs: Supercomputer validation of force fields and quantum-chemical methods for docking.

    PubMed

    Sulimov, Alexey V; Kutov, Danil C; Katkova, Ekaterina V; Ilin, Ivan S; Sulimov, Vladimir B

    2017-11-01

    Discovery of new inhibitors of the protein associated with a given disease is the initial and most important stage of the whole process of the rational development of new pharmaceutical substances. New inhibitors block the active site of the target protein and the disease is cured. Computer-aided molecular modeling can considerably increase the effectiveness of new inhibitor development. Reliable prediction of the inhibition of a target protein by a small molecule (ligand) is defined by the accuracy of docking programs. Such programs position a ligand in the target protein and estimate the protein-ligand binding energy. The positioning accuracy of modern docking programs is satisfactory. However, the accuracy of binding energy calculations is too low to predict good inhibitors. For effective application of docking programs to the development of new inhibitors, the accuracy of binding energy calculations should be better than 1 kcal/mol. Reasons for the limited accuracy of modern docking programs are discussed. One of the most important factors limiting this accuracy is the imperfection of protein-ligand energy calculations. Results of supercomputer validation of several force fields and quantum-chemical methods for docking are presented. The validation was performed by quasi-docking as follows. First, the low-energy minima spectra of 16 protein-ligand complexes were found by exhaustive minima search in the MMFF94 force field. Second, the energies of the lowest 8192 minima were recalculated with the CHARMM force field and the PM6-D3H4X and PM7 quantum-chemical methods for each complex. The analysis of the minima energies reveals that the docking positioning accuracies of the PM7 and PM6-D3H4X quantum-chemical methods and the CHARMM force field are close to one another and better than the positioning accuracy of the MMFF94 force field. Copyright © 2017 Elsevier Inc. All rights reserved.
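
    The quasi-docking idea of generating many minima and ranking their force-field energies can be tried in miniature with open tooling. The sketch below (ours, assuming the third-party RDKit package) computes an MMFF94 minima spectrum for conformers of a single small molecule, a stand-in for the paper's exhaustive protein-ligand minima search.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin
AllChem.EmbedMultipleConfs(mol, numConfs=50, randomSeed=7)      # 3D conformers

# MMFF94 optimization of every conformer; each entry is (not_converged, energy).
results = AllChem.MMFFOptimizeMoleculeConfs(mol, mmffVariant="MMFF94")
energies = sorted(e for ok, e in results if ok == 0)
print("low-energy end of the MMFF94 minima spectrum:", energies[:5])
```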

  5. Kleene Monads: Handling Iteration in a Framework of Generic Effects

    NASA Astrophysics Data System (ADS)

    Goncharov, Sergey; Schröder, Lutz; Mossakowski, Till

    Monads are a well-established tool for modelling various computational effects. They form the semantic basis of Moggi’s computational metalanguage, the metalanguage of effects for short, which made its way into modern functional programming in the shape of Haskell’s do-notation. Standard computational idioms call for specific classes of monads that support additional control operations. Here, we introduce Kleene monads, which additionally feature nondeterministic choice and Kleene star, i.e. nondeterministic iteration, and we provide a metalanguage and a sound calculus for Kleene monads, the metalanguage of control and effects, which is the natural joint extension of Kleene algebra and the metalanguage of effects. This provides a framework for studying abstract program equality focussing on iteration and effects. These aspects are known to have decidable equational theories when studied in isolation. However, it is well known that decidability breaks easily; e.g. the Horn theory of continuous Kleene algebras fails to be recursively enumerable. Here, we prove several negative results for the metalanguage of control and effects; in particular, already the equational theory of the unrestricted metalanguage of control and effects over continuous Kleene monads fails to be recursively enumerable. We proceed to identify a fragment of this language which still contains both Kleene algebra and the metalanguage of effects and for which the natural axiomatisation is complete, and indeed the equational theory is decidable.

  6. Mantle Convection on Modern Supercomputers

    NASA Astrophysics Data System (ADS)

    Weismüller, J.; Gmeiner, B.; Huber, M.; John, L.; Mohr, M.; Rüde, U.; Wohlmuth, B.; Bunge, H. P.

    2015-12-01

    Mantle convection is the cause of plate tectonics, the formation of mountains and oceans, and the main driving mechanism behind earthquakes. The convection process is modeled by a system of partial differential equations describing the conservation of mass, momentum and energy. Characteristic of mantle flow is the vast disparity of length scales from global to microscopic, turning mantle convection simulations into a challenging application for high-performance computing. As system size and technical complexity of the simulations continue to increase, the design and implementation of simulation models for next-generation large-scale architectures can be handled successfully only in an interdisciplinary context. A new priority program by the German Research Foundation (DFG), named SPPEXA, addresses this issue and brings together computer scientists, mathematicians and application scientists around grand challenges in HPC. Here we report on the TERRA-NEO project, which is part of the high-visibility SPPEXA program and a joint effort of four research groups. TERRA-NEO develops algorithms for future HPC infrastructures, focusing on high computational efficiency and resilience in next-generation mantle convection models. We present software that can resolve the Earth's mantle with up to 10^12 grid points and scales efficiently to massively parallel hardware with more than 50,000 processors. We use our simulations to explore the dynamic regime of mantle convection and assess the impact of small-scale processes on global mantle flow.
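
    For concreteness, one common nondimensional (Boussinesq) statement of these conservation laws reads as follows; the TERRA-NEO model may differ in detail.

```latex
% Mass, momentum (Stokes flow) and energy conservation; Ra = Rayleigh number,
% eta = viscosity, H = internal heating, e_r = radial unit vector.
\nabla \cdot \mathbf{u} = 0, \qquad
-\nabla p + \nabla \cdot \left[ \eta \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} \right) \right]
  = \mathrm{Ra} \, T \, \hat{\mathbf{e}}_r, \qquad
\frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \nabla^{2} T + H
```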

  7. Shared Memory Parallelism for 3D Cartesian Discrete Ordinates Solver

    NASA Astrophysics Data System (ADS)

    Moustafa, Salli; Dutka-Malen, Ivan; Plagne, Laurent; Ponçot, Angélique; Ramet, Pierre

    2014-06-01

    This paper describes the design and the performance of DOMINO, a 3D Cartesian SN solver that implements two nested levels of parallelism (multicore + SIMD) on shared-memory computation nodes. DOMINO is written in C++, a multi-paradigm programming language that enables the use of powerful and generic parallel programming tools such as Intel TBB and Eigen. These two libraries allow us to combine multi-thread parallelism with vector operations in an efficient and yet portable way. As a result, DOMINO can exploit the full power of modern multi-core processors and is able to tackle very large simulations, that usually require large HPC clusters, using a single computing node. For example, DOMINO solves a 3D full-core PWR eigenvalue problem involving 26 energy groups, 288 angular directions (S16), 46 × 10^6 spatial cells and 1 × 10^12 DoFs within 11 hours on a single 32-core SMP node. This represents a sustained performance of 235 GFlops and 40.74% of the SMP node peak performance for the DOMINO sweep implementation. The very high Flops/Watt ratio of DOMINO makes it a very interesting building block for a future many-node nuclear simulation tool.

  8. Alignment-free genetic sequence comparisons: a review of recent approaches by word analysis

    PubMed Central

    Steele, Joe; Bastola, Dhundy

    2014-01-01

    Modern sequencing and genome assembly technologies have provided a wealth of data, which will soon require an analysis by comparison for discovery. Sequence alignment, a fundamental task in bioinformatics research, may be used but with some caveats. Seminal techniques and methods from dynamic programming are proving ineffective for this work owing to their inherent computational expense when processing large amounts of sequence data. These methods are prone to giving misleading information because of genetic recombination, genetic shuffling and other inherent biological events. New approaches from information theory, frequency analysis and data compression are available and provide powerful alternatives to dynamic programming. These new methods are often preferred, as their algorithms are simpler and are not affected by synteny-related problems. In this review, we provide a detailed discussion of computational tools, which stem from alignment-free methods based on statistical analysis from word frequencies. We provide several clear examples to demonstrate applications and the interpretations over several different areas of alignment-free analysis such as base–base correlations, feature frequency profiles, compositional vectors, an improved string composition and the D2 statistic metric. Additionally, we provide detailed discussion and an example of analysis by Lempel–Ziv techniques from data compression. PMID:23904502
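
    Word-frequency methods are simple enough to sketch end to end. The example below (ours, standard library only) builds normalized k-mer frequency vectors for two DNA strings and compares them with a cosine distance, a simple alignment-free dissimilarity in the spirit of the compositional-vector methods reviewed here.

```python
from collections import Counter
from itertools import product
import math

def kmer_vector(seq, k=3):
    """Normalized k-mer frequency vector over the fixed DNA alphabet."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    return [counts[''.join(w)] / total for w in product('ACGT', repeat=k)]

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

s1 = "ACGTACGTGGTACCTTAGCAGTACGT"
s2 = "ACGTACCTGGTACGTTAGCAGTACGA"
print(cosine_distance(kmer_vector(s1), kmer_vector(s2)))
```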

  9. Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale

    PubMed Central

    Huang, Muhuan; Wu, Di; Yu, Cody Hao; Fang, Zhenman; Interlandi, Matteo; Condie, Tyson; Cong, Jason

    2017-01-01

    With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft’s FPGA deployment in its Bing search engine and Intel’s $16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems—like Apache Spark and Hadoop—to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7× to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster. PMID:28317049

  10. Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale.

    PubMed

    Huang, Muhuan; Wu, Di; Yu, Cody Hao; Fang, Zhenman; Interlandi, Matteo; Condie, Tyson; Cong, Jason

    2016-10-01

    With the end of CPU core scaling due to dark silicon limitations, customized accelerators on FPGAs have gained increased attention in modern datacenters due to their lower power, high performance and energy efficiency. Evidenced by Microsoft's FPGA deployment in its Bing search engine and Intel's $16.7 billion acquisition of Altera, integrating FPGAs into datacenters is considered one of the most promising approaches to sustain future datacenter growth. However, it is quite challenging for existing big data computing systems, like Apache Spark and Hadoop, to access the performance and energy benefits of FPGA accelerators. In this paper we design and implement Blaze to provide programming and runtime support for enabling easy and efficient deployments of FPGA accelerators in datacenters. In particular, Blaze abstracts FPGA accelerators as a service (FaaS) and provides a set of clean programming APIs for big data processing applications to easily utilize those accelerators. Our Blaze runtime implements an FaaS framework to efficiently share FPGA accelerators among multiple heterogeneous threads on a single node, and extends Hadoop YARN with accelerator-centric scheduling to efficiently share them among multiple computing tasks in the cluster. Experimental results using four representative big data applications demonstrate that Blaze greatly reduces the programming efforts to access FPGA accelerators in systems like Apache Spark and YARN, and improves the system throughput by 1.7× to 3× (and energy efficiency by 1.5× to 2.7×) compared to a conventional CPU-only cluster.

  11. Battlefield awareness computers: the engine of battlefield digitization

    NASA Astrophysics Data System (ADS)

    Ho, Jackson; Chamseddine, Ahmad

    1997-06-01

    To modernize the army for the 21st century, the U.S. Army Digitization Office (ADO) initiated in 1995 the Force XXI Battle Command Brigade-and-Below (FBCB2) Applique program, which became a centerpiece in the U.S. Army's master plan to win future information wars. The Applique team led by TRW fielded a 'tactical Internet' for brigade-and-below command to demonstrate the advantages of 'shared situation awareness' and battlefield digitization in advanced war-fighting experiments (AWE) conducted in March 1997 at the Army's National Training Center in California. Computing Devices is designated the primary hardware developer for the militarized version of the battlefield awareness computers. The first generation of militarized battlefield awareness computer, designated the V3 computer, was an integration of off-the-shelf components developed to meet the aggressive delivery requirements of the Task Force XXI AWE. The design efficiency and cost effectiveness of the computer hardware were secondary in importance to the delivery deadlines imposed by the March 1997 AWE. However, declining defense budgets will impose cost constraints on the Force XXI production hardware that can only be met by rigorous value engineering to further improve design optimization for battlefield awareness without compromising the level of reliability the military has come to expect in modern military hardened vetronics. To answer the Army's needs for a more cost-effective computing solution, Computing Devices developed a second-generation 'combat ready' battlefield awareness computer, designated the V3+, which is designed specifically to meet the upcoming demands of Force XXI (FBCB2) and beyond. The primary design objective is to achieve a technologically superior design, value engineered to strike an optimal balance between reliability, life cycle cost, and procurement cost. Recognizing that the diverse digitization demands of Force XXI cannot be adequately met by any one computer hardware solution, Computing Devices is planning to develop a notebook-sized military computer designed for space-limited vehicle-mounted applications, as well as a high-performance portable workstation equipped with a 19-inch, full-color, ultra-high-resolution and high-brightness active matrix liquid crystal display (AMLCD) targeting command post and tactical operations center (TOC) applications. Together with the wearable computers Computing Devices developed at the Minneapolis facility for dismounted soldiers, Computing Devices will have a complete suite of interoperable battlefield awareness computers spanning the entire spectrum of battle digitization operating environments. Although this paper's primary focus is the second-generation 'combat ready' battlefield awareness computer, the V3+, it also briefly discusses the extension of the V3+ architecture to address the needs of embedded and command post applications.

  12. Ecological Modernization and the US Farm Bill: The Case of the Conservation Security Program

    ERIC Educational Resources Information Center

    Lenihan, Martin H.; Brasier, Kathryn J.

    2010-01-01

    This paper examines the debate surrounding the inception of the Conservation Security Program (CSP) under the 2002 US Farm Bill as a possible expression of ecological modernization by examining the discursive contributions made by official actors, social movement organizations, and producer organizations. Based on this analysis, the CSP embodies…

  13. Understanding the Impacts of the Medicare Modernization Act: Concerns of Congressional Staff

    ERIC Educational Resources Information Center

    Mueller, Keith J.; Coburn, Andrew F.; MacKinney, Clinton; McBride, Timothy D.; Slifkin, Rebecca T.; Wakefield, Mary K.

    2005-01-01

    Sweeping changes to the Medicare program embodied in the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA), including a new prescription drug benefit, changes in payment policies, and reform of the Medicare managed-care program, have major implications for rural health care. The most efficient mechanism for research to…

  14. New Graduate Programs in Modern Foreign Languages, Why They Are Needed.

    ERIC Educational Resources Information Center

    Turner, Daymond E., Jr.

    Additional doctoral programs are needed in modern foreign languages. Current production of graduate degrees appears scarcely adequate for replacing faculty who annually leave teaching because of death, illness, retirement, or change of vocation. The supply will hardly keep pace with the demand created by the establishment of new institutions of…

  15. PyNEST: A Convenient Interface to the NEST Simulator.

    PubMed

    Eppler, Jochen Martin; Helias, Moritz; Muller, Eilif; Diesmann, Markus; Gewaltig, Marc-Oliver

    2008-01-01

    The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a large range of architectures, from single-core laptops over multi-core desktop computers to supercomputers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used.
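
    As a flavor of the interface (not an example from the paper; the model and device names follow common NEST conventions and vary between releases), a minimal PyNEST session might look like this:

```python
# Hypothetical minimal PyNEST session: simulate one integrate-and-fire
# neuron driven by a noisy current and record its membrane potential.
# Model/device names ("iaf_psc_alpha", "noise_generator", "voltmeter")
# are typical of NEST releases but may differ by version.
import nest

nest.ResetKernel()                      # start from a clean simulation kernel

neuron = nest.Create("iaf_psc_alpha")   # one leaky integrate-and-fire neuron
noise = nest.Create("noise_generator",
                    params={"mean": 300.0, "std": 50.0})  # current in pA
voltmeter = nest.Create("voltmeter")

nest.Connect(noise, neuron)             # drive the neuron with the noise current
nest.Connect(voltmeter, neuron)         # record the membrane potential

nest.Simulate(1000.0)                   # simulate 1000 ms

events = nest.GetStatus(voltmeter, "events")[0]
print(events["times"][:5], events["V_m"][:5])
```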

  16. PyNEST: A Convenient Interface to the NEST Simulator

    PubMed Central

    Eppler, Jochen Martin; Helias, Moritz; Muller, Eilif; Diesmann, Markus; Gewaltig, Marc-Oliver

    2008-01-01

    The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a large range of architectures, from single-core laptops over multi-core desktop computers to supercomputers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used. PMID:19198667

  17. A High Performance VLSI Computer Architecture For Computer Graphics

    NASA Astrophysics Data System (ADS)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture consisting of multiple processors is presented in this paper to satisfy modern computer graphics demands, e.g., high resolution, realistic animation, and real-time display. All processors share a global memory that is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e., object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With current high-density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  18. Medical Signal-Conditioning and Data-Interface System

    NASA Technical Reports Server (NTRS)

    Braun, Jeffrey; Jacobus, Charles; Booth, Scott; Suarez, Michael; Smith, Derek; Hartnagle, Jeffrey; LePrell, Glenn

    2006-01-01

    A general-purpose, portable, wearable electronic signal-conditioning and data-interface system is being developed for medical applications. The system can acquire multiple physiological signals (e.g., electrocardiographic, electroencephalographic, and electromyographic signals) from sensors on the wearer's body, digitize those signals that are received in analog form, preprocess the resulting data, and transmit the data to one or more remote location(s) via a radio-communication link and/or the Internet. The system includes a computer running data-object-oriented software that can be programmed to configure the system to accept almost any analog or digital input signals from medical devices. The computing hardware and software implement a general-purpose data-routing-and-encapsulation architecture that supports tagging of input data and routing the data in a standardized way through the Internet and other modern packet-switching networks to one or more computer(s) for review by physicians. The architecture supports multiple-site buffering of data for redundancy and reliability, and supports both real-time and slower-than-real-time collection, routing, and viewing of signal data. Routing and viewing stations support insertion of automated analysis routines to aid in encoding, analysis, viewing, and diagnosis.
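
    The tagging-and-routing idea lends itself to a compact illustration. The sketch below is hypothetical (the field names, units, and routing destinations are invented, not the system's actual format): it encapsulates one digitized sample, together with routing tags, as a JSON packet of the kind a standardized packet-switching network could carry.

```python
# Illustrative sketch only: one way to tag and encapsulate a physiological
# sample for standardized routing. All field names here are hypothetical.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TaggedSample:
    channel: str          # e.g. "ECG-lead-II" (hypothetical tag)
    units: str            # e.g. "mV"
    timestamp: float      # acquisition time, seconds since the epoch
    value: float          # digitized sensor reading
    destinations: tuple   # routing tags consumed by relay/viewing stations

sample = TaggedSample("ECG-lead-II", "mV", time.time(), 1.24,
                      ("buffer://local", "https://clinic.example.org/review"))

packet = json.dumps(asdict(sample))   # encapsulate for transmission
print(packet)
```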

  19. Application of web-GIS approach for climate change study

    NASA Astrophysics Data System (ADS)

    Okladnikov, Igor; Gordov, Evgeny; Titov, Alexander; Bogomolov, Vasily; Martynova, Yuliya; Shulgina, Tamara

    2013-04-01

    Georeferenced datasets are currently actively used in numerous applications, including modeling, interpretation, and forecasting of climatic and ecosystem changes on various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their huge size, which may reach tens of terabytes for a single dataset, present-day studies of climate and environmental change require special software support. A dedicated web-GIS information-computational system for the analysis of georeferenced climatological and meteorological data has been created. It is based on OGC standards and involves many modern solutions, such as an object-oriented programming model, modular composition, and JavaScript libraries based on the GeoExt library, the ExtJS framework, and OpenLayers software. The main advantage of the system lies in the possibility to perform mathematical and statistical data analysis and graphical visualization of results with GIS functionality, and to prepare binary output files, with only a modern graphical web browser installed on a common desktop computer connected to the Internet. Several geophysical datasets, represented by two editions of the NCEP/NCAR Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, the ECMWF ERA-40 Reanalysis, the ECMWF ERA Interim Reanalysis, the MRI/JMA APHRODITE's Water Resources Project Reanalysis, the DWD Global Precipitation Climatology Centre's data, the GMAO Modern Era-Retrospective Analysis for Research and Applications, meteorological observational data for the territory of the former USSR for the 20th century, results of modeling by global and regional climatological models, and others, are available for processing by the system, and this list is growing. Functionality to run the WRF and "Planet Simulator" models has also been implemented in the system. Due to many preset parameters and the limited time and spatial ranges set in the system, these models have low computational power requirements and can be used in educational workflows for a better understanding of basic climatological and meteorological processes. The web-GIS information-computational system for geophysical data analysis provides specialists involved in multidisciplinary research projects with reliable and practical instruments for the complex analysis of climate and ecosystem changes on global and regional scales. Using it, even an unskilled user without specific knowledge can perform computational processing and visualization of large meteorological, climatological, and satellite monitoring datasets through a unified web interface in a common graphical web browser. This work is partially supported by the Ministry of Education and Science of the Russian Federation (contract #8345), SB RAS project VIII.80.2.1, RFBR grant #11-05-01190a, and integrated project SB RAS #131.

  20. High performance flight computer developed for deep space applications

    NASA Technical Reports Server (NTRS)

    Bunker, Robert L.

    1993-01-01

    The development of an advanced space flight computer for real-time embedded deep space applications, which embodies the lessons learned on Galileo and modern computer technology, is described. The requirements are listed, and the design implementation that meets those requirements is described. The development of SPACE-16 (Spaceborne Advanced Computing Engine), where 16 designates the databus width, was initiated to support the MM2 (Mariner Mark 2) project. The computer is based on a radiation-hardened emulation of a modern 32-bit microprocessor and its family of support devices, including a high-performance floating point accelerator. Additional custom devices, which include a coprocessor to improve input/output capabilities, a memory interface chip, and an additional support chip that provides management of all fault tolerant features, are described. Detailed supporting analyses and rationale that justify specific design and architectural decisions are provided. The six chip types were designed and fabricated. Testing and evaluation of a brassboard was initiated.

  1. The Jupyter/IPython architecture: a unified view of computational research, from interactive exploration to communication and publication.

    NASA Astrophysics Data System (ADS)

    Ragan-Kelley, M.; Perez, F.; Granger, B.; Kluyver, T.; Ivanov, P.; Frederic, J.; Bussonnier, M.

    2014-12-01

    IPython has provided terminal-based tools for interactive computing in Python since 2001. The notebook document format and multi-process architecture introduced in 2011 have expanded the applicable scope of IPython into teaching, presenting, and sharing computational work, in addition to interactive exploration. The new architecture also allows users to work in any language, with implementations in Python, R, Julia, Haskell, and several other languages. The language-agnostic parts of IPython have been renamed to Jupyter, to better capture the notion that a cross-language design can encapsulate commonalities present in computational research regardless of the programming language being used. This architecture offers components like the web-based Notebook interface, which supports rich documents that combine code and computational results with text narratives, mathematics, images, video, and any media that a modern browser can display. This interface can be used not only in research, but also for publication and education, as notebooks can be converted to a variety of output formats, including HTML and PDF. Recent developments in the Jupyter project include a multi-user environment for hosting notebooks for a class or research group, a live collaboration notebook via Google Docs, and better support for languages other than Python.
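
    As one concrete example of the conversion pathway mentioned above, a notebook can be exported to HTML programmatically with Jupyter's nbconvert component; a minimal sketch follows (the notebook file name is a placeholder):

```python
# Minimal programmatic notebook conversion with nbconvert.
# "analysis.ipynb" is a placeholder file name, not from the paper.
from nbconvert import HTMLExporter

exporter = HTMLExporter()
body, resources = exporter.from_filename("analysis.ipynb")  # render notebook

with open("analysis.html", "w", encoding="utf-8") as f:
    f.write(body)                                           # standalone HTML
```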

  2. Computer-based, Jeopardy™-like game in general chemistry for engineering majors

    NASA Astrophysics Data System (ADS)

    Ling, S. S.; Saffre, F.; Kadadha, M.; Gater, D. L.; Isakovic, A. F.

    2013-03-01

    We report on the design of a Jeopardy™-like computer game for enhancing the learning of general chemistry by engineering majors. While we examine several parameters of student achievement and attitude, our primary concern is addressing the motivation of students, which tends to be low in traditionally run chemistry lectures. The effect of the game playing is tested by comparing a paper-based game quiz, which constitutes the control group, with a computer-based game quiz, constituting the treatment group. The computer-based game quizzes are Java™-based applications that students run once a week in the second part of the last lecture of the week. The overall effectiveness of the semester-long program is measured through pretest-posttest conceptual testing of general chemistry. The objective of this research is to determine to what extent this "gamification" of the course delivery and course evaluation processes may be beneficial to undergraduates' learning of science in general, and chemistry in particular. We present data addressing gender-specific differences in performance, as well as background (pre-college) level of general science and chemistry preparation. We outline a plan to extend this approach to general physics courses and to modern-science-driven electives, and we offer live, in-lecture examples of our computer gaming experience. We acknowledge support from Khalifa University, Abu Dhabi.

  3. Regional Land Use Mapping: the Phoenix Pilot Project

    NASA Technical Reports Server (NTRS)

    Anderson, J. R.; Place, J. L.

    1971-01-01

    The Phoenix Pilot Program has been designed to make effective use of past experience in making land use maps and collecting land use information. Conclusions reached from the project are: (1) Land use maps and accompanying statistical information of reasonable accuracy and quality can be compiled at a scale of 1:250,000 from orbital imagery. (2) Orbital imagery used in conjunction with other sources of information when available can significantly enhance the collection and analysis of land use information. (3) Orbital imagery combined with modern computer technology will help resolve the problem of obtaining land use data quickly and on a regular basis, which will greatly enhance the usefulness of such data in regional planning, land management, and other applied programs. (4) Agreement on a framework or scheme of land use classification for use with orbital imagery will be necessary for effective use of land use data.

  4. InSAR Scientific Computing Environment - The Home Stretch

    NASA Astrophysics Data System (ADS)

    Rosen, P. A.; Gurrola, E. M.; Sacco, G.; Zebker, H. A.

    2011-12-01

    The Interferometric Synthetic Aperture Radar (InSAR) Scientific Computing Environment (ISCE) is a software development effort in its third and final year within the NASA Advanced Information Systems and Technology program. The ISCE is a new computing environment for geodetic image processing for InSAR sensors, enabling scientists to reduce measurements directly from radar satellites to new geophysical products with relative ease. The environment can serve as the core of a centralized processing center to bring Level-0 raw radar data up to Level-3 data products, but is adaptable to alternative processing approaches for science users interested in new and different ways to exploit mission data. Upcoming international SAR missions will deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystem. The InSAR Scientific Computing Environment has the functionality to become a key element in processing data from NASA's proposed DESDynI mission into higher level data products, supporting a new class of analyses that take advantage of the long time and large spatial scales of these new data. At the core of ISCE is a new set of efficient and accurate InSAR algorithms. These algorithms are placed into an object-oriented, flexible, extensible software package that is informed by modern programming methods, including rigorous componentization of processing codes, and abstraction and generalization of data models. The environment is designed to easily allow user contributions, enabling an open source community to extend the framework into the indefinite future. ISCE supports data from nearly all of the available satellite platforms, including ERS, EnviSAT, Radarsat-1, Radarsat-2, ALOS, TerraSAR-X, and Cosmo-SkyMed. The code applies a number of parallelization techniques and sensible approximations for speed. It is configured to work on modern Linux-based computers with GCC compilers and Python. ISCE is now a complete, functional package, under configuration management, and with extensive documentation and tested use cases appropriate to geodetic imaging applications. The software has been tested with canonical simulated radar data ("point targets") as well as with a variety of existing satellite data, cross-compared with other software packages. Its extensibility has already been proven by the straightforward addition of polarimetric processing and calibration, and derived filtering and estimation routines associated with polarimetry that supplement the original InSAR geodetic functionality. As of October 2011, the software is available for non-commercial use through UNAVCO's WinSAR consortium.

  5. SLAM, a Mathematica interface for SUSY spectrum generators

    NASA Astrophysics Data System (ADS)

    Marquard, Peter; Zerf, Nikolai

    2014-03-01

    We present and publish a Mathematica package which can be used to automatically obtain any numerical MSSM input parameter from SUSY spectrum generators that follow the SLHA standard, like SPheno, SOFTSUSY, SuSeFLAV or Suspect. The package enables a very comfortable way of performing numerical evaluations within the MSSM using Mathematica. It implements easy-to-use predefined high-scale and low-scale scenarios like mSUGRA or mhmax, and if needed enables the user to directly specify the input required by the spectrum generators. In addition, it supports automatic saving and loading of SUSY spectra to and from an SQL database, avoiding the rerun of a spectrum generator for a known spectrum.
    Catalogue identifier: AERX_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERX_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 4387
    No. of bytes in distributed program, including test data, etc.: 37748
    Distribution format: tar.gz
    Programming language: Mathematica
    Computer: Any computer where Mathematica version 6 or higher is running, providing bash and sed
    Operating system: Linux
    Classification: 11.1
    External routines: A SUSY spectrum generator such as SPheno, SOFTSUSY, SuSeFLAV or SUSPECT
    Nature of problem: Interfacing published spectrum generators for automated creation, saving and loading of SUSY particle spectra.
    Solution method: SLAM automatically writes/reads SLHA spectrum generator input/output and is able to save/load generated data in/from a database.
    Restrictions: No general restrictions; specific restrictions are given in the manuscript.
    Running time: A single spectrum calculation takes much less than one second on a modern PC.

  6. Diffraction scattering computed tomography: a window into the structures of complex nanomaterials

    PubMed Central

    Birkbak, M. E.; Leemreize, H.; Frølich, S.; Stock, S. R.

    2015-01-01

    Modern functional nanomaterials and devices are increasingly composed of multiple phases arranged in three dimensions over several length scales. Therefore there is a pressing demand for improved methods for structural characterization of such complex materials. An excellent emerging technique that addresses this problem is diffraction/scattering computed tomography (DSCT). DSCT combines the merits of diffraction and/or small angle scattering with computed tomography to allow imaging the interior of materials based on the diffraction or small angle scattering signals. This allows, e.g., one to distinguish the distributions of polymorphs in complex mixtures. Here we review this technique and give examples of how it can shed light on modern nanoscale materials. PMID:26505175
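
    The tomographic half of DSCT can be sketched with standard tools. In the toy example below, the scalar signal projected at each angle plays the role of one phase's diffraction intensity (e.g., an integrated Bragg-peak intensity), and ordinary filtered back-projection recovers that phase's spatial distribution. This is synthetic, illustrative code, not the authors' pipeline; scikit-image's radon/iradon stand in for a DSCT reconstruction.

```python
# Hedged illustration: per-phase tomographic reconstruction from a scalar
# "diffraction signal" sinogram, using plain filtered back-projection.
import numpy as np
from skimage.transform import radon, iradon

phase_map = np.zeros((128, 128))
phase_map[40:70, 50:90] = 1.0          # a block of one hypothetical polymorph

angles = np.linspace(0.0, 180.0, 120, endpoint=False)
sinogram = radon(phase_map, theta=angles)        # forward projections
reconstruction = iradon(sinogram, theta=angles)  # filtered back-projection

print(abs(reconstruction - phase_map).mean())    # rough reconstruction error
```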

  7. SpecPad: device-independent NMR data visualization and processing based on the novel DART programming language and Html5 Web technology.

    PubMed

    Guigas, Bruno

    2017-09-01

    SpecPad is a new device-independent software program for the visualization and processing of one-dimensional and two-dimensional nuclear magnetic resonance (NMR) time domain (FID) and frequency domain (spectrum) data. It is the result of a project to investigate whether the novel programming language DART, in combination with Html5 Web technology, forms a suitable base to write an NMR data evaluation software which runs on modern computing devices such as Android, iOS, and Windows tablets as well as on Windows, Linux, and Mac OS X desktop PCs and notebooks. Another topic of interest is whether this technique also effectively supports the required sophisticated graphical and computational algorithms. SpecPad is device-independent because DART's compiled executable code is JavaScript and can, therefore, be run by the browsers of PCs and tablets. Because of Html5 browser cache technology, SpecPad may be operated off-line. Network access is only required during data import or export, e.g. via a Cloud service, or for software updates. A professional and easy to use graphical user interface consistent across all hardware platforms supports touch screen features on mobile devices for zooming and panning and for NMR-related interactive operations such as phasing, integration, peak picking, or atom assignment. Copyright © 2017 John Wiley & Sons, Ltd.

  8. A Specification for a Godunov-type Eulerian 2-D Hydrocode, Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nystrom, William D; Robey, Jonathan M

    2012-05-01

    The purpose of this code specification is to describe an algorithm for solving the Euler equations of hydrodynamics in a 2D rectangular region in sufficient detail to allow a software developer to produce an implementation on their target platform using their programming language of choice, without requiring detailed knowledge and experience in the field of computational fluid dynamics. It should be possible for a software developer who is proficient in the programming language of choice and is knowledgeable of the target hardware to produce an efficient implementation of this specification if they also possess a thorough working knowledge of parallel programming and have some experience in scientific programming using fields and meshes. On modern architectures, it will be important to focus on issues related to the exploitation of the fine grain parallelism and data locality present in this algorithm. This specification aims to make that task easier by presenting the essential details of the algorithm in a systematic and language neutral manner while also avoiding the inclusion of implementation details that would likely be specific to a particular type of programming paradigm or platform architecture.
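
    Although the specification itself targets the 2D Euler equations, the flavor of a Godunov-type update can be conveyed with a much smaller problem. The sketch below is not part of the specification: it applies the exact Godunov flux to the 1D Burgers equation on a uniform mesh (grid size, time step, and initial data are arbitrary illustrative choices).

```python
# Flavor-only sketch of a Godunov-type conservative update, reduced to
# 1-D Burgers' equation with flux f(u) = u^2 / 2.
import numpy as np

def godunov_flux(ul, ur):
    """Exact Godunov flux for Burgers' equation at one cell interface."""
    if ul > ur:                               # shock: pick the upwind side
        s = 0.5 * (ul + ur)                   # shock speed
        return 0.5 * ul * ul if s > 0 else 0.5 * ur * ur
    # rarefaction: minimum of f over [ul, ur]; zero if the fan straddles 0
    if ul > 0:
        return 0.5 * ul * ul
    if ur < 0:
        return 0.5 * ur * ur
    return 0.0

nx, dx, dt = 200, 1.0 / 200, 0.002            # CFL number is 0.4 here
u = np.where(np.linspace(0, 1, nx) < 0.5, 1.0, 0.0)   # Riemann initial data

for _ in range(100):
    f = np.array([godunov_flux(u[i], u[i + 1]) for i in range(nx - 1)])
    u[1:-1] -= dt / dx * (f[1:] - f[:-1])     # conservative finite-volume step
```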

  9. Assessing Core Competencies

    NASA Astrophysics Data System (ADS)

    Narayanan, M.

    2004-12-01

    Catherine Palomba and Trudy Banta offer the following definition of assessment, adapted from one provided by Marchese in 1987: Assessment is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development (Palomba and Banta 1999). It is widely recognized that sophisticated computing technologies are becoming a key element in today's classroom instructional techniques. Regardless, the professor must be held responsible for creating an instructional environment in which the technology actually supplements the learning outcomes of the students. Almost all academic disciplines have found a niche for computer-based instruction in their respective professional domains. In many cases, it is viewed as an essential and integral part of the educational process. Educational institutions are committing substantial resources to the establishment of dedicated technology-based laboratories so that they will be able to accommodate and fulfill students' desire to master certain of these specific skills. This type of technology-based instruction may raise some fundamental questions about the core competencies of the student learner. Some of the most important questions are: 1. Is the utilization of these fast, high-powered computers and user-friendly software programs creating a totally non-challenging instructional environment for the student learner? 2. Can technology itself all too easily overshadow the learning outcomes intended? 3. Are the educational institutions simply training students how to use technology rather than educating them in the appropriate field? 4. Are we still teaching content-driven courses and analysis-oriented subject matter? 5. Are these sophisticated modern-era technologies contributing to a decline in the critical thinking capabilities of 21st-century technology-savvy students? The author tries to focus on technology as a tool and not on the technology itself. He further argues that students must demonstrate that they have the ability to think critically before they attempt to use technology in a chosen application-specific environment. The author further argues that training-based instruction has a very narrow focus that puts modern technology at the forefront of the learning enterprise system. The author promotes education-oriented strategies to provide the students with a broader perspective on the subject matter. The author is also of the opinion that students entering the workplace should clearly understand the context in which modern technologies are influencing the productive outcomes of the industrialized world. References: Marchese, T. J. (1987). Third Down, Ten Years to Go. AAHE Bulletin, Vol. 40, pages 3-8. Marchese, T. J. (1994). Assessment, Quality and Undergraduate Improvement. Assessment Update, Vol. 6, No. 3, pages 1-14. Montagu, A. S. (2001). High-technology instruction: A framework for teaching computer-based technologies. Journal on Excellence in College Teaching, 12 (1), 109-128. Palomba, Catherine A. and Banta, Trudy W. (1999). Assessment Essentials: Planning, Implementing and Improving Assessment in Higher Education. San Francisco: Jossey Bass Publishers.

  10. Computational Skills for Biology Students

    ERIC Educational Resources Information Center

    Gross, Louis J.

    2008-01-01

    This interview with Distinguished Science Award recipient Louis J. Gross highlights essential computational skills for modern biology, including: (1) teaching concepts listed in the Math & Bio 2010 report; (2) illustrating to students that jobs today require quantitative skills; and (3) resources and materials that focus on computational skills.

  11. 76 FR 12308 - Modernizing the FCC Form 477 Data Program; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-07

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Parts 1, 20, and 43 [WCB: WC Docket Nos. 07-38, 09-190, 10-132, 11-10; FCC 11-14] Modernizing the FCC Form 477 Data Program; Correction AGENCY: Federal..., Deputy Manager. [FR Doc. 2011-5095 Filed 3-4-11; 8:45 am] BILLING CODE 6712-01-P ...

  12. Federal incentives for industrial modernization: Historical review and future opportunities

    NASA Technical Reports Server (NTRS)

    Coleman, Sandra C.; Batson, Robert G.

    1987-01-01

    Concerns over the aging of the U.S. aerospace industrial base led DOD to introduce first its Technology Modernization (Tech Mod) Program, and more recently the Industrial Modernization Incentive Program (IMIP). These incentives include productivity shared savings rewards, contractor investment protection to allow for amortization of plant and equipment, and subcontractor/vendor participation. The purpose here is to review DOD IMIP and to evaluate whether a similar program is feasible for NASA and other non-DOD agencies. The IMIP methodology is of interest to industrial engineers because it provides a structured, disciplined approach to identifying productivity improvement opportunities and documenting their expected benefit. However, it is shown that more research on predicting and validating cost avoidance is needed.

  13. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    PubMed Central

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
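
    The Monte Carlo core of such a docking engine can be illustrated schematically. The sketch below is a generic Metropolis loop, not GeauxDock's actual kernel: the toy pose representation and the quadratic stand-in for the scoring function are invented for illustration.

```python
# Generic Metropolis Monte Carlo search of the kind docking engines build on.
# Real engines use rich pose representations (rotations, torsions) and
# physics/knowledge-based scoring; both are simplified to toys here.
import math
import random

def score(pose):
    # Stand-in scoring function; lower is better.
    return sum(x * x for x in pose)

def perturb(pose, step=0.1):
    # Random local move in pose space.
    return [x + random.uniform(-step, step) for x in pose]

pose = [1.0, -0.5, 0.8]        # toy "pose", e.g. a translation vector
current = score(pose)
kT = 0.5                        # effective temperature of the search

for _ in range(10_000):
    trial = perturb(pose)
    s = score(trial)
    # Metropolis criterion: accept improvements always, worse moves sometimes.
    if s < current or random.random() < math.exp(-(s - current) / kT):
        pose, current = trial, s

print(pose, current)
```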

  14. Large Eddy Simulation of Engineering Flows: A Bill Reynolds Legacy.

    NASA Astrophysics Data System (ADS)

    Moin, Parviz

    2004-11-01

    The term large eddy simulation, LES, was coined by Bill Reynolds thirty years ago, when he and his colleagues pioneered the introduction of LES in the engineering community. Bill's legacy in LES features his insistence on having a proper mathematical definition of the large scale field independent of the numerical method used, and his vision for using numerical simulation output as data for research in turbulence physics and modeling, just as one would think of using experimental data. However, as an engineer, Bill was predominantly interested in the predictive capability of computational fluid dynamics, and in particular LES. In this talk I will present the state of the art in large eddy simulation of complex engineering flows. Most of this technology has been developed in the Department of Energy's ASCI Program at Stanford, which was led by Bill in the last years of his distinguished career. At the core of this technology is a fully implicit, non-dissipative LES code which uses unstructured grids with arbitrary elements. A hybrid Eulerian/Lagrangian approach is used for multi-phase flows, and chemical reactions are introduced through dynamic equations for the mixture fraction and a reaction progress variable in conjunction with flamelet tables. The predictive capability of LES is demonstrated in several validation studies in flows with complex physics and complex geometry, including flow in the combustor of a modern aircraft engine. LES in such a complex application is only possible through efficient utilization of modern parallel supercomputers, which was recognized and emphasized by Bill from the beginning. The presentation will include a brief mention of computer science efforts for efficient implementation of LES.

  15. Parallel algorithm for solving Kepler’s equation on Graphics Processing Units: Application to analysis of Doppler exoplanet searches

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.

    2009-05-01

    We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ^2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating n_sys > 1024 model planetary systems, each containing n_pl = 4 planets and assuming n_obs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
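
    The kernel at the heart of this work is easy to state. The following CPU sketch, with NumPy standing in for the CUDA implementation, solves Kepler's equation M = E - e*sin(E) for a vector of mean anomalies by Newton iteration (the tolerance and starting guess are conventional choices, not taken from the paper):

```python
# Vectorized Newton iteration for Kepler's equation, M = E - e*sin(E).
import numpy as np

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Return eccentric anomaly E for mean anomaly array M, eccentricity e."""
    E = M + e * np.sin(M)                      # standard starting guess
    for _ in range(max_iter):
        f = E - e * np.sin(E) - M              # residual of Kepler's equation
        E -= f / (1.0 - e * np.cos(E))         # Newton step
        if np.max(np.abs(f)) < tol:
            break
    return E

M = np.linspace(0.0, 2.0 * np.pi, 1_000_000)   # one mean anomaly per model
E = solve_kepler(M, e=0.3)
print(np.max(np.abs(E - 0.3 * np.sin(E) - M))) # residual near machine epsilon
```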

  16. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    PubMed

    Fang, Ye; Ding, Yun; Feinstein, Wei P; Koppelman, David M; Moreno, Juana; Jarrell, Mark; Ramanujam, J; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.

  17. Improvements in the efficiency of turboexpanders in cryogenic applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agahi, R.R.; Lin, M.C.; Ershaghi, B.

    1996-12-31

    Process designers have utilized turboexpanders in cryogenic processes because of their higher thermal efficiencies when compared with conventional refrigeration cycles. Process design and equipment performance have improved substantially through the utilization of modern technologies. Turboexpander manufacturers have also adopted computational fluid dynamics software, computer numerical control technology, and holography techniques to further improve an already impressive turboexpander efficiency performance. In this paper, the authors explain the design process of the turboexpander utilizing modern technology. Two cases of turboexpanders processing helium (4.35 K) and hydrogen (56 K) will be presented.

  18. CFD Simulations in Support of Shuttle Orbiter Contingency Abort Aerodynamic Database Enhancement

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Periklis E.; Prabhu, Dinesh; Wright, Michael; Davies, Carol; McDaniel, Ryan; Venkatapathy, E.; Wercinski, Paul; Gomez, R. J.

    2001-01-01

    Modern Computational Fluid Dynamics (CFD) techniques were used to compute aerodynamic forces and moments of the Space Shuttle Orbiter in specific portions of contingency abort trajectory space. The trajectory space covers a Mach number range of 3.5-15, an angle-of-attack range of 20°-60°, an altitude range of 100-190 kft, and several different settings of the control surfaces (elevons, body flap, and speed brake). Presented here are details of the methodology and comparisons of computed aerodynamic coefficients against the values in the current Orbiter Operational Aerodynamic Data Book (OADB). While approximately 40 cases have been computed, only a sampling of the results is provided here. The computed results, in general, are in good agreement with the OADB data (i.e., within the uncertainty bands) for almost all the cases. However, in a limited number of high angle-of-attack cases (at Mach 15), there are significant differences between the computed results, especially the vehicle pitching moment, and the OADB data. A preliminary analysis of the data from the CFD simulations at Mach 15 shows that these differences can be attributed to real-gas/Mach number effects. The aerodynamic coefficients and detailed surface pressure distributions of the present simulations are being used by the Shuttle Program in the evaluation of the capabilities of the Orbiter in contingency abort scenarios.

  19. Would Dissociative Recombination of DNA+ be a Possible Pathway of DNA Damage?

    NASA Astrophysics Data System (ADS)

    Kwon, H. C.; Chen, Z. P.; Strom, R. A.; Andrianarijaona, V. M.

    2015-05-01

    It is known that dissociative recombination (DR) is one of the most efficient processes by which molecular cations are destroyed into neutral particles. During the past few years, the focus of DR has expanded from small inorganic molecules to macromolecular cations. We are probing the possibility of the DR of DNA+ after ionization of DNA, for example, due to ionizing radiation. We are therefore investigating the existence of autoionization states within the nucleotide bases (guanine, adenine, cytosine, and thymine). Our results from computational analysis using the modern electronic structure program ORCA will be presented. The authors wish to give special thanks to the Pacific Union College Student Senate for their financial support.

  20. Development of expert systems for analyzing electronic documents

    NASA Astrophysics Data System (ADS)

    Abeer Yassin, Al-Azzawi; Shidlovskiy, S.; Jamal, A. A.

    2018-05-01

    The paper analyzes a database management system (DBMS). Expert systems, databases, and database technology have become an essential component of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment, consisting of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans, and Alfresco, together with system components previously developed at Tomsk State University at the Faculty of Innovative Technology.

  1. Eigenspace techniques for active flutter suppression

    NASA Technical Reports Server (NTRS)

    Garrard, W. L.

    1982-01-01

    Mathematical models to be used in the control system design were developed. A computer program, which takes aerodynamic and structural data for the ARW-2 aircraft and converts these data into state space models suitable for use in modern control synthesis procedures, was developed. Reduced order models of inboard and outboard control surface actuator dynamics and a second order vertical wind gust model were developed. An analysis of the rigid body motion of the ARW-2 was conducted. The deletion of the aerodynamic lag states in the rigid body modes resulted in more accurate values for the eigenvalues associated with the plunge and pitch modes than were obtainable if the lag states were retained.

  2. F100 Multivariable Control Synthesis Program. Computer Implementation of the F100 Multivariable Control Algorithm

    NASA Technical Reports Server (NTRS)

    Soeder, J. F.

    1983-01-01

    As turbofan engines become more complex, the development of controls necessitates the use of multivariable control techniques. A control developed for the F100-PW-100(3) turbofan engine by using linear quadratic regulator theory and other modern multivariable control synthesis techniques is described. The assembly language implementation of this control on an SEL 810B minicomputer is described. This implementation was then evaluated by using a real-time hybrid simulation of the engine. The control software was modified to run with a real engine. These modifications, in the form of sensor and actuator failure checks and control executive sequencing, are discussed. Finally, recommendations for control software implementations are presented.
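
    For readers unfamiliar with the synthesis step named above, the sketch below computes a linear quadratic regulator gain for a hypothetical two-state, one-input linear model; the matrices are illustrative placeholders, not F100 engine data.

```python
# LQR synthesis sketch using SciPy's continuous-time algebraic Riccati solver.
# A, B, Q, R below are invented for illustration.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])       # illustrative state matrix
B = np.array([[0.0],
              [1.0]])             # illustrative input matrix
Q = np.eye(2)                     # state weighting
R = np.array([[1.0]])             # control weighting

P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - PBR^{-1}B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain, u = -K x

print(K)
print(np.linalg.eigvals(A - B @ K))    # closed-loop poles (left half-plane)
```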

  3. A browser-based event display for the CMS experiment at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hategan, M.; McCauley, T.; Nguyen, P.

    2012-01-01

    The line between native and web applications is becoming increasingly blurred as modern web browsers are becoming powerful platforms on which applications can be run. Such applications are trivial to install and are readily extensible and easy to use. In an educational setting, web applications permit a way to deploy tools in a highly-restrictive computing environment. The I2U2 collaboration has developed a browser-based event display for viewing events in data collected and released to the public by the CMS experiment at the LHC. The application itself reads a JSON event format and uses the JavaScript 3D rendering engine pre3d. The only requirement is a modern browser using HTML5 canvas. The event display has been used by thousands of high school students in the context of programs organized by I2U2, QuarkNet, and IPPOG. This browser-based approach to the display of events can have broader usage and impact for experts and public alike.

  4. Numerical analysis of the Magnus moment on a spin-stabilized projectile

    NASA Astrophysics Data System (ADS)

    Cremins, Michael; Rodebaugh, Gregory; Verhulst, Claire; Benson, Michael; van Poppel, Bret

    2016-11-01

    The Magnus moment is a result of an uneven pressure distribution that occurs when an object rotates in a crossflow. Unlike the Magnus force, which is often small for spin-stabilized projectiles, the Magnus moment can have a strong detrimental effect on flight stability. According to one source, most transonic and subsonic flight instabilities are caused by the Magnus moment [Modern Exterior Ballistics, McCoy], and yet simulations often fail to accurately predict the Magnus moment in the subsonic regime. In this study, we present hybrid Reynolds Averaged Navier Stokes (RANS) and Large Eddy Simulation (LES) predictions of the Magnus moment for a spin-stabilized projectile. Velocity, pressure, and Magnus moment predictions are presented for multiple Reynolds numbers and spin rates. We also consider the effect of a sting mount, which is commonly used when conducting flow measurements in a wind tunnel or water channel. Finally, we present the initial designs for a novel Magnetic Resonance Velocimetry (MRV) experiment to measure three-dimensional flow around a spinning projectile. This work was supported by the Department of Defense High Performance Computing Modernization Program (DoD HPCMP).

  5. Model-based phase-shifting interferometer

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian

    2015-10-01

    A model-based phase-shifting interferometer (MPI) is developed, in which a novel calculation technique is proposed in place of the traditional complicated system structure, to achieve versatile, high-precision, quantitative surface tests. In the MPI, a partial null lens (PNL) is employed to implement the non-null test. With a set of alternative PNLs, similar to the transmission spheres in ZYGO interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling techniques, a reverse iterative optimizing construction (ROR) method is employed for the retrace error correction of the non-null test, as well as for figure error reconstruction. A self-compiled ray-tracing program is set up for accurate system modeling and reverse ray tracing. The surface figure error can then be easily extracted from the wavefront data in the form of Zernike polynomials by the ROR method. Experiments on spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of a ZYGO interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI has large potential in modern optical shop testing.
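
    The final step mentioned above, expressing a wavefront map as Zernike coefficients, can be sketched as a simple least-squares fit. The example below is illustrative only: it uses synthetic data and a handful of low-order Cartesian Zernike terms, and does not reproduce the ROR retrace-error correction itself.

```python
# Least-squares fit of a (synthetic) wavefront to low-order Zernike terms.
import numpy as np

y, x = np.mgrid[-1:1:64j, -1:1:64j]
mask = x**2 + y**2 <= 1.0                      # unit circular pupil

basis = np.stack([np.ones_like(x),             # piston
                  x, y,                        # tilts
                  2*(x**2 + y**2) - 1,         # defocus
                  x**2 - y**2, 2*x*y])         # primary astigmatism
A = basis[:, mask].T                           # design matrix over the pupil

wavefront = 0.2*x + 0.05*(2*(x**2 + y**2) - 1) # synthetic "measurement"
coeffs, *_ = np.linalg.lstsq(A, wavefront[mask], rcond=None)
print(np.round(coeffs, 3))                     # recovers tilt 0.2, defocus 0.05
```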

  6. Investigation of near-surface chemical, physical and mechanical properties of silicon carbide crystals and fibers modified by ion implantation

    NASA Astrophysics Data System (ADS)

    Spitznagel, J. A.; Wood, Susan

    1988-08-01

    The Software Engineering Institute (SEI) is a federally funded research and development center sponsored by the Department of Defense (DOD). It was chartered by the Undersecretary of Defense for Research and Engineering on June 15, 1984. The SEI was established and is operated by Carnegie Mellon University (CMU) under contract F19628-C-0003, which was competitively awarded on December 28, 1984, by the Air Force Electronic Systems Division. The mission of the SEI is to provide the means to bring the ablest minds and the most effective technology to bear on the rapid improvement of the quality of operational software in mission-critical computer systems; to accelerate the reduction to practice of modern software engineering techniques and methods; to promulgate the use of modern techniques and methods throughout the mission-critical systems community; and to establish standards of excellence for the practice of software engineering. This report provides a summary of the programs and projects, staff, facilities, and service accomplishments of the Software Engineering Institute during 1987.

  7. Modernization and new technologies: Coping with the information explosion

    NASA Technical Reports Server (NTRS)

    Blados, Walter R.; Cotter, Gladys A.

    1993-01-01

    Information has become a valuable and strategic resource in all societies and economies. Scientific and technical information is especially important in developing and maintaining a strong national science and technology base. The expanding use of information technology, the growth of interdisciplinary research, and an increase in international collaboration are changing the characteristics of information. This modernization effort applies new technology to current processes to provide near-term benefits to the user. At the same time, we are developing a long-term modernization strategy designed to transition the program to a multimedia, global 'library without walls'. Notwithstanding this modernization program, it is recognized that no one information center can hope to collect all the relevant data. We see information and information systems changing and becoming more international in scope. We are finding that many nations are expending resources on national systems that duplicate each other. While this duplication exists, many useful sources of aerospace information are not being collected. This paper reviews the NASA modernization program and raises for consideration new possibilities for unification of the various aerospace database efforts toward a cooperative international aerospace database initiative, one that can optimize the cost/benefit equation for all participants.

  8. A Framework for Understanding Physics Students' Computational Modeling Practices

    NASA Astrophysics Data System (ADS)

    Lunk, Brandon Robert

    With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content knowledge, and physics knowledge in particular, can influence students' programming practices. In an effort to better understand this issue, I have developed a framework for modeling these practices based on a resource stance towards student knowledge. A resource framework models knowledge as the activation of vast networks of elements called "resources." Much like neurons in the brain, resources that become active can trigger cascading events of activation throughout the broader network. This model emphasizes the connectivity between knowledge elements and provides a description of students' knowledge base. Together with resources, the concepts of "epistemic games" and "frames" provide a means for addressing the interaction between content knowledge and practices. Although this framework has generally been limited to describing conceptual and mathematical understanding, it also provides a means for addressing students' programming practices. In this dissertation, I demonstrate this facet of a resource framework as well as fill in an important missing piece: a set of epistemic games that can describe students' computational modeling strategies. The development of this theoretical framework emerged from the analysis of video data of students generating computational models during the laboratory component of a Matter & Interactions: Modern Mechanics course. Student participants across two semesters were recorded as they worked in groups to fix pre-written computational models that were initially missing key lines of code. Analysis of this video data showed that the students' programming practices were highly influenced by their existing physics content knowledge, particularly their knowledge of analytic procedures. While this existing knowledge was often applied in inappropriate circumstances, the students were still able to display a considerable amount of understanding of the physics content and of analytic solution procedures. These observations could not be adequately accommodated by the existing literature on programming comprehension. In extending the resource framework to the task of computational modeling, I model students' practices in terms of three important elements. First, a knowledge base includes resources for understanding physics, math, and programming structures. Second, a mechanism for monitoring and control describes students' expectations as being directed towards numerical, analytic, qualitative, or rote solution approaches, which can be influenced by the problem representation. Third, a set of solution approaches, many of which were identified in this study, describes what aspects of the knowledge base students use and how they use that knowledge to enact their expectations. This framework allows us as researchers to track student discussions and pinpoint the source of difficulties. This work opens up many avenues of potential research. First, this framework gives researchers a vocabulary for extending Resource Theory to other domains of instruction, such as modeling how physics students use graphs. Second, this framework can be used as the basis for modeling expert physicists' programming practices.
Important instructional implications also follow from this research. Namely, as we broaden the use of computational modeling in the physics classroom, our instructional practices should focus on helping students understand the step-by-step nature of programming in contrast to the already salient analytic procedures.

  9. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of peak. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from the recent technology evolution in computing.
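
    The basket-scheduling idea can be caricatured in a few lines. The sketch below is purely illustrative (an invented "transport" kernel and arbitrary basket sizes, nothing from Geant-V itself): vectors of tracks are dispatched to a small worker pool rather than one whole event per worker.

```python
# Toy illustration of basket-level parallelism: baskets of tracks are
# dispatched to a worker pool; the physics step is an invented stand-in.
import numpy as np
from multiprocessing import Pool

def transport(basket):
    # Stand-in for a vectorizable physics step applied to a whole basket.
    energies, steps = basket
    return energies * np.exp(-0.01 * steps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baskets = [(rng.uniform(1, 10, 1024), rng.uniform(0, 5, 1024))
               for _ in range(64)]                  # 64 baskets of 1024 tracks
    with Pool(processes=4) as pool:
        results = pool.map(transport, baskets)      # fine-grained dispatch
    print(sum(r.sum() for r in results))
```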

  10. A numerical differentiation library exploiting parallel architectures

    NASA Astrophysics Data System (ADS)

    Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.

    2009-08-01

    We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in formulas that are accurate to order O(h), O(h²), and O(h⁴), h being the differencing step. The derivatives are calculated via forward, backward, and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures, and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores.

    Program summary
    Program title: NDL (Numerical Differentiation Library)
    Catalogue identifier: AEDG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 73 030
    No. of bytes in distributed program, including test data, etc.: 630 876
    Distribution format: tar.gz
    Programming language: ANSI FORTRAN-77, ANSI C, MPI, OpenMP
    Computer: Distributed systems (clusters), shared memory systems
    Operating system: Linux, Solaris
    Has the code been vectorised or parallelized?: Yes
    RAM: The library uses O(N) internal storage, N being the dimension of the problem
    Classification: 4.9, 4.14, 6.5
    Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. A parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems.
    Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries.
    Restrictions: The library uses only double precision arithmetic.
    Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed.
    Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP, and 4.2 s for the MPI parallel distribution on 2 processors.
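    The core finite-difference technique is easy to sketch. The C++ fragment below shows an O(h²) central-difference gradient with the classic step choice that balances truncation against round-off error; it illustrates the general method described in the abstract, not NDL's actual FORTRAN-77 implementation. The loop over components is exactly the independent work that the OpenMP and MPI versions distribute.

      // Central-difference gradient with a step chosen to trade off truncation
      // error (O(h^2)) against round-off error (O(eps/h)). Illustrative only.
      #include <algorithm>
      #include <cmath>
      #include <cstddef>
      #include <functional>
      #include <limits>
      #include <vector>

      std::vector<double> gradient(const std::function<double(const std::vector<double>&)>& f,
                                   std::vector<double> x) {
          const double eps = std::numeric_limits<double>::epsilon();
          std::vector<double> g(x.size());
          for (std::size_t i = 0; i < x.size(); ++i) {  // each i is independent: trivially parallel
              const double h  = std::cbrt(eps) * std::max(1.0, std::fabs(x[i]));
              const double xi = x[i];
              x[i] = xi + h; const double fp = f(x);
              x[i] = xi - h; const double fm = f(x);
              x[i] = xi;                                // restore the evaluation point
              g[i] = (fp - fm) / (2.0 * h);
          }
          return g;
      }

    Handling bound constraints, as NDL does, would additionally require clipping x[i] ± h to the feasible box and falling back to one-sided formulas near a bound.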

  11. Evolution of computational chemistry, the "launch pad" to scientific computational models: The early days from a personal account, the present status from the TACC-2012 congress, and eventual future applications from the global simulation approach

    NASA Astrophysics Data System (ADS)

    Clementi, Enrico

    2012-06-01

    This is the introductory chapter to the AIP Proceedings volume "Theory and Applications of Computational Chemistry: The First Decade of the Second Millennium", where we discuss the evolution of "computational chemistry". Very early variational computational chemistry developments are reported in Sections 1 to 7, and 11 and 12, by recalling some of the computational chemistry contributions by the author and his collaborators (from the late 1950s to the mid-1990s); perturbation techniques are not considered in this already extended work. Present-day computational chemistry is partly considered in Sections 8 to 10, where more recent studies by the author and his collaborators are discussed, including the Hartree-Fock-Heitler-London method; a more general discussion of present-day computational chemistry is presented in Section 14. The following chapters of this AIP volume provide a view of modern computational chemistry. Future computational chemistry developments can be extrapolated from the chapters of this AIP volume; further, Sections 13 and 15 present an overall analysis of computational chemistry, obtained from the Global Simulation approach, by considering the evolution of scientific knowledge confronted with the opportunities offered by modern computers.

  12. The WSMR Timing System: Toward New Horizons

    NASA Technical Reports Server (NTRS)

    Gilbert, William A.; Stimets, Bob

    1996-01-01

    In 1991, White Sands Missile Range (WSMR) initiated a modernization program for its range timing system. The main focus of this modernization program was to develop a system that was highly accurate, easy to maintain, and portable. The logical decision at the time was to develop a system based solely on Global Positioning System (GPS) technology. Since that time, WSMR has changed its philosophy on how GPS would be utilized for the timing system. This paper will describe WSMR's initial modernization plans for its range timing system and how certain events have led to a modification of these plans.

  13. A Prospectus for the Future Development of a Speech Lab: Hypertext Applications.

    ERIC Educational Resources Information Center

    Berube, David M.

    This paper presents a plan for the next generation of speech laboratories which integrates technologies of modern communication in order to improve and modernize the instructional process. The paper first examines the application of intermediate technologies including audio-video recording and playback, computer assisted instruction and testing…

  14. MODERN LINGUISTICS, ITS DEVELOPMENT AND SCOPE.

    ERIC Educational Resources Information Center

    LEVIN, SAMUEL R.

    THE DEVELOPMENT OF MODERN LINGUISTICS STARTED WITH JONES' DISCOVERY IN 1786 THAT SANSKRIT IS CLOSELY RELATED TO THE CLASSICAL, GERMANIC, AND CELTIC LANGUAGES, AND HAS ADVANCED TO INCLUDE THE APPLICATION OF COMPUTERS IN LANGUAGE ANALYSIS. THE HIGHLIGHTS OF LINGUISTIC RESEARCH HAVE BEEN DE SAUSSURE'S DISTINCTION BETWEEN THE DIACHRONIC AND THE…

  15. Fulbright-Hays Summer Seminars Abroad Program 1989. Egypt: Transition to the Modern World.

    ERIC Educational Resources Information Center

    Office of International Education (ED), Washington, DC.

    This document consists of four papers on various aspects of development in Egypt prepared by participants in the Fulbright-Hays Seminars Abroad Program in Egypt in 1989. Four of the papers are descriptive, one is a lesson plan. The papers included are: (1) "Egypt: Transition to Modern Times" (Katherine Jensen) focuses on the role of…

  16. Europe Report, Science and Technology

    DTIC Science & Technology

    1986-09-30

    to certain basic products of the food industry such as beer, vinegar, spirits, starches, etc. It is also assumed that modern biotechnologies...Czechoslovak food production. This is also the objective of innovative and modernizing programs in the fermented food sectors. The program for the...cattle and improves fodder utilization, assuming balanced doses of fodder. The development of fermentation techniques of production will occur within

  17. Quantitative Investigation of the Technologies That Support Cloud Computing

    ERIC Educational Resources Information Center

    Hu, Wenjin

    2014-01-01

    Cloud computing is dramatically shaping modern IT infrastructure. It virtualizes computing resources, provides elastic scalability, serves as a pay-as-you-use utility, simplifies the IT administrators' daily tasks, enhances the mobility and collaboration of data, and increases user productivity. We focus on providing generalized black-box…

  18. Coordinating Technological Resources in a Non-Technical Profession: The Administrative Computer User Group.

    ERIC Educational Resources Information Center

    Rollo, J. Michael; Marmarchev, Helen L.

    1999-01-01

    The explosion of computer applications in the modern workplace has required student affairs professionals to keep pace with technological advances for office productivity. This article recommends establishing an administrative computer user group, utilizing coordinated web site development, and enhancing working relationships as ways of dealing…

  19. Computer-Based Molecular Modelling: Finnish School Teachers' Experiences and Views

    ERIC Educational Resources Information Center

    Aksela, Maija; Lundell, Jan

    2008-01-01

    Modern computer-based molecular modelling opens up new possibilities for chemistry teaching at different levels. This article presents a case study seeking insight into Finnish school teachers' use of computer-based molecular modelling in teaching chemistry, into the different working and teaching methods used, and their opinions about necessary…

  20. Introduction to the theory of machines and languages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weidhaas, P. P.

    1976-04-01

    This text is intended to be an elementary "guided tour" through some basic concepts of modern computer science. Various models of computing machines and formal languages are studied in detail. Discussions center around questions such as, "What is the scope of problems that can or cannot be solved by computers?"

  1. Cognitive programs: software for attention's executive

    PubMed Central

    Tsotsos, John K.; Kruijne, Wouter

    2014-01-01

    What are the computational tasks that an executive controller for visual attention must solve? This question is posed in the context of the Selective Tuning model of attention. The range of required computations goes beyond top-down bias signals or region-of-interest determinations, and must deal with overt and covert fixations, process timing and synchronization, information routing, memory, matching control to task, spatial localization, priming, and coordination of bottom-up with top-down information. During task execution, results must be monitored to ensure they match expectations. This description includes the kinds of elements that are common in the control of any kind of complex machine or system. We seek a mechanistic integration of the above, in other words, algorithms that accomplish control. Such algorithms operate on representations, transforming a representation of one kind into another, which then forms the input to yet another algorithm. Cognitive Programs (CPs) are hypothesized to capture exactly such representational transformations via stepwise sequences of operations. CPs, an updated and modernized offspring of Ullman's Visual Routines, impose an algorithmic structure on the set of attentional functions and play a role in the overall shaping of attentional modulation of the visual system so that it provides its best performance. This requires that we consider the visual system as a dynamic, yet general-purpose processor tuned to the task and input of the moment. This differs dramatically from the almost universal cognitive and computational views, which regard vision as a passively observing module to which simple questions about percepts can be posed, regardless of task. Differing from Visual Routines, CPs explicitly involve the critical elements of the Visual Task Executive (vTE), the Visual Attention Executive (vAE), and Visual Working Memory (vWM). Cognitive Programs provide the software that directs the actions of the Selective Tuning model of visual attention. PMID:25505430

  2. Accelerating Wright–Fisher Forward Simulations on the Graphics Processing Unit

    PubMed Central

    Lawrie, David S.

    2017-01-01

    Forward Wright–Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processing Unit (CPU), thus limiting their usefulness. However, the single-locus Wright–Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called “embarrassingly parallel,” consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and of programming languages designed to leverage the inherent parallel nature of these processors has allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency. The presented GPU Optimized Wright–Fisher simulation, or “GO Fish” for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. With simulations so accelerated, one can not only do quick parametric bootstrapping of previously estimated parameters, but also use simulated results to calculate the likelihoods and summary statistics of demographic and selection models against real polymorphism data, all without restricting the demographic and selection scenarios that can be modeled or requiring approximations to the single-locus forward algorithm for efficiency. Further, as many of the parallel programming techniques used in this simulation can be applied to other computationally intensive algorithms important in population genetics, GO Fish serves as an exciting template for future research into accelerating computation in evolution. GO Fish is part of the Parallel PopGen Package available at: http://dl42.github.io/ParallelPopGen/. PMID:28768689
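    To see why the single-locus forward algorithm parallelizes so well, consider one generation of genetic drift: every locus resamples its allele count independently of all others. The sketch below uses standard C++ parallelism in place of the CUDA that GO Fish itself uses; it is a toy illustration, not code from the package.

      // One Wright-Fisher drift generation across many independent loci.
      #include <algorithm>
      #include <execution>
      #include <random>
      #include <vector>

      int main() {
          const int N = 10000;                     // diploid population size (2N chromosomes)
          std::vector<double> freq(1 << 20, 0.5);  // one allele frequency per locus

          std::for_each(std::execution::par, freq.begin(), freq.end(), [&](double& x) {
              thread_local std::mt19937 rng(std::random_device{}());  // per-thread RNG stream
              std::binomial_distribution<int> draw(2 * N, x);         // resample allele count
              x = static_cast<double>(draw(rng)) / (2.0 * N);         // genetic drift
          });
      }

    Selection and migration would enter as deterministic adjustments to x before the binomial draw, which keeps each locus independent and the loop embarrassingly parallel.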

  3. [Data collection in anesthesia. Experiences with the inauguration of a new information system].

    PubMed

    Zbinden, A M; Rothenbühler, H; Häberli, B

    1997-06-01

    In many institutions information systems are used to process off-line anaesthesia data for invoices, statistical purposes, and quality assurance. Information systems are also increasingly being used to improve process control in order to reduce costs. Most of today's systems were created when information technology and working processes in anaesthesia were very different from those in use today. Thus, many institutions must now replace their computer systems but are probably not aware of how complex this change will be. Modern information systems mostly use client-server architecture and relational databases. Substituting an old system with a new one is frequently a greater task than designing a system from scratch. This article gives the conclusions drawn from the experience obtained when a large departmental computer system was redesigned in a university hospital. The new system was based on a client-server architecture and was developed by an external company without a preceding conceptual analysis. Modules for patient, anaesthesia, surgical, and pain-service data were included. Data were analysed using a separate statistical package (RS/1 from Bolt Beranek), taking advantage of its powerful precompiled procedures. Development and introduction of the new system took much more time and effort than expected despite the use of modern software tools. Introduction of the new program required intensive user training despite the choice of modern graphic screen layouts. Automatic data-reading systems could not be used, as too many faults occurred and the effort for the user was too high. However, after the initial problems were solved the system turned out to be a powerful tool for quality control (both process and outcome quality), billing, and scheduling. The statistical analysis of the data resulted in meaningful and relevant conclusions. Before creating a new information system, the working processes have to be analysed and, if possible, made more efficient; a detailed programme specification must then be made. A servicing and maintenance contract should be drawn up before the order is given to a company. Time periods of equal duration have to be scheduled for defining, writing, testing, and introducing the program. Modern client-server systems with relational databases are by no means simpler to establish and maintain than previous mainframe systems with hierarchical databases, and thus experienced computer specialists need to be close at hand. We recommend collecting data only once for both statistics and quality control. To verify data quality, a system of random spot-sampling has to be established. Despite the large investments needed to build up such a system, we consider it a powerful tool for helping to solve the difficult daily problems of managing a surgical and anaesthesia unit.

  4. Hydrodynamic Simulations of Protoplanetary Disks with GIZMO

    NASA Astrophysics Data System (ADS)

    Rice, Malena; Laughlin, Greg

    2018-01-01

    Over the past several decades, the field of computational fluid dynamics has rapidly advanced as the range of available numerical algorithms and computationally feasible physical problems has expanded. The development of modern numerical solvers has provided a compelling opportunity to reconsider previously obtained results in search of yet undiscovered effects that may be revealed through longer integration times and more precise numerical approaches. In this study, we compare the results of past hydrodynamic disk simulations with those obtained from modern analytical resources. We focus our study on the GIZMO code (Hopkins 2015), which uses meshless methods to solve the homogeneous Euler equations of hydrodynamics while eliminating problems arising as a result of advection between grid cells. By comparing modern simulations with prior results, we hope to provide an improved understanding of the impact of fluid mechanics upon the evolution of protoplanetary disks.

  5. Mark 4A antenna control system data handling architecture study

    NASA Technical Reports Server (NTRS)

    Briggs, H. C.; Eldred, D. B.

    1991-01-01

    A high-level review was conducted to provide an analysis of the existing architecture used to handle data and implement control algorithms for NASA's Deep Space Network (DSN) antennas and to make system-level recommendations for improving this architecture so that the DSN antennas can support the ever-tightening requirements of the next decade and beyond. It was found that the existing system is seriously overloaded, with processor utilization approaching 100 percent. A number of factors contribute to this overloading, including dated hardware, inefficient software, and a message-passing strategy that depends on serial connections between machines. At the same time, the system has shortcomings and idiosyncrasies that require extensive human intervention. A custom operating system kernel and an obscure programming language exacerbate the problems and should be modernized. A new architecture is presented that addresses these and other issues. Key features of the new architecture include a simplified message passing hierarchy that utilizes a high-speed local area network, redesign of particular processing function algorithms, consolidation of functions, and implementation of the architecture in modern hardware and software using mainstream computer languages and operating systems. The system would also allow incremental hardware improvements as better and faster hardware for such systems becomes available, and costs could potentially be low enough that redundancy would be provided economically. Such a system could support DSN requirements for the foreseeable future, though thorough consideration must be given to hard computational requirements, porting existing software functionality to the new system, and issues of fault tolerance and recovery.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Germain, Shawn

    Nuclear Power Plant (NPP) refueling outages create some of the most challenging activities the utilities face in both tracking and coordinating thousands of activities in a short period of time. Other challenges, including nuclear safety concerns arising from atypical system configurations and resource allocation issues, can create delays and schedule overruns, driving up outage costs. Today the majority of outage communication is done using processes that do not take advantage of advances in modern technologies that enable enhanced communication, collaboration, and information sharing. Some of the common practices include: runners that deliver paper-based requests for approval, radios, telephones, desktop computers, daily schedule printouts, and static whiteboards that are used to display information. Many gains have been made to reduce the challenges facing outage coordinators; however, new opportunities can be realized by utilizing modern technological advancements in communication and information tools that can enhance the collective situational awareness of plant personnel, leading to improved decision-making. Ongoing research as part of the Light Water Reactor Sustainability Program (LWRS) has been targeting NPP outage improvement. As part of this research, various applications of collaborative software have been demonstrated through pilot project utility partnerships. Collaboration software can be utilized as part of the larger concept of Computer-Supported Cooperative Work (CSCW). Collaborative software can be used for emergent issue resolution, Outage Control Center (OCC) displays, and schedule monitoring. Use of collaboration software enables outage staff and subject matter experts (SMEs) to view and update critical outage information from any location on site or off.

  7. Programmable and autonomous computing machine made of biomolecules

    PubMed Central

    Benenson, Yaakov; Paz-Elizur, Tamar; Adar, Rivka; Keinan, Ehud; Livneh, Zvi; Shapiro, Ehud

    2013-01-01

    Devices that convert information from one form into another according to a definite procedure are known as automata. One such hypothetical device is the universal Turing machine [1], which stimulated work leading to the development of modern computers. The Turing machine and its special cases [2], including finite automata [3], operate by scanning a data tape, whose striking analogy to information-encoding biopolymers inspired several designs for molecular DNA computers [4-8]. Laboratory-scale computing using DNA and human-assisted protocols has been demonstrated [9-15], but the realization of computing devices operating autonomously on the molecular scale remains rare [16-20]. Here we describe a programmable finite automaton comprising DNA and DNA-manipulating enzymes that solves computational problems autonomously. The automaton's hardware consists of a restriction nuclease and ligase, the software and input are encoded by double-stranded DNA, and programming amounts to choosing appropriate software molecules. Upon mixing solutions containing these components, the automaton processes the input molecule via a cascade of restriction, hybridization and ligation cycles, producing a detectable output molecule that encodes the automaton's final state, and thus the computational result. In our implementation, 10^12 automata sharing the same software run independently and in parallel on inputs (which could, in principle, be distinct) in 120 μl of solution at room temperature, at a combined rate of 10^9 transitions per second, with a transition fidelity greater than 99.8%, consuming less than 10^-10 W. PMID:11719800
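    Stripped of its biochemistry, the device is a two-state finite automaton over a two-symbol alphabet, and its "programming" is the choice of transition rules. The C++ sketch below runs the same abstraction in software; the transition table is a made-up example (it computes the parity of 'b' symbols), not the rule set encoded by the paper's DNA software molecules.

      // A two-state finite automaton over the alphabet {a, b}.
      #include <array>
      #include <iostream>
      #include <string>

      int main() {
          // transition[state][symbol]: on 'a' keep the state, on 'b' toggle it.
          const std::array<std::array<int, 2>, 2> transition{{{0, 1}, {1, 0}}};
          int state = 0;                           // initial state S0
          for (char c : std::string("abbab"))      // the "input molecule"
              state = transition[state][c - 'a'];
          std::cout << "final state: S" << state << '\n';  // encodes the computational result
      }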

  8. Mechanical Analysis of W78/88-1 Life Extension Program Warhead Design Options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Nathan

    2014-09-01

    A Life Extension Program (LEP) is a program to repair or replace components of nuclear weapons to ensure the ability to meet military requirements. The W78/88-1 LEP encompasses the modernization of two major nuclear weapon reentry systems into an interoperable warhead. Several design concepts exist to provide different options for robust safety and security themes, maximum non-nuclear commonality, and cost. Simulation is one capability used to evaluate the mechanical performance of the designs in various operational environments, plan for system and component qualification efforts, and provide insight into the survivability of the warhead in environments that are not currently testable. The simulation efforts use several Sandia-developed tools through the Advanced Simulation and Computing program, including Cubit for mesh generation, the DART Model Manager, SIERRA codes running on the HPC TLCC2 platforms, DAKOTA, and ParaView. Several programmatic objectives were met using the simulation capability, including: (1) providing early environmental specification estimates that may be used by component designers to understand the severity of the loads their components will need to survive, (2) providing guidance for load levels and configurations for subassembly tests intended to represent operational environments, and (3) recommending design options including modified geometry and material properties. These objectives were accomplished through regular interactions with component, system, and test engineers while using the laboratory's computational infrastructure to effectively perform ensembles of simulations. Because NNSA has decided to defer the LEP program, simulation results are being documented and models are being archived for future reference. However, some advanced and exploratory efforts will continue to mature key technologies, using the results from these and ongoing simulations for design insights, test planning, and model validation.

  9. Evaluation of Turbulence-Model Performance in Jet Flows

    NASA Technical Reports Server (NTRS)

    Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.

    2001-01-01

    The importance of reducing jet noise in both commercial and military aircraft applications has made jet acoustics a significant area of research. A technique for jet noise prediction commonly employed in practice is the MGB approach, based on the Lighthill acoustic analogy. This technique requires as aerodynamic input mean flow quantities and turbulence quantities like the kinetic energy and the dissipation. The purpose of the present paper is to assess existing capabilities for predicting these aerodynamic inputs. Two modern Navier-Stokes flow solvers, coupled with several modern turbulence models, are evaluated by comparison with experiment for their ability to predict mean flow properties in a supersonic jet plume. Potential weaknesses are identified for further investigation. Another comparison with similar intent is discussed by Barber et al. The ultimate goal of this research is to develop a reliable flow solver applicable to the low-noise, propulsion-efficient, nozzle exhaust systems being developed in NASA focused programs. These programs address a broad range of complex nozzle geometries operating in high temperature, compressible, flows. Seiner et al. previously discussed the jet configuration examined here. This convergent-divergent nozzle with an exit diameter of 3.6 inches was designed for an exhaust Mach number of 2.0 and a total temperature of 1680 F. The acoustic and aerodynamic data reported by Seiner et al. covered a range of jet total temperatures from 104 F to 2200 F at the fully-expanded nozzle pressure ratio. The aerodynamic data included centerline mean velocity and total temperature profiles. Computations were performed independently with two computational fluid dynamics (CFD) codes, ISAAC and PAB3D. Turbulence models employed include the k-epsilon model, the Gatski-Speziale algebraic-stress model and the Girimaji model, with and without the Sarkar compressibility correction. Centerline values of mean velocity and mean temperature are compared with experimental data.

  10. Petascale Many Body Methods for Complex Correlated Systems

    NASA Astrophysics Data System (ADS)

    Pruschke, Thomas

    2012-02-01

    Correlated systems constitute an important class of materials in modern condensed matter physics. Correlations among electrons are at the heart of all ordering phenomena and of many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet, theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the-art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handed is quite obviously not possible. The NSF-OISE funded PIRE collaboration "Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems" is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educating the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in the efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to make them working tools for the growing community.

  11. Genometa--a fast and accurate classifier for short metagenomic shotgun reads.

    PubMed

    Davenport, Colin F; Neugebauer, Jens; Beckmann, Nils; Friedrich, Benedikt; Kameri, Burim; Kokott, Svea; Paetow, Malte; Siekmann, Björn; Wieding-Drewes, Matthias; Wienhöfer, Markus; Wolf, Stefan; Tümmler, Burkhard; Ahlers, Volker; Sprengel, Frauke

    2012-01-01

    Metagenomic studies use high-throughput sequence data to investigate microbial communities in situ. However, considerable challenges remain in the analysis of these data, particularly with regard to speed and reliable analysis of microbial species as opposed to higher level taxa such as phyla. We here present Genometa, a computationally undemanding graphical user interface program that enables identification of bacterial species and gene content from datasets generated by inexpensive high-throughput short read sequencing technologies. Our approach was first verified on two simulated metagenomic short read datasets, detecting 100% and 94% of the bacterial species included with few false positives or false negatives. Subsequent comparative benchmarking analysis against three popular metagenomic algorithms on an Illumina human gut dataset revealed Genometa to attribute the most reads to bacteria at species level (i.e. including all strains of that species) and demonstrate similar or better accuracy than the other programs. Lastly, speed was demonstrated to be many times that of BLAST due to the use of modern short read aligners. Our method is highly accurate if bacteria in the sample are represented by genomes in the reference sequence but cannot find species absent from the reference. This method is one of the most user-friendly and resource efficient approaches and is thus feasible for rapidly analysing millions of short reads on a personal computer. The Genometa program, a step by step tutorial and Java source code are freely available from http://genomics1.mh-hannover.de/genometa/ and on http://code.google.com/p/genometa/. This program has been tested on Ubuntu Linux and Windows XP/7.

  12. China's Future SSBN Command and Control Structure

    DTIC Science & Technology

    2016-11-01

    China's ongoing modernization program is transforming the country's nuclear arsenal from one consisting of a few...also be mediated by bureaucratic politics, including the time-honored tradition of interservice rivalry. The emergent SSBN fleet may represent a prime...represent a reliable source of resources. China has dedicated substantial resources to undertaking a nuclear modernization program designed to ensure

  13. Exploiting Vector and Multicore Parallelism for Recursive, Data- and Task-Parallel Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Bin; Krishnamoorthy, Sriram; Agrawal, Kunal

    Modern hardware contains parallel execution resources that are well suited for data parallelism (vector units) and for task parallelism (multicores). However, most work on parallel scheduling focuses on one type of hardware or the other. In this work, we present a scheduling framework that allows for a unified treatment of task and data parallelism. Our key insight is an abstraction, task blocks, that uniformly handles data-parallel iterations and task-parallel tasks, allowing them to be scheduled on vector units or executed independently on multiple cores. Our framework allows us to define schedulers that can dynamically select between executing task blocks on vector units or multicores. We show that these schedulers are asymptotically optimal, and deliver the maximum amount of parallelism available in computation trees. To evaluate our schedulers, we develop program transformations that can convert mixed data- and task-parallel programs into task-block-based programs. Using a prototype instantiation of our scheduling framework, we show that, on an 8-core system, we can simultaneously exploit vector and multicore parallelism to achieve 14×-108× speedup over sequential baselines.
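    A minimal C++ rendering of the task-block idea follows: one abstraction holds an iteration range plus a body, and a scheduler decides whether to run it as a tight (vectorizable) loop or to split it across threads. The names and the size heuristic are invented for illustration; the paper's schedulers make this choice dynamically and with optimality guarantees.

      // A "task block" that can run data-parallel or task-parallel.
      #include <algorithm>
      #include <cstddef>
      #include <functional>
      #include <thread>
      #include <vector>

      struct TaskBlock {
          std::size_t begin, end;                 // iteration range
          std::function<void(std::size_t)> body;  // per-iteration work
      };

      // Data-parallel execution: a tight loop the compiler can vectorize.
      void run_vectorized(const TaskBlock& tb) {
          for (std::size_t i = tb.begin; i < tb.end; ++i) tb.body(i);
      }

      // Task-parallel execution: split the range across worker threads.
      void run_multicore(const TaskBlock& tb, unsigned workers) {
          std::vector<std::thread> pool;
          const std::size_t chunk = (tb.end - tb.begin + workers - 1) / workers;
          for (unsigned w = 0; w < workers; ++w) {
              const std::size_t lo = tb.begin + w * chunk;
              const std::size_t hi = std::min(tb.end, lo + chunk);
              if (lo < hi)
                  pool.emplace_back([&tb, lo, hi] {
                      for (std::size_t i = lo; i < hi; ++i) tb.body(i);
                  });
          }
          for (auto& t : pool) t.join();
      }

      // Toy scheduling policy: small blocks vectorize, large blocks fan out.
      void run(const TaskBlock& tb) {
          if (tb.end - tb.begin < 4096) run_vectorized(tb);
          else run_multicore(tb, std::thread::hardware_concurrency());
      }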

  14. Advancing Creative Visual Thinking with Constructive Function-Based Modelling

    ERIC Educational Resources Information Center

    Pasko, Alexander; Adzhiev, Valery; Malikova, Evgeniya; Pilyugin, Victor

    2013-01-01

    Modern education technologies are destined to reflect the realities of a modern digital age. The juxtaposition of real and synthetic (computer-generated) worlds as well as a greater emphasis on visual dimension are especially important characteristics that have to be taken into account in learning and teaching. We describe the ways in which an…

  15. Brønsted acidity of protic ionic liquids: a modern ab initio valence bond theory perspective.

    PubMed

    Patil, Amol Baliram; Mahadeo Bhanage, Bhalchandra

    2016-09-21

    Room temperature ionic liquids (ILs), especially protic ionic liquids (PILs), are used in many areas of the chemical sciences. Ionicity, the extent of proton transfer, is a key parameter which determines many physicochemical properties and, in turn, the suitability of PILs for various applications. The spectrum of computational chemistry techniques applied to investigate ionic liquids includes classical molecular dynamics, Monte Carlo simulations, ab initio molecular dynamics, Density Functional Theory (DFT), CCSD(T), etc. At the other end of the spectrum is another computational approach: modern ab initio Valence Bond Theory (VBT). VBT differs from molecular orbital theory based methods in the expression of the molecular wave function. The molecular wave function in the valence bond ansatz is expressed as a linear combination of valence bond structures. These structures include covalent and ionic structures explicitly. Modern ab initio valence bond theory calculations of representative primary and tertiary ammonium protic ionic liquids indicate that modern ab initio valence bond theory can be employed to assess the acidity and ionicity of protic ionic liquids a priori.
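    For readers unfamiliar with the ansatz, the VB expansion mentioned above can be written schematically as follows (generic textbook notation, not the paper's own equations); the relative weights of the ionic structures are what make the method a natural probe of ionicity:

      % Valence bond wave function: a linear combination of VB structures,
      % with covalent and ionic structures entering explicitly.
      \Psi_{\mathrm{VB}} = \sum_{K} C_K \, \Phi_K
        = C_{\mathrm{cov}} \Phi_{\mathrm{cov}} + \sum_{j} C_{\mathrm{ion},j} \Phi_{\mathrm{ion},j}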

  16. An anatomy of industrial robots and their controls

    NASA Astrophysics Data System (ADS)

    Luh, J. Y. S.

    1983-02-01

    The modernization of manufacturing facilities by means of automation represents an approach for increasing productivity in industry. The three existing types of automation are related to continuous process controls, the use of transfer conveyor methods, and the employment of programmable automation for the low-volume batch production of discrete parts. Industrial robots, which are defined as computer-controlled mechanical manipulators, belong to the area of programmable automation. Typically, the robots perform tasks of arc welding, paint spraying, or foundry operation. One may assign a robot to perform a variety of job assignments simply by changing the appropriate computer program. The present investigation is concerned with an evaluation of the potential of the robot on the basis of its basic structure and controls. It is found that robots function well in limited areas of industry. If the range of tasks which robots can perform is to be expanded, it is necessary to provide multiple-task sensors, or special tooling, or even automatic tooling.

  17. Molecular structure input on the web.

    PubMed

    Ertl, Peter

    2010-02-02

    A molecule editor, that is, a program for the input and editing of molecules, is an indispensable part of every cheminformatics or molecular processing system. This review focuses on a special type of molecule editor, namely those used for molecular structure input on the web. Scientific computing is now moving more and more in the direction of web services and cloud computing, with servers scattered all around the Internet. Thus a web browser has become the universal scientific user interface, and a tool to edit molecules directly within the web browser is essential. The review covers the history of web-based structure input, starting with simple text entry boxes and early molecule editors based on clickable maps, before moving to the current situation dominated by Java applets. One typical example - the popular JME Molecule Editor - will be described in more detail. Modern Ajax server-side molecule editors are also presented. And finally, the possible future direction of web-based molecule editing, based on technologies like JavaScript and Flash, is discussed.

  18. National Toxicology Program

    MedlinePlus

    ... develops and applies tools of modern toxicology and molecular biology to identify substances in the environment that may ... application of new technologies for modern toxicology and molecular biology. A world leader in toxicology research, NTP has ...

  19. Factors Affecting Teachers' Adoption of Educational Computer Games: A Case Study

    ERIC Educational Resources Information Center

    Kebritchi, Mansureh

    2010-01-01

    Even though computer games hold considerable potential for engaging and facilitating learning among today's children, the adoption of modern educational computer games is still meeting significant resistance in K-12 education. The purpose of this paper is to inform educators and instructional designers on factors affecting teachers' adoption of…

  20. Let Me Share a Secret with You! Teaching with Computers.

    ERIC Educational Resources Information Center

    de Vasconcelos, Maria

    The author describes her experiences teaching a computer-enhanced Modern Poetry course. The author argues that using computers enhances the concept of the classroom as learning community. It was the author's experience that students' postings on the discussion board created an atmosphere that encouraged student involvement, as opposed to the…

  1. The State of the Art in Information Handling. Operation PEP/Executive Information Systems.

    ERIC Educational Resources Information Center

    Summers, J. K.; Sullivan, J. E.

    This document explains recent developments in computer science and information systems of interest to the educational manager. A brief history of computers is included, together with an examination of modern computers' capabilities. Various features of card, tape, and disk information storage systems are presented. The importance of time-sharing…

  2. Computer-Mediated Communication as an Autonomy-Enhancement Tool for Advanced Learners of English

    ERIC Educational Resources Information Center

    Wach, Aleksandra

    2012-01-01

    This article examines the relevance of modern technology for the development of learner autonomy in the process of learning English as a foreign language. Computer-assisted language learning and computer-mediated communication (CMC) appear to be particularly conducive to fostering autonomous learning, as they naturally incorporate many elements of…

  3. Redesigning the Quantum Mechanics Curriculum to Incorporate Problem Solving Using a Computer Algebra System

    NASA Astrophysics Data System (ADS)

    Roussel, Marc R.

    1999-10-01

    One of the traditional obstacles to learning quantum mechanics is the relatively high level of mathematical proficiency required to solve even routine problems. Modern computer algebra systems are now sufficiently reliable that they can be used as mathematical assistants to alleviate this difficulty. In the quantum mechanics course at the University of Lethbridge, the traditional three lecture hours per week have been replaced by two lecture hours and a one-hour computer-aided problem solving session using a computer algebra system (Maple). While this somewhat reduces the number of topics that can be tackled during the term, students have a better opportunity to familiarize themselves with the underlying theory with this course design. Maple is also available to students during examinations. The use of a computer algebra system expands the class of feasible problems during a time-limited exercise such as a midterm or final examination. A modern computer algebra system is a complex piece of software, so some time needs to be devoted to teaching the students its proper use. However, the advantages to the teaching of quantum mechanics appear to outweigh the disadvantages.

  4. A view of Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Pentti Kanerva is working on a new class of computers called pattern computers. Pattern computers may close the gap between the capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and the capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. An overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.
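    SDM itself is compact enough to sketch. In the toy C++ version below, an address activates every hard location within a Hamming-distance radius; writes add the data bits into counters at the active locations, and reads majority-vote the pooled counters back into a bit pattern. The sizes and radius are toy values chosen for illustration, not Kanerva's recommended parameters.

      // A toy sparse distributed memory (SDM).
      #include <array>
      #include <bitset>
      #include <cstddef>
      #include <cstdlib>
      #include <vector>

      constexpr int N = 256;                  // word length in bits
      constexpr int M = 1000;                 // number of hard locations
      constexpr std::size_t RADIUS = 112;     // activation radius (Hamming distance)

      struct SDM {
          std::vector<std::bitset<N>> addresses = std::vector<std::bitset<N>>(M);
          std::vector<std::array<int, N>> counters = std::vector<std::array<int, N>>(M);

          SDM() {  // fixed random hard-location addresses
              for (auto& a : addresses)
                  for (int i = 0; i < N; ++i) a[i] = std::rand() & 1;
          }

          void write(const std::bitset<N>& addr, const std::bitset<N>& data) {
              for (int l = 0; l < M; ++l)
                  if ((addresses[l] ^ addr).count() <= RADIUS)      // location is active
                      for (int i = 0; i < N; ++i) counters[l][i] += data[i] ? 1 : -1;
          }

          std::bitset<N> read(const std::bitset<N>& addr) const {
              std::array<long, N> sum{};
              for (int l = 0; l < M; ++l)
                  if ((addresses[l] ^ addr).count() <= RADIUS)
                      for (int i = 0; i < N; ++i) sum[i] += counters[l][i];
              std::bitset<N> out;
              for (int i = 0; i < N; ++i) out[i] = sum[i] > 0;      // majority vote per bit
              return out;
          }
      };

    Because each pattern is smeared across many locations, a read from a noisy address still converges to the stored pattern, which is the property that makes SDM attractive for the pattern-recognition gap described above.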

  5. Advanced Methodologies for NASA Science Missions

    NASA Astrophysics Data System (ADS)

    Hurlburt, N. E.; Feigelson, E.; Mentzel, C.

    2017-12-01

    Most of NASA's commitment to computational space science involves the organization and processing of Big Data from space-based satellites, and the calculation of advanced physical models based on these datasets. But considerable thought is also needed about which computations are required. The science questions addressed by space data are so diverse and complex that traditional analysis procedures are often inadequate. The knowledge and skills of the statistician, applied mathematician, and algorithmic computer scientist must be incorporated into programs that currently emphasize engineering and physical science. NASA's culture and administrative mechanisms take full cognizance that major advances in space science are driven by improvements in instrumentation. But it is less well recognized that new instruments and science questions give rise to new challenges in the treatment of satellite data after it is telemetered to the ground. These issues might be divided into two stages: data reduction through software pipelines developed within NASA mission centers; and science analysis that is performed by hundreds of space scientists dispersed through NASA, U.S. universities, and abroad. Both stages benefit from the latest statistical and computational methods; in some cases, the science result is completely inaccessible using traditional procedures. This paper will review the current state of NASA and present example applications using modern methodologies.

  6. Alignment-free genetic sequence comparisons: a review of recent approaches by word analysis.

    PubMed

    Bonham-Carter, Oliver; Steele, Joe; Bastola, Dhundy

    2014-11-01

    Modern sequencing and genome assembly technologies have provided a wealth of data, which will soon require an analysis by comparison for discovery. Sequence alignment, a fundamental task in bioinformatics research, may be used but with some caveats. Seminal techniques and methods from dynamic programming are proving ineffective for this work owing to their inherent computational expense when processing large amounts of sequence data. These methods are prone to giving misleading information because of genetic recombination, genetic shuffling and other inherent biological events. New approaches from information theory, frequency analysis and data compression are available and provide powerful alternatives to dynamic programming. These new methods are often preferred, as their algorithms are simpler and are not affected by synteny-related problems. In this review, we provide a detailed discussion of computational tools which stem from alignment-free methods based on statistical analysis of word frequencies. We provide several clear examples to demonstrate applications and the interpretations over several different areas of alignment-free analysis such as base-base correlations, feature frequency profiles, compositional vectors, an improved string composition and the D2 statistic metric. Additionally, we provide a detailed discussion and an example of analysis by Lempel-Ziv techniques from data compression.
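    The word-frequency idea at the core of these methods fits in a few lines. The C++ sketch below builds normalized k-mer profiles and compares them with a cosine similarity; this is only the basic scheme, standing in for the refined measures the review covers (feature frequency profiles, compositional vectors, the D2 statistic).

      // Alignment-free comparison via k-mer frequency profiles.
      #include <cmath>
      #include <iostream>
      #include <string>
      #include <unordered_map>

      using Profile = std::unordered_map<std::string, double>;

      // Count all k-mers, then normalize counts to frequencies.
      Profile profile(const std::string& seq, std::size_t k) {
          Profile p;
          for (std::size_t i = 0; i + k <= seq.size(); ++i) p[seq.substr(i, k)] += 1.0;
          for (auto& [word, count] : p) count /= double(seq.size() - k + 1);
          return p;
      }

      // Cosine similarity between two sparse profiles.
      double cosine(const Profile& a, const Profile& b) {
          double dot = 0, na = 0, nb = 0;
          for (const auto& [w, v] : a) {
              na += v * v;
              if (auto it = b.find(w); it != b.end()) dot += v * it->second;
          }
          for (const auto& [w, v] : b) nb += v * v;
          return dot / (std::sqrt(na) * std::sqrt(nb));
      }

      int main() {
          const auto p = profile("ACGTACGTGACG", 3);
          const auto q = profile("ACGTTCGTGACG", 3);
          std::cout << "cosine similarity = " << cosine(p, q) << '\n';
      }

    No alignment is ever computed, which is why such methods scale to whole-genome comparisons where dynamic programming becomes prohibitively expensive.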

  7. Technology and the Modern Library.

    ERIC Educational Resources Information Center

    Boss, Richard W.

    1984-01-01

    Overview of the impact of information technology on libraries highlights turnkey vendors, bibliographic utilities, commercial suppliers of records, state and regional networks, computer-to-computer linkages, remote database searching, terminals and microcomputers, building local databases, delivery of information, digital telefacsimile,…

  8. Virtual reality computer simulation.

    PubMed

    Grantcharov, T P; Rosenberg, J; Pahle, E; Funch-Jensen, P

    2001-03-01

    Objective assessment of psychomotor skills should be an essential component of a modern surgical training program. There are computer systems that can be used for this purpose, but their wide application is not yet generally accepted. The aim of this study was to validate the role of virtual reality computer simulation as a method for evaluating surgical laparoscopic skills. The study included 14 surgical residents. On day 1, they performed two runs of all six tasks on the Minimally Invasive Surgical Trainer, Virtual Reality (MIST VR). On day 2, they performed a laparoscopic cholecystectomy on living pigs; afterward, they were tested again on the MIST VR. A group of experienced surgeons evaluated the trainees' performance on the animal operation, giving scores for total performance error and economy of motion. During the tasks on the MIST VR, errors and non-economy of movements for the left and right hand were also recorded. There were significant correlations between error scores in vivo and three of the six in vitro tasks (p < 0.05). In vivo economy scores correlated significantly with non-economy right-hand scores for five of the six tasks and with non-economy left-hand scores for one of the six tasks (p < 0.05). In this study, laparoscopic performance in the animal model correlated significantly with performance on the computer simulator. Thus, the computer model seems to be a promising objective method for the assessment of laparoscopic psychomotor skills.

  9. Neuroimaging Study Designs, Computational Analyses and Data Provenance Using the LONI Pipeline

    PubMed Central

    Dinov, Ivo; Lozev, Kamen; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Zamanyan, Alen; Chakrapani, Shruthi; Van Horn, John; Parker, D. Stott; Magsipoc, Rico; Leung, Kelvin; Gutman, Boris; Woods, Roger; Toga, Arthur

    2010-01-01

    Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges—management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu. PMID:20927408

  10. Height modernization program and subsidence study in northern Ohio.

    DOT National Transportation Integrated Search

    2013-11-01

    This study is an initiative focused on establishing accurate, reliable heights using Global Navigation Satellite System (GNSS) technology in conjunction with traditional leveling, gravity, and modern remote sensing information. The traditional method...

  11. BiteScis: Connecting K-12 teachers with science graduate students to produce lesson plans on modern science research

    NASA Astrophysics Data System (ADS)

    Battersby, Cara

    2016-01-01

    Many students graduate high school having never learned about the process and people behind modern science research. The BiteScis program addresses this gap by providing easily implemented lesson plans that incorporate the whos, whats, and hows of today's scientific discoveries. We bring together practicing scientists (motivated graduate students from the selective communicating-science conference, ComSciCon) with K-12 science teachers to produce, review, and disseminate K-12 lesson plans based on modern science research. These lesson plans vary in topic from environmental science to neurobiology to astrophysics, and involve a range of activities from laboratory exercises to art projects, debates, or group discussion. An integral component of the program is a series of short, "bite-size" articles on modern science research written for K-12 students. The "bite-size" articles and lesson plans will be made freely available online in an easily searchable web interface that includes association with a variety of curriculum standards. This ongoing program is in its first year, with about 15 lesson plans produced to date.

  12. Demand generation activities and modern contraceptive use in urban areas of four countries: a longitudinal evaluation.

    PubMed

    Speizer, Ilene S; Corroon, Meghan; Calhoun, Lisa; Lance, Peter; Montana, Livia; Nanda, Priya; Guilkey, David

    2014-11-06

    Family planning is crucial for preventing unintended pregnancies and for improving maternal and child health and well-being. In urban areas where there are large inequities in family planning use, particularly among the urban poor, programs are needed to increase access to and use of contraception among those most in need. This paper presents the midterm evaluation findings of the Urban Reproductive Health Initiative (Urban RH Initiative) programs, funded by the Bill & Melinda Gates Foundation, that are being implemented in 4 countries: India (Uttar Pradesh), Kenya, Nigeria, and Senegal. Between 2010 and 2013, the Measurement, Learning & Evaluation (MLE) project collected baseline and 2-year longitudinal follow-up data from women in target study cities to examine the role of demand generation activities undertaken as part of the Urban RH Initiative programs. Evaluation results demonstrate that, in each country where it was measured, outreach by community health or family planning workers as well as local radio programs were significantly associated with increased use of modern contraceptive methods. In addition, in India and Nigeria, television programs had a significant effect on modern contraceptive use, and in Kenya and Nigeria, the program slogans and materials that were blanketed across the cities (eg, leaflets/brochures distributed at health clinics and the program logo placed on all forms of materials, from market umbrellas to health facility signs and television programs) were also significantly associated with modern method use. Our results show that targeted, multilevel demand generation activities can make an important contribution to increasing modern contraceptive use in urban areas and could impact Millennium Development Goals for improved maternal and child health and access to reproductive health for all.

  13. Modern Gemini-Approach to Technology Development for Human Space Exploration

    NASA Technical Reports Server (NTRS)

    White, Harold

    2010-01-01

    In NASA's plan to put men on the moon, there were three sequential programs: Mercury, Gemini, and Apollo. The Gemini program was used to develop and integrate the technologies that would be necessary for the Apollo program to successfully put men on the moon. We would like to present an analogous modern approach that leverages legacy ISS hardware designs and integrates developing new technologies into a flexible architecture. This new architecture is scalable, sustainable, and can be used to establish human exploration infrastructure beyond low Earth orbit and into deep space.

  14. TIMESERIESSTREAMING.VI: LabVIEW program for reliable data streaming of large analog time series

    NASA Astrophysics Data System (ADS)

    Czerwinski, Fabian; Oddershede, Lene B.

    2011-02-01

    With modern data acquisition devices that work fast and very precisely, scientists often face the task of dealing with huge amounts of data. These need to be rapidly processed and stored onto a hard disk. We present a LabVIEW program which reliably streams analog time series at MHz sampling rates. Its run time has virtually no limitation. We explicitly show how to use the program to extract time series from two experiments: for a photodiode detection system that tracks the position of an optically trapped particle, and for a measurement of ionic current through a glass capillary. The program is easy to use and versatile, as the input can be any type of analog signal. Also, the data streaming software is simple, highly reliable, and can be easily customized to include, e.g., real-time power spectral analysis and Allan variance noise quantification.

    Program summary
    Program title: TimeSeriesStreaming.VI
    Catalogue identifier: AEHT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 250
    No. of bytes in distributed program, including test data, etc.: 63 259
    Distribution format: tar.gz
    Programming language: LabVIEW (http://www.ni.com/labview/)
    Computer: Any machine running LabVIEW 8.6 or higher
    Operating system: Windows XP and Windows 7
    RAM: 60-360 Mbyte
    Classification: 3
    Nature of problem: For numerous scientific and engineering applications, it is highly desirable to have an efficient, reliable, and flexible program to perform data streaming of time series sampled with high frequencies and possibly for long time intervals. This type of data acquisition often produces very large amounts of data not easily streamed onto a computer hard disk using standard methods.
    Solution method: This LabVIEW program is developed to directly stream any kind of time series onto a hard disk. Due to optimized timing and usage of computational resources, such as multicores and protocols for memory usage, this program provides extremely reliable data acquisition. In particular, the program is optimized to deal with large amounts of data, e.g., taken with high sampling frequencies and over long time intervals. The program can be easily customized for time series analyses.
    Restrictions: Only tested in Windows-operating LabVIEW environments; must use the TDMS format; acquisition cards must be LabVIEW compatible, with the DAQmx driver installed.
    Running time: As desirable: microseconds to hours
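    The generic pattern behind such streaming software is a producer-consumer pair: the acquisition loop fills buffers while a writer thread flushes completed buffers to disk, so slow disk writes never stall the sampling. The C++ sketch below shows that pattern only; it is not a translation of the LabVIEW VI, and the file name and buffer sizes are invented.

      // Producer-consumer data streaming to disk (illustrative sketch only).
      #include <condition_variable>
      #include <fstream>
      #include <mutex>
      #include <queue>
      #include <thread>
      #include <vector>

      int main() {
          std::queue<std::vector<double>> full;  // buffers ready to be written
          std::mutex m;
          std::condition_variable cv;
          bool stop = false;

          std::thread writer([&] {
              std::ofstream out("timeseries.bin", std::ios::binary);
              for (;;) {
                  std::unique_lock<std::mutex> lk(m);
                  cv.wait(lk, [&] { return stop || !full.empty(); });
                  if (full.empty()) return;          // stopped and drained
                  auto buf = std::move(full.front());
                  full.pop();
                  lk.unlock();                       // write outside the lock
                  out.write(reinterpret_cast<const char*>(buf.data()),
                            buf.size() * sizeof(double));
              }
          });

          for (int block = 0; block < 100; ++block) {   // stands in for the DAQ read loop
              std::vector<double> buf(65536, block);    // one block of "samples"
              { std::lock_guard<std::mutex> lk(m); full.push(std::move(buf)); }
              cv.notify_one();
          }
          { std::lock_guard<std::mutex> lk(m); stop = true; }
          cv.notify_all();
          writer.join();
      }

    Decoupling acquisition from disk I/O in this way is what allows run times with "virtually no limitation": the producer never blocks on the disk as long as buffers drain on average as fast as they fill.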

  15. III International Conference on Laser and Plasma Researches and Technologies

    NASA Astrophysics Data System (ADS)

    2017-12-01

    A.P. Kuznetsov and S.V. Genisaretskaya: The III Conference on Plasma and Laser Research and Technologies took place from January 24 to January 27, 2017 at the National Research Nuclear University "MEPhI" (NRNU MEPhI). The Conference was organized by the Institute for Laser and Plasma Technologies and was supported by the Competitiveness Program of NRNU MEPhI. The conference program consisted of nine sections:
    • Laser physics and its applications
    • Plasma physics and its applications
    • Laser, plasma and radiation technologies in industry
    • Physics of extreme light fields
    • Controlled thermonuclear fusion
    • Modern problems of theoretical physics
    • Challenges in physics of solid state, functional materials and nanosystems
    • Particle accelerators and radiation technologies
    • Modern trends of quantum metrology
    The conference is based on the following scientific fields:
    • Laser, plasma and radiation technologies in industry, energetics, and medicine
    • Photonics, quantum metrology, optical information processing
    • New functional materials, metamaterials, “smart” alloys and quantum systems
    • Ultrahigh optical fields, high-power lasers, Mega Science facilities
    • High-temperature plasma physics and environmentally friendly energetics based on controlled thermonuclear fusion
    • Spectroscopic synchrotron, neutron, and laser research methods; quantum mechanical calculation and computer modelling of condensed media and nanostructures
    More than 250 specialists took part in the Conference, representing leading Russian scientific research centers and universities (National Research Centre "Kurchatov Institute", A.M. Prokhorov General Physics Institute, P.N. Lebedev Physical Institute, Troitsk Institute for Innovation and Fusion Research, Joint Institute for Nuclear Research, Moscow Institute of Physics and Technology, and others) as well as leading scientific centers and universities from Germany, France, the USA, Canada, and Japan. We would like to thank heartily all of the speakers, participants, and organizing and program committee members for their contribution to the conference.

  16. Twenty-Five Year Site Plan FY2013 - FY2037

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, William H.

    2012-07-12

    Los Alamos National Laboratory (the Laboratory) is the nation's premier national security science laboratory. Its mission is to develop and apply science and technology to ensure the safety, security, and reliability of the United States (U.S.) nuclear stockpile; reduce the threat of weapons of mass destruction, proliferation, and terrorism; and solve national problems in defense, energy, and the environment. The fiscal year (FY) 2013-2037 Twenty-Five Year Site Plan (TYSP) is a vital component for planning to meet the National Nuclear Security Administration (NNSA) commitment to ensure the U.S. has a safe, secure, and reliable nuclear deterrent. The Laboratory also uses the TYSP as an integrated planning tool to guide development of an efficient and responsive infrastructure that effectively supports the Laboratory's missions and workforce. Emphasizing the Laboratory's core capabilities, this TYSP reflects the Laboratory's role as a prominent contributor to NNSA missions through its programs and campaigns. The Laboratory is aligned with Nuclear Security Enterprise (NSE) modernization activities outlined in the NNSA Strategic Plan (May 2011), which include: (1) ensuring laboratory plutonium space effectively supports pit manufacturing and enterprise-wide special nuclear materials consolidation; (2) constructing the Chemistry and Metallurgy Research Replacement Nuclear Facility (CMRR-NF); (3) establishing shared user facilities to more cost-effectively manage high-value experimental, computational, and production capabilities; and (4) modernizing enduring facilities while reducing the excess facility footprint. This TYSP is viewed by the Laboratory as a vital planning tool to develop an efficient and responsive infrastructure. Long-range facility and infrastructure development planning is critical to assure sustainment and modernization. Out-year re-investment is essential for sustaining existing facilities and will be re-evaluated on an annual basis. At the same time, major modernization projects will require new line-item funding. This document is, in essence, a roadmap that defines a path forward for the Laboratory to modernize, streamline, consolidate, and sustain its infrastructure to meet its national security mission.

  17. Computed tomographic evidence of atherosclerosis in the mummified remains of humans from around the world.

    PubMed

    Thompson, Randall C; Allam, Adel H; Zink, Albert; Wann, L Samuel; Lombardi, Guido P; Cox, Samantha L; Frohlich, Bruno; Sutherland, M Linda; Sutherland, James D; Frohlich, Thomas C; King, Samantha I; Miyamoto, Michael I; Monge, Janet M; Valladolid, Clide M; El-Halim Nur El-Din, Abd; Narula, Jagat; Thompson, Adam M; Finch, Caleb E; Thomas, Gregory S

    2014-06-01

    Although atherosclerosis is widely thought to be a disease of modernity, computed tomographic evidence of atherosclerosis has been found in the bodies of a large number of mummies. This article reviews the findings of atherosclerotic calcifications in the remains of ancient people: humans who lived across a very wide span of human history and over most of the inhabited globe. These people had a wide range of diets and lifestyles, and traditional modern risk factors do not fully explain the presence and easy detectability of this disease. Nontraditional risk factors such as the inhalation of cooking fire smoke and chronic infection or inflammation might have been important atherogenic factors in ancient times. Study of the genetic and environmental risk factors for atherosclerosis in ancient people may offer insights into this common modern disease. Copyright © 2014 World Heart Federation (Geneva). Published by Elsevier B.V. All rights reserved.

  18. The Capability Portfolio Analysis Tool (CPAT): A Mixed Integer Linear Programming Formulation for Fleet Modernization Analysis (Version 2.0.2).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waddell, Lucas; Muldoon, Frank; Henry, Stephen Michael

    In order to effectively plan the management and modernization of their large and diverse fleets of vehicles, Program Executive Office Ground Combat Systems (PEO GCS) and Program Executive Office Combat Support and Combat Service Support (PEO CS&CSS) commissioned the development of a large-scale portfolio planning optimization tool. This software, the Capability Portfolio Analysis Tool (CPAT), creates a detailed schedule that optimally prioritizes the modernization or replacement of vehicles within the fleet, respecting numerous business rules associated with fleet structure, budgets, industrial base, research and testing, etc., while maximizing overall fleet performance through time. This paper contains a thorough documentation of the terminology, parameters, variables, and constraints that comprise the fleet management mixed integer linear programming (MILP) mathematical formulation. This paper, which is an update to the original CPAT formulation document published in 2015 (SAND2015-3487), covers the formulation of important new CPAT features.
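
    The full CPAT formulation is extensive, but the core MILP pattern the abstract describes (binary modernize-or-replace decisions scheduled across time periods, per-period budget constraints, and an objective that maximizes fleet performance over time) can be illustrated with a toy model. The sketch below uses the open-source PuLP library; every set, coefficient, and rule in it is a hypothetical stand-in, not CPAT's actual data or constraint set.

        # Toy fleet-modernization MILP in the spirit of CPAT (hypothetical data only).
        from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

        vehicles = ["v1", "v2", "v3"]
        periods = [0, 1, 2]
        gain = {"v1": 5.0, "v2": 3.0, "v3": 4.0}   # performance gain if modernized
        cost = {"v1": 2.0, "v2": 1.0, "v3": 1.5}   # one-time modernization cost
        budget = {0: 2.0, 1: 2.0, 2: 2.0}          # per-period budget cap

        prob = LpProblem("fleet_modernization", LpMaximize)
        # x[v][t] = 1 if vehicle v is modernized in period t
        x = LpVariable.dicts("x", (vehicles, periods), cat=LpBinary)

        # Objective: total performance gain, weighted so earlier upgrades count more
        prob += lpSum(gain[v] * (len(periods) - t) * x[v][t]
                      for v in vehicles for t in periods)

        # Each vehicle is modernized at most once over the horizon
        for v in vehicles:
            prob += lpSum(x[v][t] for t in periods) <= 1

        # Spending in each period must respect that period's budget
        for t in periods:
            prob += lpSum(cost[v] * x[v][t] for v in vehicles) <= budget[t]

        prob.solve()
        for v in vehicles:
            for t in periods:
                if x[v][t].value() > 0.5:   # binary variable set to 1
                    print(f"modernize {v} in period {t}")

    The real tool's many business rules (industrial base, research and testing, fleet structure) would enter a model like this the same way: as additional linear constraints over the same binary schedule variables.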

  19. Experimental Evidence on the Effects of Home Computers on Academic Achievement among Schoolchildren. National Poverty Center Working Paper Series #13-02

    ERIC Educational Resources Information Center

    Fairlie, Robert W.; Robinson, Jonathan

    2013-01-01

    Computers are an important part of modern education, yet large segments of the population--especially low-income and minority children--lack access to a computer at home. Does this impede educational achievement? We test this hypothesis by conducting the largest-ever field experiment involving the random provision of free computers for home use to…

  20. A Computer Model for Red Blood Cell Chemistry

    DTIC Science & Technology

    1996-10-01

    There is a growing need for interactive computational tools for medical education and research. The most exciting...paradigm for interactive education is simulation. Fluid Mod is a simulation-based computational tool developed in the late sixties and early seventies at...to a modern Windows, object-oriented interface. This development will provide students with a useful computational tool for learning. More important
