Sample records for computer program named

  1. Evaluation of Farm Accounting Software. Improved Decision Making.

    ERIC Educational Resources Information Center

    Lovell, Ashley C., Comp.

    This guide contains information on 36 computer programs used for farm and ranch accounting. This information and assessment of software features were provided by the manufacturers and vendors. Information is provided on the following items, among others: program name, vendor's name and address, computer and operating system, type of accounting and…

  2. Catalog of Computer Programs Used in Undergraduate Geological Education.

    ERIC Educational Resources Information Center

    Burger, H. Robert

    1983-01-01

    Provides list of mineralogy, petrology, and geochemistry computer programs. Each entry includes a brief description, program name and language, availability of program listing, and source and/or reference. (JN)

  3. Evolvix BEST Names for semantic reproducibility across code2brain interfaces

    PubMed Central

    Scheuer, Katherine S.; Keel, Seth A.; Vyas, Vaibhav; Liblit, Ben; Hanlon, Bret; Ferris, Michael C.; Yin, John; Dutra, Inês; Pietsch, Anthony; Javid, Christine G.; Moog, Cecilia L.; Meyer, Jocelyn; Dresel, Jerdon; McLoone, Brian; Loberger, Sonya; Movaghar, Arezoo; Gilchrist‐Scott, Morgaine; Sabri, Yazeed; Sescleifer, Dave; Pereda‐Zorrilla, Ivan; Zietlow, Andrew; Smith, Rodrigo; Pietenpol, Samantha; Goldfinger, Jacob; Atzen, Sarah L.; Freiberg, Erika; Waters, Noah P.; Nusbaum, Claire; Nolan, Erik; Hotz, Alyssa; Kliman, Richard M.; Mentewab, Ayalew; Fregien, Nathan; Loewe, Martha

    2016-01-01

    Names in programming are vital for understanding the meaning of code and big data. We define code2brain (C2B) interfaces as maps in compilers and brains between meaning and naming syntax, which help to understand executable code. While working toward an Evolvix syntax for general‐purpose programming that makes accurate modeling easy for biologists, we observed how names affect C2B quality. To protect learning and coding investments, C2B interfaces require long‐term backward compatibility and semantic reproducibility (accurate reproduction of computational meaning from coder‐brains to reader‐brains by code alone). Semantic reproducibility is often assumed until confusing synonyms degrade modeling in biology to deciphering exercises. We highlight empirical naming priorities from diverse individuals and roles of names in different modes of computing to show how naming easily becomes impossibly difficult. We present the Evolvix BEST (Brief, Explicit, Summarizing, Technical) Names concept for reducing naming priority conflicts, test it on a real challenge by naming subfolders for the Project Organization Stabilizing Tool system, and provide naming questionnaires designed to facilitate C2B debugging by improving names used as keywords in a stabilizing programming language. Our experiences inspired us to develop Evolvix using a flipped programming language design approach with some unexpected features and BEST Names at its core. PMID:27918836

  4. 76 FR 49753 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-11

    ... Defense. DHA 14 System name: Computer/Electronics Accommodations Program for People with Disabilities... with ``Computer/Electronic Accommodations Program.'' System location: Delete entry and replace with ``Computer/Electronic Accommodations Program, Skyline 5, Suite 302, 5111 Leesburg Pike, Falls Church, VA...

  5. Computers in Engineering Teaching.

    ERIC Educational Resources Information Center

    Rushby, N. J.

    This bibliography cites 26 books, papers, and reports dealing with various uses of computers in engineering education; and describes several computer programs available for use in teaching aeronautical, chemical, civil, electrical and electronic, mechanical, and nuclear engineering. Each computer program entry is presented by name, author,…

  6. Index to Computer Assisted Instruction.

    ERIC Educational Resources Information Center

    Lekan, Helen A., Ed.

    The computer assisted instruction (CAI) programs and projects described in this index are listed by subject matter. The index gives the program name, author, source, description, prerequisites, level of instruction, type of student, average completion time, logic and program, purpose for which program was designed, supplementary…

  7. Pseudo-random number generator for the Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
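
    The generator described above is a multiplicative linear congruential generator whose modulus is a large prime and whose multiplier is a primitive root of that prime. The Python sketch below illustrates the idea; the Sigma 5 word length and the actual constants used in S:RANDOM1 are not given in the abstract, so the classic choices m = 2^31 - 1 and a = 16807 stand in purely for illustration.

```python
# Minimal sketch of a prime-modulus multiplicative linear congruential
# generator.  The constants below (Mersenne prime 2**31 - 1, primitive root
# 16807) are illustrative stand-ins, not the Sigma 5 / S:RANDOM1 values.

class PrimeLcg:
    M = 2**31 - 1      # prime modulus: largest prime fitting a 31-bit word
    A = 16807          # a primitive root of M

    def __init__(self, seed=1):
        if not 0 < seed < self.M:
            raise ValueError("seed must lie strictly between 0 and M")
        self.state = seed

    def next_uniform(self):
        """Advance the state and return a pseudo-random number in (0, 1)."""
        self.state = (self.A * self.state) % self.M
        return self.state / self.M


gen = PrimeLcg(seed=12345)
print([round(gen.next_uniform(), 6) for _ in range(3)])
```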

  8. Evolvix BEST Names for semantic reproducibility across code2brain interfaces.

    PubMed

    Loewe, Laurence; Scheuer, Katherine S; Keel, Seth A; Vyas, Vaibhav; Liblit, Ben; Hanlon, Bret; Ferris, Michael C; Yin, John; Dutra, Inês; Pietsch, Anthony; Javid, Christine G; Moog, Cecilia L; Meyer, Jocelyn; Dresel, Jerdon; McLoone, Brian; Loberger, Sonya; Movaghar, Arezoo; Gilchrist-Scott, Morgaine; Sabri, Yazeed; Sescleifer, Dave; Pereda-Zorrilla, Ivan; Zietlow, Andrew; Smith, Rodrigo; Pietenpol, Samantha; Goldfinger, Jacob; Atzen, Sarah L; Freiberg, Erika; Waters, Noah P; Nusbaum, Claire; Nolan, Erik; Hotz, Alyssa; Kliman, Richard M; Mentewab, Ayalew; Fregien, Nathan; Loewe, Martha

    2017-01-01

    Names in programming are vital for understanding the meaning of code and big data. We define code2brain (C2B) interfaces as maps in compilers and brains between meaning and naming syntax, which help to understand executable code. While working toward an Evolvix syntax for general-purpose programming that makes accurate modeling easy for biologists, we observed how names affect C2B quality. To protect learning and coding investments, C2B interfaces require long-term backward compatibility and semantic reproducibility (accurate reproduction of computational meaning from coder-brains to reader-brains by code alone). Semantic reproducibility is often assumed until confusing synonyms degrade modeling in biology to deciphering exercises. We highlight empirical naming priorities from diverse individuals and roles of names in different modes of computing to show how naming easily becomes impossibly difficult. We present the Evolvix BEST (Brief, Explicit, Summarizing, Technical) Names concept for reducing naming priority conflicts, test it on a real challenge by naming subfolders for the Project Organization Stabilizing Tool system, and provide naming questionnaires designed to facilitate C2B debugging by improving names used as keywords in a stabilizing programming language. Our experiences inspired us to develop Evolvix using a flipped programming language design approach with some unexpected features and BEST Names at its core. © 2016 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.

  9. Index to Computer Based Learning.

    ERIC Educational Resources Information Center

    Hoye, Robert E., Ed.; Wang, Anastasia C., Ed.

    The computer-based programs and projects described in this index are listed under 98 different subject matter fields. Descriptions of programs include information on: subject field, program name and number, author, source, the program's curriculum content, prerequisites, level of instruction, type of student for which it is intended, total hours of…

  10. Developments in REDES: The rocket engine design expert system

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth O.

    1990-01-01

    The Rocket Engine Design Expert System (REDES) is being developed at the NASA-Lewis to collect, automate, and perpetuate the existing expertise of performing a comprehensive rocket engine analysis and design. Currently, REDES uses the rigorous JANNAF methodology to analyze the performance of the thrust chamber and perform computational studies of liquid rocket engine problems. The following computer codes were included in REDES: a gas properties program named GASP, a nozzle design program named RAO, a regenerative cooling channel performance evaluation code named RTE, and the JANNAF standard liquid rocket engine performance prediction code TDK (including performance evaluation modules ODE, ODK, TDE, TDK, and BLM). Computational analyses are being conducted by REDES to provide solutions to liquid rocket engine thrust chamber problems. REDES is built in the Knowledge Engineering Environment (KEE) expert system shell and runs on a Sun 4/110 computer.

  11. Developments in REDES: The Rocket Engine Design Expert System

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth O.

    1990-01-01

    The Rocket Engine Design Expert System (REDES) was developed at NASA-Lewis to collect, automate, and perpetuate the existing expertise of performing a comprehensive rocket engine analysis and design. Currently, REDES uses the rigorous JANNAF methodology to analyze the performance of the thrust chamber and perform computational studies of liquid rocket engine problems. The following computer codes were included in REDES: a gas properties program named GASP; a nozzle design program named RAO; a regenerative cooling channel performance evaluation code named RTE; and the JANNAF standard liquid rocket engine performance prediction code TDK (including performance evaluation modules ODE, ODK, TDE, TDK, and BLM). Computational analyses are being conducted by REDES to provide solutions to liquid rocket engine thrust chamber problems. REDES was built in the Knowledge Engineering Environment (KEE) expert system shell and runs on a Sun 4/110 computer.

  12. Program manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 1 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System (SEPS) computer program is considered in terms of the program manual, programmer guide, and program utilization. The main objective is to provide the information necessary to interpret and use the routines comprising the SEPS program. Subroutine descriptions including the name, purpose, method, variable definitions, and logic flow are presented.

  13. Computers in Astronomy: Astronomy on an Apple Macintosh.

    ERIC Educational Resources Information Center

    Mosley, John E.

    1987-01-01

    Presents a review of computer programs written for the Apple Macintosh computer that teach astronomy. Reviews general programs, along with some which deal more specifically with sky travel, star charting, the solar system, Halley's Comet, and stargazing. Includes the name and address of each producer. (TW)

  14. Computational Understanding: Analysis of Sentences and Context

    DTIC Science & Technology

    1974-05-01

    Computer Science Department, Stanford, California…the need for programs that can respond in useful ways to information expressed in a natural language. However, a computational understanding…buying structure because "Mary" appears where it does. But the time for analysis was rarely over five seconds of computer time, when the Lisp program

  15. Teaching Pascal's Triangle from a Computer Science Perspective

    ERIC Educational Resources Information Center

    Skurnick, Ronald

    2004-01-01

    Pascal's Triangle is named for the seventeenth-century French philosopher and mathematician Blaise Pascal (the same person for whom the computer programming language is named). Students are generally introduced to Pascal's Triangle in an algebra or precalculus class in which the Binomial Theorem is presented. This article presents a new method…
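
    For readers who want to see the object being taught, a short Python sketch follows; it builds rows of Pascal's Triangle, whose entries are the binomial coefficients of the Binomial Theorem. This is the standard construction, not the new method proposed in the article.

```python
# Each row of Pascal's Triangle is built from the previous one; row n lists
# the binomial coefficients C(n, 0) ... C(n, n) of the Binomial Theorem.

def pascal_rows(n_rows):
    row = [1]
    for _ in range(n_rows):
        yield row
        # Each interior entry is the sum of the two entries above it.
        row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]

for r in pascal_rows(6):
    print(r)
```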

  16. A Drawing and Multi-Representational Computer Environment for Beginners' Learning of Programming Using C: Design and Pilot Formative Evaluation

    ERIC Educational Resources Information Center

    Kordaki, Maria

    2010-01-01

    This paper presents both the design and the pilot formative evaluation study of a computer-based problem-solving environment (named LECGO: Learning Environment for programming using C using Geometrical Objects) for the learning of computer programming using C by beginners. In its design, constructivist and social learning theories were taken into…

  17. Confidence Region for the Evaluation of HF DF Single Site Location Systems.

    DTIC Science & Technology

    1983-09-02

    M. H. Reilly and J. Coran, Naval Research…Contents include: Determination of the Confidence Region; Computer Program for the Confidence Ellipse; Examples of Computer Program Output; Discussion; Acknowledgments

  18. Hop, Skip and Jump: Animation Software.

    ERIC Educational Resources Information Center

    Eiser, Leslie

    1986-01-01

    Discusses the features of animation software packages, reviewing eight commercially available programs. Information provided for each program includes name, publisher, current computer(s) required, cost, documentation, input device, import/export capabilities, printing possibilities, what users can originate, types of image manipulation possible,…

  19. Connectionist Models and Parallelism in High Level Vision.

    DTIC Science & Technology

    1985-01-01

    Jerome A. Feldman, Computer Science…Connectionist Models, 2.1 Background and Overview: Computer science is just beginning to look seriously at parallel computation: it may turn out that…the chair. The program includes intermediate-level networks that compute more complex joints and ones that compute parallelograms in the image. These

  20. THREED: A computer program for three dimensional transformation of coordinates. [in lunar photo triangulation mapping

    NASA Technical Reports Server (NTRS)

    Wong, K. W.

    1974-01-01

    Program THREED was developed for the purpose of a research study on the treatment of control data in lunar phototriangulation. THREED is the code name of a computer program for performing absolute orientation by the method of three-dimensional projective transformation. It has the capability of performing complete error analysis on the computed transformation parameters as well as the transformed coordinates.
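
    The abstract does not spell out THREED's projective formulation, so the sketch below only illustrates the general operation of transforming three-dimensional coordinates, using an assumed seven-parameter similarity (scale, rotation, translation) transform as a stand-in.

```python
import numpy as np

# Hypothetical illustration of a three-dimensional coordinate transformation,
# X' = s * R @ X + t (scale, rotation, translation).  This is an assumed
# stand-in, not THREED's actual projective formulation.

def transform_points(points, scale, rotation, translation):
    """Apply X' = s * R X + t to an (N, 3) array of coordinates."""
    return scale * points @ rotation.T + translation

theta = np.radians(30.0)                       # rotate 30 degrees about Z
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
pts = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 2.0]])
print(transform_points(pts, scale=1.001, rotation=R,
                       translation=np.array([10.0, -5.0, 0.3])))
```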

  1. Task-Based Assessment of Students' Computational Thinking Skills Developed through Visual Programming or Tangible Coding Environments

    ERIC Educational Resources Information Center

    Djambong, Takam; Freiman, Viktor

    2016-01-01

    While today's schools in several countries, like Canada, are about to bring back programming to their curricula, a new conceptual angle, namely one of computational thinking, draws the attention of researchers. In order to understand the articulation between computational thinking tasks on one side, students' targeted skills, and the types of problems…

  2. Computer Series, 36: Bits and Pieces, 13.

    ERIC Educational Resources Information Center

    Moore, John W.

    1983-01-01

    Eleven computer/calculator programs (most are available from authors) are described. Topics include visualizing molecular vibrations, dynamic nuclear magnetic resonance spectra of two-spin systems, programming utilities for Apple II Plus, gas chromatography simulation for TRS-80, infrared spectra analysis on a calculator, naming chemical…

  3. ERDDAP - RESTful Web Services

    Science.gov Websites

    , graphs, or information about datasets). A RESTful web service is a URL that a computer program can use to get the same information in a more computer-program-friendly format like JSON (e.g., .jsonlKVP, where column names are on every row): each column has a column name and one type of information
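
    A small Python sketch of the idea follows: the same query can be issued for a human-readable or a program-friendly response simply by changing the file-type suffix in the URL. The server name, dataset ID, and variable names are hypothetical placeholders; only the general tabledap URL pattern is assumed.

```python
# Hedged sketch: build ERDDAP-style tabledap request URLs that differ only in
# the response-format suffix.  SERVER, DATASET_ID, and the variable names are
# invented placeholders, not a real endpoint.

SERVER = "https://example.org/erddap"          # placeholder ERDDAP server
DATASET_ID = "exampleDatasetID"                # placeholder dataset ID

def tabledap_url(file_type, variables, constraint=""):
    """Return a tabledap request URL for the given response format."""
    return f"{SERVER}/tabledap/{DATASET_ID}.{file_type}?" + ",".join(variables) + constraint

wanted = ["time", "latitude", "longitude", "sea_surface_temperature"]
print(tabledap_url("htmlTable", wanted, "&time>=2024-01-01"))   # for people
print(tabledap_url("json", wanted, "&time>=2024-01-01"))        # for programs
```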

  4. Self-Administered Cued Naming Therapy: A Single-Participant Investigation of a Computer-Based Therapy Program Replicated in Four Cases

    ERIC Educational Resources Information Center

    Ramsberger, Gail; Marie, Basem

    2007-01-01

    Purpose: This study examined the benefits of a self-administered, clinician-guided, computer-based, cued naming therapy. Results of intense and nonintense treatment schedules were compared. Method: A single-participant design with multiple baselines across behaviors and varied treatment intensity for 2 trained lists was replicated over 4…

  5. Ada Compiler Validation Summary Report: Certificate Number: 901212I1. 11120 Tartan Inc., Tartan Ada VMS/960MC Version 4.0 VAXstation 3100 = Intel ICE960/25 on an VMS 5.2 Intel EXV80960MC Board

    DTIC Science & Technology

    1991-01-09

    VMS 5.2 (Target), 901212I1.11120. IABG-AVF, Ottobrunn, Federal Republic of Germany…Ada Joint Program Office…Ada implementation for which validation status is realized. Host Computer System: a computer system where Ada source programs are transformed into

  6. Intelligent Computer-Aided Instruction for Medical Diagnosis

    PubMed Central

    Clancey, William J.; Shortliffe, Edward H.; Buchanan, Bruce G.

    1979-01-01

    An intelligent computer-aided instruction (ICAI) program, named GUIDON, has been developed for teaching infectious disease diagnosis.* ICAI programs use artificial intelligence techniques for representing both subject material and teaching strategies. This paper briefly outlines the difference between traditional instructional programs and ICAI. We then illustrate how GUIDON makes contributions in areas important to medical CAI: interacting with the student in a mixed-initiative dialogue (including the problems of feedback and realism), teaching problem-solving strategies, and assembling a computer-based curriculum.

  7. Computer Program for Steady Transonic Flow over Thin Airfoils by Finite Elements

    DTIC Science & Technology

    1975-10-01

    Computer Program for Steady Transonic Flow over Thin Airfoils by Finite Elements. Huntsville Research & Engineering Center…Lockheed Missiles & Space Company, Inc., Huntsville Research & Engineering Center, Huntsville, Alabama…This report was prepared by personnel in the Computational Mechanics Section of the Lockheed Missiles & Space Company, Inc., Huntsville Research

  8. Computer Programs to Display and Modify Data in Geographic Coordinates and Methods to Transfer Positions to and from Maps, with Applications to Gravity Data Processing, Global Positioning Systems, and 30-Meter Digital Elevation Models

    USGS Publications Warehouse

    Plouff, Donald

    1998-01-01

    Computer programs were written in the Fortran language to process and display gravity data with locations expressed in geographic coordinates. The programs and associated processes have been tested for gravity data in an area of about 125,000 square kilometers in northwest Nevada, southeast Oregon, and northeast California. This report discusses the geographic aspects of data processing. Utilization of the programs begins with application of a template (printed in PostScript format) to transfer locations obtained with Global Positioning Systems to and from field maps and includes a 5-digit geographic-based map naming convention for field maps. Computer programs, with source codes that can be copied, are used to display data values (printed in PostScript format) and data coverage, insert data into files, extract data from files, shift locations, test for redundancy, and organize data by map quadrangles. It is suggested that 30-meter Digital Elevation Models needed for gravity terrain corrections and other applications should be accessed in a file search by using the USGS 7.5-minute map name as a file name, for example, file '40117_B8.DEM' contains elevation data for the map with a southeast corner at lat 40° 07' 30" N. and lon 117° 52' 30" W.
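
    The 5-digit naming convention can be sketched from the single example given ('40117_B8.DEM' for a southeast corner at 40° 07' 30" N., 117° 52' 30" W.): degrees of latitude and longitude, a letter for the 7.5-minute row north of the degree line, and a digit for the 7.5-minute column west of it. The reconstruction below is an inference from that example, not USGS documentation.

```python
# Assumed reconstruction of the 7.5-minute quadrangle file-naming scheme,
# inferred only from the example in the abstract.

def quad_file_name(se_lat_deg, se_lat_min, se_lon_deg, se_lon_min):
    """Name the DEM file from the map's southeast-corner coordinates."""
    row = int(se_lat_min / 7.5) + 1            # 7.5-minute row -> A, B, C, ...
    col = int(se_lon_min / 7.5) + 1            # 7.5-minute column -> 1 .. 8
    letter = "ABCDEFGH"[row - 1]
    return f"{se_lat_deg:02d}{se_lon_deg:03d}_{letter}{col}.DEM"

# Reproduces the example from the abstract.
print(quad_file_name(40, 7.5, 117, 52.5))      # -> 40117_B8.DEM
```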

  9. Optimal design of a combustion chamber of gas turbine engine by a Combustion chamber 1D-2D computer program

    NASA Astrophysics Data System (ADS)

    Aleksandrov, Y. B.; Mingazov, B. G.

    2017-09-01

    The paper shows a method of modeling and optimization of processes in combustion chambers of gas turbine engines using a computer program developed by a team at the Department of Jet Engines and Power Plants (DJEPP) of the Technical University named after A. N. Tupolev (KNRTU-KAI).

  10. Coal-seismic computer programs in BASIC: Part I; Store, plot, and edit array data

    USGS Publications Warehouse

    Hasbrouck, Wilfred P.

    1979-01-01

    Processing of geophysical data taken with the U.S. Geological Survey's coal-seismic system is done with a desk-top, stand-alone computer. Programs for this computer are written in an extended BASIC language specially augmented for acceptance by the Tektronix 4051 Graphic System. This report presents five computer programs used to store, plot, and edit array data for the line, cross, and triangle arrays commonly employed in our coal-seismic investigations. * Use of brand names in this report is for descriptive purposes only and does not constitute endorsement by the U.S. Geological Survey.

  11. Fluid-Structure Interaction Using Retarded Potential and ABAQUS

    DTIC Science & Technology

    1992-08-19

    A retarded potential (RP) capability has been coupled to the ABAQUS program, through the DLOAD user-written subroutine, to form ABAQUS-RP…C. T. Dyka, Geo-Centers, Inc., Fort Washington, MD 20744, and M. A. Tamm, Computer Operations and Communications Branch, Research Computation

  12. DB90: A Fortran Callable Relational Database Routine for Scientific and Engineering Computer Programs

    NASA Technical Reports Server (NTRS)

    Wrenn, Gregory A.

    2005-01-01

    This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.
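
    DB90 itself is Fortran 90/95 with C file I/O, so the Python sketch below is only an analogy for the access pattern the abstract describes: every record is addressed by a relation name plus up to five integer keys, so records can be stored and fetched in any order.

```python
# Analogy (not DB90's API): a keyed store addressed by relation name plus up
# to five integer key values.

class KeyedStore:
    MAX_KEYS = 5

    def __init__(self):
        self._tables = {}                      # relation name -> {keys: record}

    def put(self, relation, keys, record):
        if len(keys) > self.MAX_KEYS:
            raise ValueError("at most five integer keys are allowed")
        self._tables.setdefault(relation, {})[tuple(keys)] = record

    def get(self, relation, keys):
        return self._tables[relation][tuple(keys)]

db = KeyedStore()
db.put("loads", (1, 3), {"node": 3, "force": 125.0})   # invented example data
print(db.get("loads", (1, 3)))
```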

  13. Computer program for maintenance of individual animal records in a nonhuman primate colony.

    PubMed

    Kuehl, T J; Dukelow, W R

    1977-06-01

    A computer program was developed to maintain animal records for a nonhuman primate colony used in research. The program was designed for use with an existing laboratory notebook system. The computer program identifies each notebook entry containing information about each animal and keeps other information, including animal name, sex, species, projects to which the animal is assigned, location of the animal, dates and body weights. The program is interactive and easy to use. Information stored in the system is readily accessible to all investigators using the animals. In 17 months of use, 1382 master file entries were developed for 113 monkeys.

  14. BLISS: A Computer Program for the Protection of Blood Donors

    DTIC Science & Technology

    1982-06-28

    Example list output: SOCIAL SECURITY NO.: 111-11-1111, NAME: ALFRED RENTA, NO. OF DONATIONS: 4, DONATION NO.: 1, DATE: 81-13-81, METHOD OF…SOCIAL SECURITY NO.: 111-11-1111, NAME: ALFRED RENTA, DONATION DATE: 04-23-81; SOCIAL SECURITY NO.: 222-22-2222, NAME: MILO BENDER

  15. Software List.

    ERIC Educational Resources Information Center

    Computers in Chemical Education Newsletter, 1984

    1984-01-01

    Lists and briefly describes computer programs recently added to those currently available from Project SERAPHIM. Program name, subject, hardware, author, supplier, and current cost are provided in separate listings for Apple, Atari, Pet, VIC-20, TRS-80, and IBM-PC. (JN)

  16. Augmentation of Teaching Tools: Outsourcing the HSD Computing for SPSS Application

    ERIC Educational Resources Information Center

    Wang, Jianjun

    2010-01-01

    The widely-used Tukey's HSD index is not produced in the current version of SPSS (i.e., PASW Statistics, version 18), and a computer program named "HSD Calculator" has been chosen to amend this problem. In comparison to hand calculation, this program application does not require table checking, which eliminates potential concern on the size of a…
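
    For orientation, the quantity such a calculator produces is Tukey's honestly significant difference, HSD = q(alpha, k, df_error) * sqrt(MSE / n) for equal group sizes; the sketch below shows only that arithmetic, with an illustrative, hand-supplied critical value (the point of the tool above is precisely to avoid that table checking).

```python
import math

# Worked sketch of the HSD arithmetic.  q_crit would normally be looked up
# (or computed) for the chosen alpha, number of groups, and error df; all
# numbers here are illustrative placeholders.

def tukey_hsd(q_crit, ms_error, n_per_group):
    """HSD = q * sqrt(MSE / n) for equal per-group sample sizes."""
    return q_crit * math.sqrt(ms_error / n_per_group)

q_crit = 3.77                  # placeholder studentized-range critical value
print(round(tukey_hsd(q_crit, ms_error=12.4, n_per_group=15), 3))
```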

  17. Computer-assisted instruction to prevent early reading difficulties in students at risk for dyslexia: Outcomes from two instructional approaches.

    PubMed

    Torgesen, Joseph K; Wagner, Richard K; Rashotte, Carol A; Herron, Jeannine; Lindamood, Patricia

    2010-06-01

    The relative effectiveness of two computer-assisted instructional programs designed to provide instruction and practice in foundational reading skills was examined. First-grade students at risk for reading disabilities received approximately 80 h of small-group instruction in four 50-min sessions per week from October through May. Approximately half of the instruction was delivered by specially trained teachers to prepare students for their work on the computer, and half was delivered by the computer programs. At the end of first grade, there were no differences in student reading performance between students assigned to the different intervention conditions, but the combined-intervention students performed significantly better than control students who had been exposed to their school's normal reading program. Significant differences were obtained for phonemic awareness, phonemic decoding, reading accuracy, rapid automatic naming, and reading comprehension. A follow-up test at the end of second grade showed a similar pattern of differences, although only differences in phonemic awareness, phonemic decoding, and rapid naming remained statistically reliable.

  18. A computer program for calculation of doses and prices of injectable medications based on body weight or body surface area

    PubMed Central

    2004-01-01

    A computer program (CalcAnesth) was developed with Visual Basic for the purpose of calculating the doses and prices of injectable medications on the basis of body weight or body surface area. The drug names, concentrations, and prices are loaded from a drug database. This database is a simple text file that the user can easily create or modify. The animal names and body weights can be loaded from a similar database. After the dose and the units are typed into the user interface, the results are automatically displayed. The program is able to open and save anesthetic protocols, and export or print the results. This CalcAnesth program can be useful in clinical veterinary anesthesiology and research. The rationale for dosing on the basis of body surface area is also discussed in this article. PMID:14979437
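
    A minimal sketch of the calculation described above follows; the dose rates, concentration, and price are invented placeholders rather than entries from the CalcAnesth database.

```python
# Hedged sketch of weight- and surface-area-based dose and price calculation.
# All numeric values are invented placeholders.

def dose_by_weight(weight_kg, dose_mg_per_kg):
    return weight_kg * dose_mg_per_kg                   # total dose, mg

def dose_by_bsa(bsa_m2, dose_mg_per_m2):
    return bsa_m2 * dose_mg_per_m2                      # total dose, mg

def volume_and_price(dose_mg, conc_mg_per_ml, price_per_ml):
    volume_ml = dose_mg / conc_mg_per_ml
    return volume_ml, volume_ml * price_per_ml

dose = dose_by_weight(weight_kg=22.0, dose_mg_per_kg=2.2)
print(volume_and_price(dose, conc_mg_per_ml=10.0, price_per_ml=0.85))
```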

  19. Project SERAPHIM Report.

    ERIC Educational Resources Information Center

    Moore, John W.

    1983-01-01

    Lists and briefly describes computer programs recently added to those currently available from Project SERAPHIM. Program name, subject, hardware, author, supplier, and cost are provided in separate listings for Apple, PET, TRS-80 I or III, IBM, VIC-20, TERAK, and PDP-11 microcomputers. Includes corrections for two current Apple programs. (JN)

  20. Development of Alabama Resources Information System (ARIS)

    NASA Technical Reports Server (NTRS)

    Herring, B. E.; Vachon, R. I.

    1976-01-01

    A formal, organized set of information concerning the development status of the Alabama Resources Information System (ARIS) as of September 1976 is provided. A series of computer source-language programs is presented, along with flow charts for each program to make future changes easier. Listings of the variable names used in the various source-code programs, with their meanings, and copies of the user manuals prepared through this time are also given.

  1. [Hepatox: database on hepatotoxic drugs].

    PubMed

    Quinton, A; Latry, P; Biour, M

    1993-01-01

    Hepatox is a database on the hepatotoxic drugs file published every year in Gastroentérologie Clinique et Biologique. The program was developed under Omnis 7 for Apple computers, and under Visual Basic Professional Toolkit and CodeBase for IBM PC and compatible computers. The database includes records for 866 drugs identified by their approved names and for their 1,300 corresponding proprietary names in France; the drugs are distributed among 104 pharmacological classes. The record for a drug identified by its approved name can be accessed instantaneously. Accessing a drug by its proprietary name gives a list of the approved names of its components; going from a name on this list to the corresponding hepatotoxicity record is immediate. It is easy to extract lists of drugs responsible for a given type of hepatic injury, and a table of the types of hepatic injury induced by the drugs of a pharmacological class.

  2. The Role of Academic Computer Departments in the Uses of Computers in the Undergraduate Curricula at the Two-Year College Level.

    ERIC Educational Resources Information Center

    Little, Joyce Currie

    Academic computer departments, whether called by this name or by others such as the department of computer science or data programing, can be of great assistance to other departments in the two-year college. Faculty in other departments need to know about computer applications in their fields, require assistance in the development of curriculum…

  3. Acoustic environmental accuracy requirements for response determination

    NASA Technical Reports Server (NTRS)

    Pettitt, M. R.

    1983-01-01

    A general purpose computer program was developed for the prediction of vehicle interior noise. This program, named VIN, has both modal and statistical energy analysis capabilities for structural/acoustic interaction analysis. The analytic models and their computer implementation were verified through simple test cases with well-defined experimental results. The model was also applied in a space shuttle payload bay launch acoustics prediction study. The computer program processes large and small problems with equal efficiency because all arrays are dynamically sized by program input variables at run time. A data base is built and easily accessed for design studies. The data base significantly reduces the computational costs of such studies by allowing the reuse of the still-valid calculated parameters of previous iterations.

  4. Relational Programming.

    DTIC Science & Technology

    1983-09-01

    be illustrated by example. If 'z' is the name of an individual and 'C' is the name of a class (set), then 'z ∈ C' means that the individual denoted by 'z'…will abbreviate this un z. Conversely, if C is a single-element class, then un⁻¹C selects the unique member of that class: un⁻¹C = ιz(z ∈ C). It is…Professor Peter Henderson, Department of Computer Science, SUNY at Stony Brook, Long Island, NY 11794; Dr. Olle Olsson, Department of Computer Science

  5. MAGNA (Materially and Geometrically Nonlinear Analysis). Part I. Finite Element Analysis Manual.

    DTIC Science & Technology

    1982-12-01

    provided for operating the program, modifying storage capacity, preparing input data, estimating computer run times, and interpreting the output…Contents fragment: 7.1.3 Reserved File Names; 7.1.4 Typical Execution Times on CDC Computers; 7.2 CRAY Program Version (7.2.1 Job Control Language; 7.2.2 Modification of Storage Capacity; 7.2.3 Execution Times on the CRAY-1 Computer); 7.3 VAX Program Version; 8 Input Data

  6. General 3D Airborne Antenna Radiation Pattern Code Users Manual.

    DTIC Science & Technology

    1983-02-01

    General 3D Airborne Antenna Radiation Pattern Code User's Manual. Ohio State Univ., Columbus, ElectroScience Lab; H. H. Chung et al.; Feb 83; RADC…Contract F30602-79-C-0068. The Ohio State University…Computer Program…This report describes a computer program and how it may

  7. Basic EMC Technology Advancement for C(3) Systems - SHIELD. Volume IV B. A Digital Computer Program for Computing Crosstalk between Shielded Cables

    DTIC Science & Technology

    1982-11-01

    …Southeastern Center for Electrical Engineering Education…The program requires that the input data groups be organized as shown in Table 1, where the number of unshielded wires is U and the number of shielded

  8. An Air Force Guide to Computer Program Configuration Management

    DTIC Science & Technology

    1977-08-01

    Various other constraints may also prevent full completion of the Part I specification for a complex mission CPCI in all of its typically massive detail…specifications for developmental CPCIs. Relations of documentation to actual computer program modules is often such as to prevent ready identification and…names and organizational alignments of the contractor activities may vary, but the functions should be represented. The program office CCB is the

  9. Theoretical basis of the DOE-2 building energy use analysis program

    NASA Astrophysics Data System (ADS)

    Curtis, R. B.

    1981-04-01

    A user-oriented, public-domain computer program was developed that enables architects and engineers to perform design and retrofit studies of the energy use of buildings under realistic weather conditions. DOE-2.1A has been named by the US DOE as the standard evaluation technique for the Congressionally mandated building energy performance standards (BEPS). A number of program design decisions were made that determine the breadth of applicability of DOE-2.1. Such design decisions are intrinsic to all building energy use analysis computer programs and determine the types of buildings or the kinds of HVAC systems that can be modeled. In particular, the weighting factor method used in DOE-2 has both advantages and disadvantages relative to other computer programs.

  10. Program Office Guide to Ada. Edition 1

    DTIC Science & Technology

    1986-09-17

    MARK V. ZIEMBA, 2Lt, USAF, Project Officer, Software Engineering Tools & Methods; ARTHUR G. DECELLES, Capt, USAF, Program Manager, Computer…Distribution unlimited; abstract unclassified.

  11. A Model for Integrating New Technologies into Pre-Service Teacher Training Programs Ajman University (A Case Study)

    ERIC Educational Resources Information Center

    Shaqour, Ali Zuhdi H.

    2005-01-01

    This study introduces a "Technology Integration Model" for a learning environment utilizing constructivist learning principles and integrating new technologies namely computers and the Internet into pre-service teacher training programs. The technology integrated programs and learning environments may assist learners to gain experiences…

  12. Ballistic Deflection Transistors for THz Amplification

    DTIC Science & Technology

    2016-05-09

    Personnel: Grahan Jensen, Fei Song, Rabi Sherstha (Electrical and Computer Engineering)…Rabi Sherstha spent entire summers working on ARO-related projects under the supervision of Prof. Sobolewski. They were supported by the UR Undergraduate Research Discover Program.

  13. Xinyinqin: a computer-based heart sound simulator.

    PubMed

    Zhan, X X; Pei, J H; Xiao, Y H

    1995-01-01

    "Xinyinqin" is the Chinese phoneticized name of the Heart Sound Simulator (HSS). The "qin" in "Xinyinqin" is the Chinese name of a category of musical instruments, which means that the operation of HSS is very convenient--like playing an electric piano with the keys. HSS is connected to the GAME I/O of an Apple microcomputer. The generation of sound is controlled by a program. Xinyinqin is used as a teaching aid of Diagnostics. It has been applied in teaching for three years. In this demonstration we will introduce the following functions of HSS: 1) The main program has two modules. The first one is the heart auscultation training module. HSS can output a heart sound selected by the student. Another program module is used to test the student's learning condition. The computer can randomly simulate a certain heart sound and ask the student to name it. The computer gives the student's answer an assessment: "correct" or "incorrect." When the answer is incorrect, the computer will output that heart sound again for the student to listen to; this process is repeated until she correctly identifies it. 2) The program is convenient to use and easy to control. By pressing the S key, it is able to output a slow heart rate until the student can clearly identify the rhythm. The heart rate, like the actual rate of a patient, can then be restored by hitting any key. By pressing the SPACE BAR, the heart sound output can be stopped to allow the teacher to explain something to the student. The teacher can resume playing the heart sound again by hitting any key; she can also change the content of the training by hitting RETURN key. In the future, we plan to simulate more heart sounds and incorporate relevant graphs.

  14. MASTRE trajectory code update to automate flight trajectory design, performance predictions, and vehicle sizing for support of shuttle and shuttle derived vehicles: Programmers manual

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The information required by a programmer using the Minimum Hamiltonian AScent Trajectory Evaluation (MASTRE) Program is provided. This document enables the programmer either to modify the program or to convert it to computers other than the VAX. Documentation for each subroutine or function, consisting of variable definitions and a source listing, is included. Questions concerning the equations, techniques, or input requirements should be answered by either the Engineering or User's manual. Three appendices are also included, which provide a listing of the Root-Sum-Square (RSS) program, a listing of the subroutine names and definitions used in the MASTRE User Friendly Interface (UFI) Program, and a listing of the subroutine names and definitions used in the Mass Properties Program. The RSS Program is used to aid in the performance of dispersion analyses: it reads a file generated by the MASTRE Program, calculates dispersion parameters, and generates output tables and output plot files. The UFI Program provides a screen user interface to aid the user in providing input to the model. The Mass Properties Program defines the mass properties data for the MASTRE program through the use of user interface software.

  15. Mathematics Programming on the Apple II and IBM PC.

    ERIC Educational Resources Information Center

    Myers, Roy E.; Schneider, David I.

    1987-01-01

    Details the features of BASIC used in mathematics programming and provides the information needed to translate between the Apple II and IBM PC computers. Discusses inputing a user-defined function, setting scroll windows, displaying subscripts and exponents, variable names, mathematical characters and special symbols. (TW)

  16. Interactive debug program for evaluation and modification of assembly-language software

    NASA Technical Reports Server (NTRS)

    Arpasi, D. J.

    1979-01-01

    An assembly-language debug program written for the Honeywell HDC-601 and DDP-516/316 computers is described. Names and relative addressing to improve operator-machine interaction are used. Features include versatile display, on-line assembly, and improved program execution and analysis. The program is discussed from both a programmer's and an operator's standpoint. Functional diagrams are included to describe the program, and each command is illustrated.

  17. Discrete Tchebycheff orthonormal polynomials and applications

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
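
    As a rough illustration of why an orthonormal basis helps, the sketch below fits a polynomial to uniformly spaced data by projecting onto orthonormal columns (obtained here from a QR factorization of the Vandermonde matrix, standing in for the Gram/Tchebycheff polynomials); the fit then reduces to simple dot products instead of solving ill-conditioned normal equations.

```python
import numpy as np

# Sketch: least-squares polynomial fit of uniformly spaced data using an
# orthonormal polynomial basis.  QR of the Vandermonde matrix is used as a
# stand-in for the discrete Tchebycheff (Gram) polynomials; both span the
# same polynomial space, so the fitted values agree.

def orthonormal_poly_fit(y, degree):
    n = len(y)
    x = np.linspace(-1.0, 1.0, n)                  # uniformly spaced abscissas
    V = np.vander(x, degree + 1, increasing=True)  # columns 1, x, x**2, ...
    Q, _ = np.linalg.qr(V)                         # orthonormal basis columns
    coeffs = Q.T @ y                               # simple projections
    return Q @ coeffs                              # fitted values at the samples

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 51)
y = 0.5 * x**3 - x + 0.05 * rng.standard_normal(x.size)
fit = orthonormal_poly_fit(y, degree=3)
print(float(np.max(np.abs(fit - (0.5 * x**3 - x)))))   # stays near the noise level
```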

  18. Data-Dictionary-Editing Program

    NASA Technical Reports Server (NTRS)

    Cumming, A. P.

    1989-01-01

    Access to data-dictionary relations and attributes made more convenient. Data Dictionary Editor (DDE) application program provides more convenient read/write access to data-dictionary table ("descriptions table") via data screen using SMARTQUERY function keys. Provides three main advantages: (1) User works with table names and field names rather than with table numbers and field numbers, (2) Provides online access to definitions of data-dictionary keys, and (3) Provides displayed summary list that shows, for each datum, which data-dictionary entries currently exist for any specific relation or attribute. Computer program developed to give developers of data bases more convenient access to the OMNIBASE VAX/IDM data-dictionary relations and attributes.

  19. A Review of CEFA Software: Comprehensive Exploratory Factor Analysis Program

    ERIC Educational Resources Information Center

    Lee, Soon-Mook

    2010-01-01

    CEFA 3.02 (Browne, Cudeck, Tateneni, & Mels, 2008) is a factor analysis computer program designed to perform exploratory factor analysis. It provides the main properties that are needed for exploratory factor analysis, namely a variety of factoring methods employing eight different discrepancy functions to be minimized to yield initial…

  20. Building Blocks. An Annotated Bibliography for Single Parent Programming.

    ERIC Educational Resources Information Center

    Wiley-Thomas, Cheryl, Comp.; Norden, Tamara, Ed.

    This booklet lists 645 books, articles, curriculum materials, computer software, and videos that educational professionals can use to develop programs for single parents (especially teen parents). Many of the listings are annotated; all contain information on author, title, publisher name and city, and date of publication or production. The…

  1. A computer graphics display and data compression technique

    NASA Technical Reports Server (NTRS)

    Teague, M. J.; Meyer, H. G.; Levenson, L. (Editor)

    1974-01-01

    The computer program discussed is intended for the graphical presentation of a general dependent variable X that is a function of two independent variables, U and V. The required input to the program is the variation of the dependent variable with one of the independent variables for various fixed values of the other. The computer program is named CRP, and the output is provided by the SD 4060 plotter. Program CRP is an extremely flexible program that offers the user a wide variety of options. The dependent variable may be presented in either a linear or a logarithmic manner. Automatic centering of the plot is provided in the ordinate direction, and the abscissa is scaled automatically for a logarithmic plot. A description of the carpet plot technique is given along with the coordinate system used in the program. Various aspects of the program logic are discussed and detailed documentation of the data card format is presented.
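
    A hedged sketch of a carpet-style presentation follows: curves of X versus U for several fixed values of V are drawn against a staggered abscissa U + k*V so they do not overlap. The function plotted and the offset factor are invented for illustration; CRP's actual options (log scaling, automatic centering, SD 4060 output) are not reproduced.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative carpet-style plot: stagger the curve for each fixed V by
# plotting against U + k*V.  The data and offset k are invented placeholders.

U = np.linspace(0.0, 1.0, 25)
k = 1.5                                        # horizontal stagger per unit of V
for V in (0.0, 0.5, 1.0, 1.5):
    X = np.sin(2 * np.pi * U) * (1 + V)        # placeholder dependent variable
    plt.plot(U + k * V, X, label=f"V = {V}")

plt.xlabel("U + k*V (carpet abscissa)")
plt.ylabel("X")
plt.legend()
plt.show()
```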

  2. Data Mining of Network Logs

    NASA Technical Reports Server (NTRS)

    Collazo, Carlimar

    2011-01-01

    The statement of purpose is to analyze network monitoring logs to support the computer incident response team. Specifically: gain a clear understanding of the Uniform Resource Locator (URL) and its structure, and provide a way to break a URL down into protocol, host name, domain name, path, and other attributes. Finally, provide a method to perform data reduction by identifying the different types of advertisements shown on a webpage for incident data analysis. The procedure used for analysis and data reduction will be a computer program that analyzes the URL and distinguishes advertisement links from the actual content links.
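
    A small Python sketch of that URL breakdown follows, using only the standard library; the sample URL and the advertisement heuristics are made-up placeholders standing in for whatever rules the actual incident-analysis program applies.

```python
from urllib.parse import urlparse

# Sketch of a URL breakdown for log analysis.  AD_HINTS and the sample URL
# are invented placeholders, not the rules used by the actual program.

AD_HINTS = ("ads.", "doubleclick", "adserver")

def describe_url(url):
    parts = urlparse(url)
    host = parts.hostname or ""
    return {
        "protocol": parts.scheme,
        "host": host,
        "domain": ".".join(host.split(".")[-2:]) if host else "",
        "path": parts.path,
        "query": parts.query,
        "looks_like_ad": any(hint in host for hint in AD_HINTS),
    }

print(describe_url("https://ads.example.com/banner/42?campaign=x"))
```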

  3. Graphics Standards in the Computer-Aided Acquisition and Logistic Support (CALS) Program Fiscal Year 1989 Volume 2: MIL-D-28003 Revisions, CGM registration

    DTIC Science & Technology

    1990-05-01

    Naming in CALS: Based on the above rationale and trade studies, including a basic set of trademarked names in the CALS AP should be considered during a…names for font lists. Based on the various trade studies in this report, including the one on font substitution below, the following naming technique…(fragment of a character-name table: COPYRIGHT SIGN, FEMININE ORDINAL INDICATOR, LEFT ANGLE QUOTATION MARK, NOT SIGN, SOFT HYPHEN, REGISTERED TRADE MARK SIGN)

  4. A Model for High Frequency Radar Auroral Clutter

    DTIC Science & Technology

    1980-03-01

    Deputy for Electronic Technology, Hanscom AFB…region is sometimes referred to as the "polar cavity." 2. Radio Propagation Considerations: An essential element in computing the incidence and…called Haselgrove equations. This is accomplished numerically by use of a computer program originally developed by Jones and later modified

  5. Computation by Bacteria

    DTIC Science & Technology

    2011-01-03

    Grant W911NF-08-1-0044. Author: Robert H. Austin. The Trustees of Princeton University, Princeton, NJ 08544-0036.

  6. Transparency in Distributed File Systems

    DTIC Science & Technology

    1989-01-01

    Computer Science Department, University of…areas of naming, replication, consistency control, file and directory placement, and file and directory migration in a way that provides full network transparency. This transparency…

  7. Research at USAFA 2013

    DTIC Science & Technology

    2013-01-01

    …computer programs, this cadet team is developing this new type of model for a radio-controlled aircraft, which will be verified through experimental…complement of researchers available to mentor and lead cadets in their development as engineers and officers," said McLaughlin.

  8. Intelligent Computer Assisted Instruction (ICAI): Formative Evaluation of Two Systems

    DTIC Science & Technology

    1986-03-01

    appreciation .’.,-* for the power of computer technology. Interpretati on Yale students are a strikingly high performing group by traditional academic ...COMPUTER ASSISTED INSTRUCTION April 1984 - August 1985 (ICAI): FORMATIVE EVALUATION OF TWO SYSTEMS 6. PERFORMING ORG. REPORT NUMBER 7. AUTHOR(*) S...956881 9. PERFORMING ORGANIZATION NAME AND ADDRESS 10. PROGRAM ELEMENT. PROJECT. TASK AREA & WORK UNIT NUMBERS Jet Propulsion Laboratory 2Q263743A794

  9. Flight dynamics analysis and simulation of heavy lift airships, volume 4. User's guide: Appendices

    NASA Technical Reports Server (NTRS)

    Emmen, R. D.; Tischler, M. B.

    1982-01-01

    This table contains all of the input variables to the three programs. The variables are arranged according to the namelist groups in which they appear in the data files. The program name, subroutine name, definition and, where appropriate, a default input value and any restrictions are listed with each variable. The default input values are user supplied, not generated by the computer. These values remove a specific effect from the calculations, as explained in the table. The phrase "not used" indicates that a variable is not used in the calculations and is kept for identification purposes only. The engineering symbol, where it exists, is listed to assist the user in correlating these inputs with the discussion in the Technical Manual.

  10. Drawing Analogies between Logic Programming and Natural Language Argumentation Texts to Scaffold Learners' Understanding

    ERIC Educational Resources Information Center

    Ragonis, Noa; Shilo, Gila

    2014-01-01

    The paper presents a theoretical investigational study of the potential advantages that secondary school learners may gain from learning two different subjects, namely, logic programming within computer science studies and argumentation texts within linguistics studies. The study suggests drawing an analogy between the two subjects since they both…

  11. A Pointing Out and Naming Paradigm to Support Radiological Teaching and Case-Oriented Learning.

    ERIC Educational Resources Information Center

    Van Cleynenbreugel, J.; And Others

    1994-01-01

    The use of computer programs for authoring and presenting case materials in professional instruction in radiology is discussed. A workstation-based multimedia program for presenting and annotating images accompanied by both voice and text is described. Comments are also included on validity results and student response. (MSE)

  12. The numerical approach adopted in toba computer code for mass and heat transfer dynamic analysis of metal hydride hydrogen storage beds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Osery, I.A.

    1983-12-01

    Modelling studies of metal hydride hydrogen storage beds are a part of an extensive R and D program conducted in Egypt on hydrogen energy. In this context two computer programs, namely RET and RET1, have been developed. In the RET computer program, a cylindrical conduction bed model is considered and an approximate analytical solution is used for the associated mass and heat transfer problem. This problem is solved numerically in the RET1 computer program, allowing more flexibility in operating conditions but still limited to a cylindrical configuration with only two alternatives for heat exchange: either fluid passes through tubes imbedded in the solid alloy matrix, or solid rods are surrounded by annular fluid tubes. The present computer code TOBA is more flexible and realistic. It performs the mass and heat transfer dynamic analysis of metal hydride storage beds using a variety of geometrical and operating alternatives.

  13. Integrating Requirements Engineering, Modeling, and Verification Technologies into Software and Systems Engineering

    DTIC Science & Technology

    2007-10-28

    …Software Engineering, FASE'05, volume 3442 of Lecture Notes in Computer Science, pages 175-189. Springer, 2005. Andreas Bauer, Martin Leucker, and Jonathan…Personnel receiving masters degrees (4 total): Markus Strohmeier, Gerrit Hanselmann, Jonathan Streit, Ernst Sassen…developed and documented mainly within the master thesis by Jonathan Streit [Str06]: Jonathan Streit, Development of a programming language like tem

  14. The Integrated Library System Design Concepts for a Complete Serials Control Subsystem.

    DTIC Science & Technology

    1984-08-20

    AD-A149 379. The Integrated Library System: Design Concepts for a Complete Serials Control Subsystem. Online Computer Systems, Inc., Germantown, MD…Presented to: The Pentagon Library, The Pentagon, Washington, DC 20310. Prepared by: Online Computer Systems, Inc., 20251 Century Blvd…Contract MDA903-82-C-0535.

  15. Computer program to assess impact of fatigue and fracture criteria on weight and cost of transport aircraft

    NASA Technical Reports Server (NTRS)

    Tanner, C. J.; Kruse, G. S.; Oman, B. H.

    1975-01-01

    A preliminary design analysis tool for rapidly performing trade-off studies involving fatigue, fracture, static strength, weight, and cost is presented. Analysis subprograms were developed for fatigue life, crack growth life, and residual strength; and linked to a structural synthesis module which in turn was integrated into a computer program. The part definition module of a cost and weight analysis program was expanded to be compatible with the upgraded structural synthesis capability. The resultant vehicle design and evaluation program is named VDEP-2. It is an accurate and useful tool for estimating purposes at the preliminary design stage of airframe development. A sample case along with an explanation of program applications and input preparation is presented.

  16. MAP - a mapping and analysis program for harvest planning

    Treesearch

    Robert N. Eli; Chris B. LeDoux; Penn A. Peters

    1984-01-01

    The Northeastern Forest Experiment Station and the Department of Civil Engineering at West Virginia University are cooperating in the development of a Mapping and Analysis Program, to be named MAP. The goal of this computer software package is to significantly improve the planning and harvest efficiency of small to moderately sized harvest units located in mountainous...

  17. A Validated Methodology for Determination of Laboratory Instrument Computer Interface Efficacy.

    DTIC Science & Technology

    1984-12-10

    Final report for period 12/15/83 - 12/05/84. Prepared for: TRIMIS Program Office, 5401 Westbard Avenue, Bethesda, Maryland 20816. Report date: December 10, 1984.

  18. csa2sac—A program for computing discharge from continuous slope-area stage data

    USGS Publications Warehouse

    Wiele, Stephen M.

    2015-12-17

    In addition to csa2sac, the SAC7 program is required. It is the same as the original SAC program, except that it is compiled for 64-bit Windows operating systems and has a slightly different command line input. It is available online (http://water.usgs.gov/software/SAC/) as part of the SACGUI installation program. The program name, “SAC7.exe,” is coded into csa2sac, and must not be changed.

  19. Conversion of the CALAP (Computer Aided Landform Analysis Program) Program from FORTRAN to DUCK.

    DTIC Science & Technology

    1986-09-01

    Keywords: DUCK, artificial intelligence, logic programming. An expert advisor program named CALAP... The original program was developed in FORTRAN on an HP-1000 minicomputer. CALAP was reprogrammed in an Artificial Intelligence (AI) language called DUCK... the Artificial Intelligence Center, U.S. Army Engineer Topographic Laboratory, Fort Belvoir.

  20. Onward to Petaflops Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    With programs such as the US High Performance Computing and Communications Program (HPCCP), the attention of scientists and engineers worldwide has been focused on the potential of very high performance scientific computing, namely systems that are hundreds or thousands of times more powerful than those typically available in desktop systems at any given point in time. Extending the frontiers of computing in this manner has resulted in remarkable advances, both in computing technology itself and also in the various scientific and engineering disciplines that utilize these systems. Within a month or two, a sustained rate of 1 Tflop/s (also written 1 teraflops, or 10(exp 12) floating-point operations per second) is likely to be achieved by the 'ASCI Red' system at Sandia National Laboratory in New Mexico. With this objective in sight, it is reasonable to ask what lies ahead for high-end computing.

  1. Scaffold: Quantum Programming Language

    DTIC Science & Technology

    2012-07-24

    Europe, 2012. [8] B. Eastin and S. T. Flammia, “Q-circuit tutorial,” arXiv:quant-ph/0406003v2. [9] A. I. Faruque et al., “Scaffold: Quantum Programming...” Performing organization: Princeton University, Department of Computer...

  2. Computational Nanotribology of Nanometer Confined Liquid Films

    DTIC Science & Technology

    2012-02-29

    Computational Nanotribology of Nanometer Confined Liquid Films. Grant number: FA9550-08-1-0214. Authors: Yongsheng Leng and Peter T. Cummings. Sponsor/monitor: Joycelyn Harrison, AFOSR/RSA, 875 North Randolph Street.

  3. Multi-User Performance Issues in Wireless Impulse Radio Networks

    DTIC Science & Technology

    2004-01-01

    Multi-User Performance Issues in Wireless Impulse Radio Networks. Performing organization: North Carolina State University, Department of Electrical and Computer Engineering, Raleigh, NC 27695.

  4. Hybrid Memory Management for Parallel Execution of Prolog on Shared Memory Multiprocessors

    DTIC Science & Technology

    1990-06-01

    ...organizing data to increase locality. The stack structure exhibits greater locality than the heap structure. Tradeoff decisions can also be made on... Performing organization: University of California at Berkeley, Department of Electrical Engineering and Computer Sciences, Berkeley, CA 94720.

  5. Extreme Programming: Maestro Style

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2009-01-01

    "Extreme Programming: Maestro Style" is the name of a computer programming methodology that has evolved as a custom version of a methodology, called extreme programming that has been practiced in the software industry since the late 1990s. The name of this version reflects its origin in the work of the Maestro team at NASA's Jet Propulsion Laboratory that develops software for Mars exploration missions. Extreme programming is oriented toward agile development of software resting on values of simplicity, communication, testing, and aggressiveness. Extreme programming involves use of methods of rapidly building and disseminating institutional knowledge among members of a computer-programming team to give all the members a shared view that matches the view of the customers for whom the software system is to be developed. Extreme programming includes frequent planning by programmers in collaboration with customers, continually examining and rewriting code in striving for the simplest workable software designs, a system metaphor (basically, an abstraction of the system that provides easy-to-remember software-naming conventions and insight into the architecture of the system), programmers working in pairs, adherence to a set of coding standards, collaboration of customers and programmers, frequent verbal communication, frequent releases of software in small increments of development, repeated testing of the developmental software by both programmers and customers, and continuous interaction between the team and the customers. The environment in which the Maestro team works requires the team to quickly adapt to changing needs of its customers. In addition, the team cannot afford to accept unnecessary development risk. Extreme programming enables the Maestro team to remain agile and provide high-quality software and service to its customers. However, several factors in the Maestro environment have made it necessary to modify some of the conventional extreme-programming practices. The single most influential of these factors is that continuous interaction between customers and programmers is not feasible.

  6. Functional Programming in Computer Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Loren James; Davis, Marion Kei

    We explore functional programming through a 16-week internship at Los Alamos National Laboratory. Functional programming is a branch of computer science that has exploded in popularity over the past decade due to its high-level syntax, ease of parallelization, and abundant applications. First, we summarize functional programming by listing the advantages of functional programming languages over the usual imperative languages, and we introduce the concept of parsing. Second, we discuss the importance of lambda calculus in the theory of functional programming. Lambda calculus was invented by Alonzo Church in the 1930s to formalize the concept of effective computability, and every functional language is essentially some implementation of lambda calculus. Finally, we display the lasting products of the internship: additions to a compiler and runtime system for the pure functional language STG, including both a set of tests that indicate the validity of updates to the compiler and a compiler pass that checks for illegal instances of duplicate names.

  7. Laboratory manual: mineral X-ray diffraction data retrieval/plot computer program

    USGS Publications Warehouse

    Hauff, Phoebe L.; VanTrump, George

    1976-01-01

    The Mineral X-Ray Diffraction Data Retrieval/Plot Computer Program--XRDPLT (VanTrump and Hauff, 1976a) is used to retrieve and plot mineral X-ray diffraction data. The program operates on a file of mineral powder diffraction data (VanTrump and Hauff, 1976b) which contains two-theta or 'd' values, intensities, chemical formula, mineral name, identification number, and mineral group code. XRDPLT is a machine-independent Fortran program which operates in time-sharing mode on a DEC System 10 computer and the Gerber plotter (Evenden, 1974). The program prompts the user to respond from a time-sharing terminal in a conversational format with the required input information. The program offers two major options: retrieval only; retrieval and plot. The first option retrieves mineral names, formulas, and groups from the file by identification number, by the mineral group code (a classification by chemistry or structure), or by searches based on the formula components. For example, it enables the user to search for minerals by major groups (i.e., feldspars, micas, amphiboles, oxides, phosphates, carbonates), by elemental composition (i.e., Fe, Cu, Al, Zn), or by a combination of these (i.e., all copper-bearing arsenates). The second option retrieves as the first, but also plots the retrieved 2-theta and intensity values as diagrammatic X-ray powder patterns on mylar sheets or overlays. These plots can be made using scale combinations compatible with chart recorder diffractograms and 114.59 mm powder camera films. The overlays are then used to separate or sieve out unrelated minerals until unknowns are matched and identified.
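
    The retrieval options above amount to filtering the mineral file on group code and elemental composition. Purely as a rough illustration (the record layout, field names, and sample entries below are hypothetical, not XRDPLT's actual file format), such a filter might look like this:

    ```python
    def retrieve(minerals, group=None, elements=()):
        """Select mineral names by group code and/or required elements.

        minerals: iterable of dicts with 'name', 'group', and 'elements' keys
        (a hypothetical layout, not the XRDPLT file format).
        """
        hits = []
        for m in minerals:
            if group is not None and m["group"] != group:
                continue
            if not set(elements) <= set(m["elements"]):
                continue
            hits.append(m["name"])
        return hits

    # e.g. "all copper-bearing arsenates", as in the abstract's example
    sample = [
        {"name": "olivenite", "group": "arsenate", "elements": ("Cu", "As", "O", "H")},
        {"name": "malachite", "group": "carbonate", "elements": ("Cu", "C", "O", "H")},
    ]
    print(retrieve(sample, group="arsenate", elements=("Cu",)))  # ['olivenite']
    ```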

  8. Life and dynamic capacity modeling for aircraft transmissions

    NASA Technical Reports Server (NTRS)

    Savage, Michael

    1991-01-01

    A computer program to simulate the dynamic capacity and life of parallel shaft aircraft transmissions is presented. Five basic configurations can be analyzed: single mesh, compound, parallel, reverted, and single plane reductions. In execution, the program prompts the user for the data file prefix name, takes input from an ASCII file, and writes its output to a second ASCII file with the same prefix name. The input data file includes the transmission configuration, the input shaft torque and speed, and descriptions of the transmission geometry and the component gears and bearings. The program output file describes the transmission, its components, their capabilities, locations, and loads. It also lists the dynamic capability, ninety percent reliability, and mean life of each component and the transmission as a system. Here, the program, its input and output files, and the theory behind the operation of the program are described.

  9. Interfacing External Quantum Devices to a Universal Quantum Computer

    PubMed Central

    Lagana, Antonio A.; Lohe, Max A.; von Smekal, Lorenz

    2011-01-01

    We present a scheme to use external quantum devices using the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well known oracle based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and the Grover algorithms using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer. PMID:22216276
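
    As a small, self-contained illustration of the kind of oracle-based algorithm named above (a textbook NumPy simulation of Deutsch's algorithm, not the authors' universal-quantum-computer construction), the oracle enters only as a black-box unitary:

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    I = np.eye(2)

    def oracle(f):
        """Black-box unitary U_f |x>|y> = |x>|y XOR f(x)> on two qubits."""
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
        return U

    def deutsch(f):
        """Decide whether f: {0,1} -> {0,1} is constant or balanced with one oracle call."""
        state = np.kron([1, 0], [0, 1]).astype(float)  # |0>|1>
        state = np.kron(H, H) @ state                  # Hadamard on both qubits
        state = oracle(f) @ state                      # single oracle query
        state = np.kron(H, I) @ state                  # Hadamard on the query qubit
        p0 = state[0] ** 2 + state[1] ** 2             # probability of measuring 0 on qubit 1
        return "constant" if p0 > 0.5 else "balanced"

    print(deutsch(lambda x: 0))  # constant
    print(deutsch(lambda x: x))  # balanced
    ```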

  10. Interfacing external quantum devices to a universal quantum computer.

    PubMed

    Lagana, Antonio A; Lohe, Max A; von Smekal, Lorenz

    2011-01-01

    We present a scheme to use external quantum devices using the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well known oracle based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and the Grover algorithms using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer. © 2011 Lagana et al.

  11. A computer system for the storage and retrieval of gravity data, Kingdom of Saudi Arabia

    USGS Publications Warehouse

    Godson, Richard H.; Andreasen, Gordon H.

    1974-01-01

    A computer system has been developed for the systematic storage and retrieval of gravity data. All pertinent facts relating to gravity station measurements and computed Bouguer values may be retrieved either by project name or by geographical coordinates. Features of the system include visual display in the form of printer listings of gravity data and printer plots of station locations. The retrieved data format interfaces with the format of GEOPAC, a system of computer programs designed for the analysis of geophysical data.

  12. Selected Streamflow Statistics for Streamgaging Stations in Delaware, 2003

    USGS Publications Warehouse

    Ries, Kernell G.

    2004-01-01

    Flow-duration and low-flow frequency statistics were calculated for 15 streamgaging stations in Delaware, in cooperation with the Delaware Geological Survey. The flow-duration statistics include the 1-, 2-, 5-, 10-, 20-, 30-, 40-, 50-, 60-, 70-, 80-, 90-, 95-, 98-, and 99-percent duration discharges. The low-flow frequency statistics include the average discharges for 1, 7, 14, 30, 60, 90, and 120 days that recur, on average, once in 1.01, 2, 5, 10, 20, 50, and 100 years. The statistics were computed using U.S. Geological Survey computer programs that can be downloaded from the World Wide Web at no cost. The computer programs automate standard U.S. Geological Survey methods for computing the statistics. Documentation is provided at the Web sites for the individual programs. The computed statistics are presented in tabular format on a separate page for each station, along with the station name, station number, the location, the period of record, and remarks.
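
    As a minimal sketch of what the flow-duration statistics mean (this is not the USGS software; it only restates the definition that the p-percent duration discharge is the discharge equaled or exceeded p percent of the time, and shows one n-day low-flow mean over a supplied record):

    ```python
    import numpy as np

    def flow_duration(daily_q, exceedance_percents=(1, 10, 50, 90, 99)):
        """Discharge equaled or exceeded the given percent of the time.

        daily_q: 1-D array of daily mean discharges for the period of record.
        Returns a dict mapping exceedance percent -> discharge.
        """
        q = np.asarray(daily_q, dtype=float)
        # The p-percent duration discharge is the (100 - p)th percentile of the record.
        return {p: np.percentile(q, 100 - p) for p in exceedance_percents}

    def min_nday_mean(daily_q, n_days=7):
        """Minimum n-day mean discharge over the supplied record (e.g. one climatic year)."""
        q = np.asarray(daily_q, dtype=float)
        rolling = np.convolve(q, np.ones(n_days) / n_days, mode="valid")
        return rolling.min()
    ```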

  13. Study of the modifications needed for efficient operation of NASTRAN on the Control Data Corporation STAR-100 computer

    NASA Technical Reports Server (NTRS)

    1975-01-01

    NASA structural analysis (NASTRAN) computer program is operational on three series of third generation computers. The problem and difficulties involved in adapting NASTRAN to a fourth generation computer, namely, the Control Data STAR-100, are discussed. The salient features which distinguish Control Data STAR-100 from third generation computers are hardware vector processing capability and virtual memory. A feasible method is presented for transferring NASTRAN to Control Data STAR-100 system while retaining much of the machine-independent code. Basic matrix operations are noted for optimization for vector processing.

  14. A Computational Model of Public Support for Insurgency and Terrorism: A Prototype for More-General Social-Science Modeling

    DTIC Science & Technology

    2013-01-01

    Performing organization: RAND. ...prototype model illustrating concretely a new approach. The prototype model itself should be seen not as a definitive end point, but rather as a...

  15. How to manage continuing education and retraining programs on optical physics and laser technology at a university: Moscow State experience

    NASA Astrophysics Data System (ADS)

    Zadkov, Victor N.; Koroteev, Nikolai I.

    1995-10-01

    The experience of managing the continuing education and retraining programs at the International Laser Center (ILC) of Moscow State University is discussed. The programs offered cover a wide range of areas, namely laser physics and technology, laser biophysics and biomedicine, laser chemistry, and computers in laser physics. The attendees, who are presumably scientists, engineers, technical managers, and graduate students, can join these programs through the annual ILC term (6 months), individual training and research programs (up to a year), the annual ILC Laser Graduate School, graduate study, and a post-docs program, all of which are reviewed in the paper. A curriculum that includes basic and specialized courses is described in detail. A brief description of the ILC Laser Teaching and Computer Labs that support all the educational courses is given as well.

  16. The Naval Postgraduate School SECURE ARCHIVAL STORAGE SYSTEM. Part II. Segment and Process Management Implementation.

    DTIC Science & Technology

    1981-03-01

    Research Instructor of Computer Science. Reviewed by: ... Released by: William M. Tolles, Dean of Research. Authors: Lyle A. Cox, Roger R. Schell, and Sonja L. Perdue. Keywords: computer networks, operating systems, computer security.

  17. Measurement of Loneliness Among Clients Representing Four Stages of Cancer: An Exploratory Study.

    DTIC Science & Technology

    1985-03-01

    ...status, and membership in organizations for each client were entered into an SPSS program on a mainframe computer. The means and a one-way analysis of... Author: Suanne Smith.

  18. A Research Program in Computer Technology

    DTIC Science & Technology

    1976-07-01

    Program verification: [Shaw76b] Shaw, M., W. A. Wulf, and R. L. London, Abstraction and Verification in Alphard: Iteration and Generators... millisecond frame of speech: pitch, gain, and 10 k-parameters (often called reflection coefficients). The 12 parameters from each frame are encoded into... del Rey, CA 90291. Program Code 3D30 & 3P10. Controlling office: Defense Advanced Research Projects Agency, July 1976.

  19. Statistical Modeling of Bivariate Data.

    DTIC Science & Technology

    1982-08-01

    Report type: Technical. Author: Terry Joe Woodfield. Grant DAAG29-80-C-0070. ...programming has caused it to be a widely practiced form of program construction. The idea behind this approach is to carefully organize a program so that it flows smoothly from one computation to the next without haphazard placement of loops and branches. There are a variety of ways to organize a program...

  20. Loci-STREAM Version 0.9

    NASA Technical Reports Server (NTRS)

    Wright, Jeffrey; Thakur, Siddharth

    2006-01-01

    Loci-STREAM is an evolving computational fluid dynamics (CFD) software tool for simulating possibly chemically reacting, possibly unsteady flows in diverse settings, including rocket engines, turbomachines, oil refineries, etc. Loci-STREAM implements a pressure- based flow-solving algorithm that utilizes unstructured grids. (The benefit of low memory usage by pressure-based algorithms is well recognized by experts in the field.) The algorithm is robust for flows at all speeds from zero to hypersonic. The flexibility of arbitrary polyhedral grids enables accurate, efficient simulation of flows in complex geometries, including those of plume-impingement problems. The present version - Loci-STREAM version 0.9 - includes an interface with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library for access to enhanced linear-equation-solving programs therein that accelerate convergence toward a solution. The name "Loci" reflects the creation of this software within the Loci computational framework, which was developed at Mississippi State University for the primary purpose of simplifying the writing of complex multidisciplinary application programs to run in distributed-memory computing environments including clusters of personal computers. Loci has been designed to relieve application programmers of the details of programming for distributed-memory computers.

  1. Fourth Year Status Report. Computerized Training Systems Project. Project ABACUS.

    DTIC Science & Technology

    1976-08-01

    Performing organization: US Army Training Support Center. This report covers events that have transpired during the fourth year of Project ABACUS, the Army's program for the development of a prototype Computerized Training System. It includes a...

  2. Ada (Trade Name) Compiler Validation Summary Report: Alsys Inc., AlsyCOMP 003, V3.1, Wang PC 280.

    DTIC Science & Technology

    1988-06-04

    Ada Compiler Validation Capability: a set of programs that evaluates the conformity of a compiler to the Ada language specification, ANSI/MIL-STD-18... Ada is a registered trademark of the United States Government (Ada Joint Program Office). Performing organization: The National Computing Centre Limited, Manchester, UK.

  3. The Early Years of Molecular Dynamics and Computers at UCRL, LRL, LLL, and LLNL

    NASA Astrophysics Data System (ADS)

    Mansigh Karlsen, Mary Ann

    I'm the young woman in the picture shown in Fig. 12.1 that appeared with the invitation to the Symposium to celebrate Berni Alder's ninetieth birthday. I worked with Berni for over 25 years on the computer programs that provided the data he needed to write the fifteen papers published in scientific journals on Studies in Molecular Dynamics. My name appears at the end of each one thanking me for computer support. It has been interesting to look on the Internet to find my name in the middle of many foreign languages, including Japanese characters and Russian Cyrillic script. It shows how Berni's work has been of interest to many scientists all over the world from the earliest years. Figure 12.1 was also included with articles written when he received the National Medal of Science from President Obama in 2009…

  4. Current Lewis Turbomachinery Research: Building on our Legacy of Excellence

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A.

    1997-01-01

    This Wu Chang-Hua lecture is concerned with the development of analysis and computational capability for turbomachinery flows which is based on detailed flow field physics. A brief review of the work of Professor Wu is presented as well as a summary of the current NASA aeropropulsion programs. Two major areas of research are described in order to determine our predictive capabilities using modern day computational tools evolved from the work of Professor Wu. In one of these areas, namely transonic rotor flow, it is demonstrated that a high level of accuracy is obtainable provided sufficient geometric detail is simulated. In the second case, namely turbine heat transfer, our capability is lacking for rotating blade rows and experimental correlations will provide needed information in the near term. It is believed that continuing progress will allow us to realize the full computational potential and its impact on design time and cost.

  5. WALLY 1 ...A large, principal components regression program with varimax rotation of the factor weight matrix

    Treesearch

    James R. Wallis

    1965-01-01

    Written in Fortran IV and MAP, this computer program can handle up to 120 variables, and retain 40 principal components. It can perform simultaneous regression of up to 40 criterion variables upon the varimax rotated factor weight matrix. The columns and rows of all output matrices are labeled by six-character alphanumeric names. Data input can be from punch cards or...
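
    WALLY 1 itself is written in Fortran IV and MAP; purely to illustrate the two central operations the abstract names (principal-component loadings followed by varimax rotation of the factor weight matrix), here is a compact NumPy sketch, not the original program:

    ```python
    import numpy as np

    def principal_component_loadings(X, k):
        """Loadings (variables x k) of the first k principal components of X."""
        eigval, eigvec = np.linalg.eigh(np.corrcoef(X, rowvar=False))
        order = np.argsort(eigval)[::-1][:k]           # largest eigenvalues first
        return eigvec[:, order] * np.sqrt(eigval[order])

    def varimax(L, tol=1e-6, max_iter=100):
        """Varimax rotation of a loading (factor weight) matrix L (variables x factors)."""
        p, k = L.shape
        R = np.eye(k)
        var_old = 0.0
        for _ in range(max_iter):
            Lr = L @ R
            u, s, vt = np.linalg.svd(
                L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
            R = u @ vt
            if s.sum() - var_old < tol:
                break
            var_old = s.sum()
        return L @ R

    # Example: 120 observations of 6 variables, keep 3 components, rotate the loadings.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 6))
    rotated = varimax(principal_component_loadings(X, 3))
    ```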

  6. Automated Diversity in Computer Systems

    DTIC Science & Technology

    2005-09-01

    ...traces that started with trace heads, namely backwards-taken branches. These branches are indicative of loops within the program, and Dynamo assumes that... would be the ones the program would normally take. Therefore, when a trace head became hot (was visited enough times), only a single code trace would... all encountered trace heads. When an interesting instruction is being emulated, the tracing code checks to see if it has been encountered before...

  7. SOFTWARE DESIGN FOR REAL-TIME SYSTEMS.

    DTIC Science & Technology

    Real-time computer systems and real-time computations are defined for the purposes of this report. The design of software for real-time systems is discussed, employing the concept that all real-time systems belong to one of two types. The types are classified according to the type of control program used, namely: Pre-assigned Iterative Cycle and Real-time Queueing. The two types of real-time systems are described in general, with supplemental...
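
    The report's two control-program types are only named in this record; as a rough sketch of the first one, a pre-assigned iterative cycle is essentially a fixed-length frame loop that runs a fixed task list every frame (the task list and frame length below are hypothetical):

    ```python
    import time

    def read_sensors():
        """Placeholder task; a real system would sample its inputs here."""
        pass

    TASKS = [read_sensors]   # pre-assigned task list, run in this order every frame
    FRAME_SECONDS = 0.02     # fixed minor-cycle length (hypothetical value)

    def iterative_cycle(n_frames):
        """Pre-assigned iterative cycle: run every task once per fixed-length frame."""
        next_deadline = time.monotonic()
        for _ in range(n_frames):
            for task in TASKS:
                task()
            next_deadline += FRAME_SECONDS
            delay = next_deadline - time.monotonic()
            if delay > 0:
                time.sleep(delay)  # idle until the next frame boundary

    iterative_cycle(5)
    ```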

  8. The BBN (Bolt Beranek and Newman) Knowledge Acquisition Project. Phase 1. Functional Description; Test Plan.

    DTIC Science & Technology

    1987-05-01

    ...Computers." Symbolics, Inc. 8. Carnegie Group, Inc., KnowledgeCraft, Carnegie Group, Inc., 1985. 9. Moser, Margaret, An Overview of NIKL, Section of BBN... Performing organization: BBN Laboratories Inc., 10 Moulton St., Cambridge, MA 02238. Keywords: knowledge representation, expert systems, strategic computing.

  9. A description of the index of active Florida water data collection stations and a user's guide for station or site information retrieval using computer program Findex H578

    USGS Publications Warehouse

    Merritt, M.L.

    1977-01-01

    A computerized index of water-data collection activities and retrieval software to generate publication list of this information was developed for Florida. This system serves a vital need in the administration of the many and diverse water-data collection activities. Previously, needed data was very difficult to assemble for use in program planning or project implementation. Largely descriptive, the report tells how a file of computer card images has been established which contains entries for all sites in Florida at which there is currently a water-data-collection activity. Entries include information such as identification number, station name, location, type of site, county, information about data collection, funding, and other pertinent details. The computer program FINDEX selectively retrieves entries and lists them in a format suitable for publication. Updating the index is done routinely. (Woodard-USGS)

  10. Long wavelength propagation capacity, version 1.1 (computer diskette)

    NASA Astrophysics Data System (ADS)

    1994-05-01

    File Characteristics: software and data file (72 files); ASCII character set. Physical Description: 2 computer diskettes; 3 1/2 in.; high density; 1.44 MB. System Requirements: PC compatible; Digital Equipment Corp. VMS; PKZIP (included on diskette). This report describes a revision of the Naval Command, Control and Ocean Surveillance Center RDT&E Division's Long Wavelength Propagation Capability (LWPC). The first version of this capability was a collection of separate FORTRAN programs linked together in operation by a command procedure written in an operating system unique to the Digital Equipment Corporation (Ferguson & Snyder, 1989a, b). A FORTRAN computer program named Long Wavelength Propagation Model (LWPM) was developed to replace the VMS control system (Ferguson & Snyder, 1990; Ferguson, 1990). This was designated version 1 (LWPC-1). This program implemented all the features of the original VMS version plus a number of auxiliary programs that provided summaries of the files and graphical displays of the output files. This report describes a revision of the LWPC, designated version 1.1 (LWPC-1.1).

  11. A flexible tool for diagnosing water, energy, and entropy budgets in climate models

    NASA Astrophysics Data System (ADS)

    Lembo, Valerio; Lucarini, Valerio

    2017-04-01

    We have developed a new, flexible software package for studying the global energy budget, the hydrological cycle, and the material entropy production of global climate models. The program receives as input radiative, latent, and sensible energy fluxes, with the requirement that the variable names are in agreement with the Climate and Forecast (CF) conventions for the production of NetCDF datasets. Annual mean maps, meridional sections, and time series are computed by means of the Climate Data Operators (CDO) collection of command-line operators developed at the Max Planck Institute for Meteorology (MPI-M). If a land-sea mask is provided, the program also computes the required quantities separately over the continents and the oceans. Depending on the user's choice, the program also calls the MATLAB software to compute meridional heat transports and the locations and intensities of their peaks in the two hemispheres. We are currently planning to adapt the program so that it can be included in the Earth System Model eValuation Tool (ESMValTool) community diagnostics.
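
    The three kinds of output named above map onto standard CDO operators; the sketch below (hypothetical file names, CDO invoked through subprocess rather than through the tool's own workflow) is only meant to show the idea:

    ```python
    import subprocess

    def cdo(operator, infile, outfile):
        """Run a single Climate Data Operators (CDO) command on a NetCDF file."""
        subprocess.run(["cdo", operator, infile, outfile], check=True)

    # Hypothetical input file holding a CF-compliant surface flux field.
    flux = "net_surface_flux.nc"

    cdo("yearmean", flux, "flux_annual_mean.nc")   # annual mean maps
    cdo("zonmean",  flux, "flux_zonal_mean.nc")    # meridional (zonal-mean) sections
    cdo("fldmean",  flux, "flux_timeseries.nc")    # area-mean time series
    ```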

  12. On Undecidability Aspects of Resilient Computations and Implications to Exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S

    2014-01-01

    Future Exascale computing systems with a large number of processors, memory elements and interconnection links, are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in the presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that the membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.
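
    The flavor of such a reduction can be stated in one line (an illustrative sketch in our own notation, not the paper's exact construction): from a Turing machine M and input w, build a computation C_{M,w} that simulates M on w and, on any fault from the class F, restarts from its last checkpoint. Then

    ```latex
    C_{M,w}\ \text{terminates correctly under the fault class}\ \mathcal{F}
    \;\Longleftrightarrow\; M\ \text{halts on}\ w ,
    ```

    so any decider for resilient termination would also decide the halting problem.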

  13. Dockres: a computer program that analyzes the output of virtual screening of small molecules

    PubMed Central

    2010-01-01

    Background: This paper describes a computer program named Dockres that is designed to analyze and summarize results of virtual screening of small molecules. The program is supplemented with utilities that support the screening process. Foremost among these utilities are scripts that run the virtual screening of a chemical library on a large number of processors in parallel. Methods: Dockres and some of its supporting utilities are written in Fortran-77; other utilities are written as C-shell scripts. They support the parallel execution of the screening. The current implementation of the program handles virtual screening with Autodock-3 and Autodock-4, but can be extended to work with the output of other programs. Results: Analysis of virtual screening by Dockres led to both active and selective lead compounds. Conclusions: Analysis of virtual screening was facilitated and enhanced by Dockres in both the authors' laboratories as well as in laboratories elsewhere. PMID:20205801

  14. 3D Printing of Protein Models in an Undergraduate Laboratory: Leucine Zippers

    ERIC Educational Resources Information Center

    Meyer, Scott C.

    2015-01-01

    An upper-division undergraduate laboratory experiment is described that explores the structure/function relationship of protein domains, namely leucine zippers, through a molecular graphics computer program and physical models fabricated by 3D printing. By generating solvent accessible surfaces and color-coding hydrophobic, basic, and acidic amino…

  15. Software Engineering Basics: A Primer for the Project Manager.

    DTIC Science & Technology

    1982-06-01

    ...computer software [45, 46]. It is named after Ada Augusta, who is generally credited as having been the first programmer as an assistant to Charles Babbage, and is called, appropriately enough, ADA. The development of one common programming language for tactical software clearly has the potential for...

  16. NESTOR: A Computer-Based Medical Diagnostic Aid That Integrates Causal and Probabilistic Knowledge.

    DTIC Science & Technology

    1984-11-01

    ...individual conditional probabilities between one cause node and its effect node, but less common to know a joint conditional probability between a... Author: Gregory F. Cooper. Contract: ONR N00014-81-K-0004. Performing organization: Department of Computer Science, Stanford University, Stanford, CA 94305 USA.

  17. Overtone Vibrations of OH Groups in Fused Silica Optical Fibers.

    DTIC Science & Technology

    1981-09-01

    Performing organization: Chemistry Department, Howard University, Washington, D.C. ...The fundamental and overtone contours were decomposed into Gaussian components using a D. Post 310 analog computer. ...A difference AOD - k AOH (here k = 3.981), computed from a linear combination of the deuterated (AOD) and undeuterated (AOH) absorbance spectra, was required (Fig. ...).

  18. Computer code for charge-exchange plasma propagation

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Kaufman, H. R.

    1981-01-01

    The propagation of the charge-exchange plasma from an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, are described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.

  19. Database Design for Personnel Management in Republic of Korea Army.

    DTIC Science & Technology

    1984-06-01

    ...model for performing personnel management in the ROK Army. After being designed, the computer programs should be fully tested. The author's recommendations... Author: Kwang Soo Baek II. ...these requirements complicate the already difficult task of providing safe and efficient access to computerized data. The designer should select an...

  20. ESP (External-Stores Program) - A Pilot Computer Program for Determining Flutter-Critical External-Store Configurations. Volume 1. User’s Manual,

    DTIC Science & Technology

    1985-02-01

    Figure A-1: Typical control-card sequence (GET, NAST484/UN=SYSTEM. BEGIN, NAST464. PFL, 160000. REDUCE(-). LINK1, ...). ...initiated via the LINK1 statement, in which the second term is the input data file. The permanent file name KMDM, shown in conjunction with local file...

  1. Distributed Object Oriented Programming

    DTIC Science & Technology

    1990-02-01

    ...of the object oriented model of computation. Therefore, object oriented programming can provide the programmer with good conceptual tools to divide his... (LABOR SALES-COMMISSION). The symbol + refers to the addition function and takes any number of numeric arguments. The third subtype of list forms is the...

  2. The IAEA neutron coincidence counting (INCC) and the DEMING least-squares fitting programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krick, M.S.; Harker, W.C.; Rinard, P.M.

    1998-12-01

    Two computer programs are described: (1) the INCC (IAEA or International Neutron Coincidence Counting) program and (2) the DEMING curve-fitting program. The INCC program is an IAEA version of the Los Alamos NCC (Neutron Coincidence Counting) code. The DEMING program is an upgrade of earlier Windows® and DOS codes with the same name. The versions described are INCC 3.00 and DEMING 1.11. The INCC and DEMING codes provide inspectors with the software support needed to perform calibration and verification measurements with all of the neutron coincidence counting systems used in IAEA inspections for the nondestructive assay of plutonium and uranium.

  3. Bacteria as computers making computers

    PubMed Central

    Danchin, Antoine

    2009-01-01

    Various efforts to integrate biological knowledge into networks of interactions have produced a lively microbial systems biology. Putting molecular biology and computer sciences in perspective, we review another trend in systems biology, in which recursivity and information replace the usual concepts of differential equations, feedback and feedforward loops and the like. Noting that the processes of gene expression separate the genome from the cell machinery, we analyse the role of the separation between machine and program in computers. However, computers do not make computers. For cells to make cells requires a specific organization of the genetic program, which we investigate using available knowledge. Microbial genomes are organized into a paleome (the name emphasizes the role of the corresponding functions from the time of the origin of life), comprising a constructor and a replicator, and a cenome (emphasizing community-relevant genes), made up of genes that permit life in a particular context. The cell duplication process supposes rejuvenation of the machine and replication of the program. The paleome also possesses genes that enable information to accumulate in a ratchet-like process down the generations. The systems biology must include the dynamics of information creation in its future developments. PMID:19016882

  4. Bacteria as computers making computers.

    PubMed

    Danchin, Antoine

    2009-01-01

    Various efforts to integrate biological knowledge into networks of interactions have produced a lively microbial systems biology. Putting molecular biology and computer sciences in perspective, we review another trend in systems biology, in which recursivity and information replace the usual concepts of differential equations, feedback and feedforward loops and the like. Noting that the processes of gene expression separate the genome from the cell machinery, we analyse the role of the separation between machine and program in computers. However, computers do not make computers. For cells to make cells requires a specific organization of the genetic program, which we investigate using available knowledge. Microbial genomes are organized into a paleome (the name emphasizes the role of the corresponding functions from the time of the origin of life), comprising a constructor and a replicator, and a cenome (emphasizing community-relevant genes), made up of genes that permit life in a particular context. The cell duplication process supposes rejuvenation of the machine and replication of the program. The paleome also possesses genes that enable information to accumulate in a ratchet-like process down the generations. The systems biology must include the dynamics of information creation in its future developments.

  5. Nonlinear Wave Simulation on the Xeon Phi Knights Landing Processor

    NASA Astrophysics Data System (ADS)

    Hristov, Ivan; Goranov, Goran; Hristova, Radoslava

    2018-02-01

    We consider a standing-wave simulation that is interesting from a computational point of view, obtained by solving coupled 2D perturbed Sine-Gordon equations. We develop an OpenMP implementation that exploits both thread and SIMD levels of parallelism. We test the OpenMP program on two energy-equivalent Intel architectures: 2× Xeon E5-2695 v2 processors (code-named "Ivy Bridge-EP") in the HybriLIT cluster, and a Xeon Phi 7250 processor (code-named "Knights Landing", KNL). The results show two times better performance on the KNL processor.
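
    The OpenMP code is not reproduced in this record; as a rough serial illustration of the kind of finite-difference update being parallelized, here is a NumPy leapfrog step for a single, unperturbed 2D sine-Gordon field (the coupling and perturbation terms of the actual model, and all OpenMP details, are omitted; grid and time-step values are illustrative only):

    ```python
    import numpy as np

    def laplacian(u, h):
        """Five-point Laplacian on the interior of a 2-D grid with spacing h."""
        return (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
                - 4.0 * u[1:-1, 1:-1]) / h**2

    def step(u, u_prev, dt, h):
        """One explicit leapfrog step of u_tt = u_xx + u_yy - sin(u) (interior only)."""
        u_next = u.copy()
        u_next[1:-1, 1:-1] = (2.0 * u[1:-1, 1:-1] - u_prev[1:-1, 1:-1]
                              + dt**2 * (laplacian(u, h) - np.sin(u[1:-1, 1:-1])))
        return u_next

    # Tiny driver: radially symmetric initial pulse, fixed boundaries.
    n, h, dt = 128, 0.1, 0.02
    x = np.linspace(-n * h / 2, n * h / 2, n)
    u0 = 4.0 * np.arctan(np.exp(-np.add.outer(x**2, x**2) ** 0.5))
    u_prev, u = u0.copy(), u0.copy()
    for _ in range(100):
        u, u_prev = step(u, u_prev, dt, h), u
    ```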

  6. Modeling of Cross-Plane Interface Thermal Conductance Between Graphene Nano-Ribbons (Postprint)

    DTIC Science & Technology

    2014-09-19

    References include: ...Mater. Interfaces 5 2599–603; [39] Hu M and Poulikakos D 2013 Int. J. Heat Mass Transfer 62 205–13; [40] Plimpton S 1995 J. Comput. Phys. 117 1–19; [41]... Program element number: 61102F; project number: 2305; work unit: X091.

  7. Graphics processing unit based computation for NDE applications

    NASA Astrophysics Data System (ADS)

    Nahas, C. A.; Rajagopal, Prabhu; Balasubramaniam, Krishnan; Krishnamurthy, C. V.

    2012-05-01

    Advances in parallel processing in recent years are helping to improve the cost of numerical simulation. Breakthroughs in Graphical Processing Unit (GPU) based computation now offer the prospect of further drastic improvements. The introduction of 'compute unified device architecture' (CUDA) by NVIDIA (the global technology company based in Santa Clara, California, USA) has made programming GPUs for general purpose computing accessible to the average programmer. Here we use CUDA to develop parallel finite difference schemes as applicable to two problems of interest to the NDE community, namely heat diffusion and elastic wave propagation. The implementations are for two dimensions. Performance improvement of the GPU implementation against a serial CPU implementation is then discussed.
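
    For the first of the two problems, the per-point update that gets mapped onto GPU threads is the standard explicit (forward-time, centred-space) stencil; a serial NumPy sketch of one such step, not the authors' CUDA kernels, looks like this:

    ```python
    import numpy as np

    def ftcs_step(u, alpha, dt, h):
        """One forward-time, centred-space step of u_t = alpha * (u_xx + u_yy).

        Explicit scheme; stable only when dt <= h**2 / (4 * alpha).
        Boundary values of u are left unchanged (fixed-temperature boundaries).
        """
        u_new = u.copy()
        u_new[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * dt / h**2 * (
            u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
            - 4.0 * u[1:-1, 1:-1])
        return u_new

    # Hot spot in the middle of a cold plate (illustrative values only).
    u = np.zeros((256, 256))
    u[128, 128] = 100.0
    for _ in range(500):
        u = ftcs_step(u, alpha=1.0, dt=2e-5, h=0.01)
    ```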

  8. CAL--ERDA program manual. [Building Design Language; LOADS, SYSTEMS, PLANT, ECONOMICS, REPORT, EXECUTIVE, CAL-ERDA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunn, B. D.; Diamond, S. C.; Bennett, G. A.

    1977-10-01

    A set of computer programs, called Cal-ERDA, is described that is capable of rapid and detailed analysis of energy consumption in buildings. A new user-oriented input language, named the Building Design Language (BDL), has been written to allow simplified manipulation of the many variables used to describe a building and its operation. This manual provides the user with information necessary to understand in detail the Cal-ERDA set of computer programs. The new computer programs described include: an EXECUTIVE Processor to create computer system control commands; a BDL Processor to analyze input instructions, execute computer system control commands, perform assignments and data retrieval, and control the operation of the LOADS, SYSTEMS, PLANT, ECONOMICS, and REPORT programs; a LOADS analysis program that calculates peak (design) zone and hourly loads and the effect of the ambient weather conditions, the internal occupancy, lighting, and equipment within the building, as well as variations in the size, location, orientation, construction, walls, roofs, floors, fenestrations, attachments (awnings, balconies), and shape of a building; a Heating, Ventilating, and Air-Conditioning (HVAC) SYSTEMS analysis program capable of modeling the operation of HVAC components including fans, coils, economizers, humidifiers, etc., in 16 standard configurations and operated according to various temperature and humidity control schedules. A PLANT equipment program models the operation of boilers, chillers, electrical generation equipment (diesel or turbines), heat storage apparatus (chilled or heated water), and solar heating and/or cooling systems. An ECONOMIC analysis program calculates life-cycle costs. A REPORT program produces tables of user-selected variables and arranges them according to user-specified formats. A set of WEATHER ANALYSIS programs manipulates, summarizes, and plots weather data. Libraries of weather data, schedule data, and building data were prepared.

  9. 77 FR 1728 - Privacy Act of 1974; Publication of Five New Systems of Records; Amendments to Five Existing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-11

    ... assistance to correspondents; to use Web site based programs; to provide usage statistics associated with the... of individuals for surveys. Among other things, maintaining the names, addresses, etc. of individuals... information in the system. Safeguards: Access by authorized personnel only. Computer security safeguards are...

  10. Virtual Frog Dissection Kit Version 2.2

    Science.gov Websites

    Virtual Frog Dissection Kit. This award-winning interactive program is part of the "Whole Frog" project. You can interactively dissect a (digitized) frog named Fluffy, and play the Virtual Frog... The site also lists animals other than the frog that have a computer-graphics based virtual dissection page. We get frequent...

  11. A Congeries of Numerical Models used at the BRL

    DTIC Science & Technology

    1980-09-01

    ...3701 and 4773. 1. TASK AREA/DISCIPLINE: Exterior ballistics. 2. MODEL NAME: Six degree of freedom trajectory model. REFERENCE(S): BRL Report... UCRL-51179, June 1972, "KDFOC: A Computer Program to Calculate Fallout from Underground and Land Surface Nuclear Explosions," (U) J. B. Knox, et...

  12. User’s guide to SNAP for ArcGIS® :ArcGIS interface for scheduling and network analysis program

    Treesearch

    Woodam Chung; Dennis Dykstra; Fred Bower; Stephen O’Brien; Richard Abt; John. and Sessions

    2012-01-01

    This document introduces a software package named SNAP for ArcGIS®, which has been developed to streamline scheduling and transportation planning for timber harvest areas. Using modern optimization techniques, it can be used to spatially schedule timber harvest with consideration of harvesting costs, multiple products, alternative...

  13. The extinct animal show: the paleoimagery tradition and computer generated imagery in factual television programs.

    PubMed

    Campbell, Vincent

    2009-03-01

    Extinct animals have always been popular subjects for the media, in both fiction and factual output. In recent years, a distinctive new type of factual television program has emerged in which computer generated imagery is used extensively to bring extinct animals back to life. Such has been the commercial audience success of these programs that they have generated some public and academic debates about their relative status as science, documentary, and entertainment, as well as about their reflection of trends in factual television production, and the aesthetic tensions in the application of new media technologies. Such discussions ignore a crucial contextual feature of computer generated extinct animal programs, namely the established tradition of paleoimagery. This paper examines a selection of extinct animal shows in terms of the dominant frames of the paleoimagery genre. The paper suggests that such an examination has two consequences. First, it allows for a more context-sensitive evaluation of extinct animal programs, acknowledging rather than ignoring relevant representational traditions. Second, it allows for an appraisal and evaluation of public and critical reception of extinct animal programs above and beyond the traditional debates about tensions between science, documentary, entertainment, and public understanding.

  14. Revised description of index of Florida water data collection active stations and a user's guide for station or site information retrieval computer program FINDEX H578

    USGS Publications Warehouse

    Geiger, Linda H.

    1983-01-01

    The report is an update of U.S. Geological Survey Open-File Report 77-703, which described a retrieval program for an administrative index of active data-collection sites in Florida. Extensive changes to the Findex system have been made since 1977, making the previous report obsolete. A description of the data base and the computer programs that are available in the Findex system is documented in this report. This system serves a vital need in the administration of the many and diverse water-data collection activities. District offices with extensive data-collection activities will benefit from the documentation of the system. Largely descriptive, the report tells how a file of computer card images has been established which contains entries for all sites in Florida at which there is currently a water-data collection activity. Entries include information such as identification number, station name, location, type of site, county, frequency of data collection, funding, and other pertinent details. The computer program FINDEX selectively retrieves entries and lists them in a format suitable for publication. The index is updated routinely. (USGS)

  15. High Temperature Composite Analyzer (HITCAN) demonstration manual, version 1.0

    NASA Technical Reports Server (NTRS)

    Singhal, S. N; Lackney, J. J.; Murthy, P. L. N.

    1993-01-01

    This manual comprises a variety of demonstration cases for the HITCAN (HIgh Temperature Composite ANalyzer) code. HITCAN is a general purpose computer program for predicting nonlinear global structural and local stress-strain response of arbitrarily oriented, multilayered high temperature metal matrix composite structures. HITCAN is written in FORTRAN 77 computer language and has been configured and executed on the NASA Lewis Research Center CRAY XMP and YMP computers. Detailed description of all program variables and terms used in this manual may be found in the User's Manual. The demonstration includes various cases to illustrate the features and analysis capabilities of the HITCAN computer code. These cases include: (1) static analysis, (2) nonlinear quasi-static (incremental) analysis, (3) modal analysis, (4) buckling analysis, (5) fiber degradation effects, (6) fabrication-induced stresses for a variety of structures; namely, beam, plate, ring, shell, and built-up structures. A brief discussion of each demonstration case with the associated input data file is provided. Sample results taken from the actual computer output are also included.

  16. PLASIM: A computer code for simulating charge exchange plasma propagation

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Deininger, W. D.; Winder, D. R.; Kaufman, H. R.

    1982-01-01

    The propagation of the charge exchange plasma for an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.

  17. A generalized global alignment algorithm.

    PubMed

    Huang, Xiaoqiu; Chao, Kun-Mao

    2003-01-22

    Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.
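
    GAP3's generalized model and linear-space algorithm are not reproduced here; for orientation, the underlying dynamic-programming idea in its simplest, quadratic-space textbook form (standard global alignment scoring, not GAP3's recurrence) is:

    ```python
    def global_align_score(a, b, match=1, mismatch=-1, gap=-2):
        """Score of an optimal global alignment of strings a and b (Needleman-Wunsch)."""
        n, m = len(a), len(b)
        # dp[i][j] = best score for aligning a[:i] with b[:j]
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * gap
        for j in range(1, m + 1):
            dp[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub = match if a[i - 1] == b[j - 1] else mismatch
                dp[i][j] = max(dp[i - 1][j - 1] + sub,  # substitution
                               dp[i - 1][j] + gap,      # gap in b
                               dp[i][j - 1] + gap)      # gap in a
        return dp[n][m]

    print(global_align_score("GATTACA", "GCATGCA"))
    ```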

  18. Marshal Wrubel and the Electronic Computer as an Astronomical Instrument

    NASA Astrophysics Data System (ADS)

    Mutschlecner, J. P.; Olsen, K. H.

    1998-05-01

    In 1960, Marshal H. Wrubel, professor of astrophysics at Indiana University, published an influential review paper under the title, "The Electronic Computer as an Astronomical Instrument." This essay pointed out the enormous potential of the electronic computer as an instrument of observational and theoretical research in astronomy, illustrated programming concepts, and made specific recommendations for the increased use of computers in astronomy. He noted that, with a few scattered exceptions, computer use by the astronomical community had heretofore been "timid and sporadic." This situation was to improve dramatically in the next few years. By the late 1950s, general-purpose, high-speed, "mainframe" computers were just emerging from the experimental, developmental stage, but few were affordable by or available to academic and research institutions not closely associated with large industrial or national defense programs. Yet by 1960 Wrubel had spent a decade actively pioneering and promoting the imaginative application of electronic computation within the astronomical community. Astronomy upper-level undergraduate and graduate students at Indiana were introduced to computing, and Ph.D. candidates who he supervised applied computer techniques to problems in theoretical astrophysics. He wrote an early textbook on programming, taught programming classes, and helped establish and direct the Research Computing Center at Indiana, later named the Wrubel Computing Center in his honor. He and his students created a variety of algorithms and subroutines and exchanged these throughout the astronomical community by distributing the Astronomical Computation News Letter. Nationally as well as internationally, Wrubel actively cooperated with other groups interested in computing applications for theoretical astrophysics, often through his position as secretary of the IAU commission on Stellar Constitution.

  19. TWINTN4: A program for transonic four-wall interference assessment in two-dimensional wind tunnels

    NASA Technical Reports Server (NTRS)

    Kemp, W. B., Jr.

    1984-01-01

    A method for assessing the wall interference in transonic two-dimensional wind tunnel tests including the effects of the tunnel sidewall boundary layer was developed and implemented in a computer program named TWINTN4. The method involves three successive solutions of the transonic small disturbance potential equation to define the wind tunnel flow, the equivalent free air flow around the model, and the perturbation attributable to the model. Required input includes pressure distributions on the model and along the top and bottom tunnel walls which are used as boundary conditions for the wind tunnel flow. The wall-induced perturbation field is determined as the difference between the perturbation in the tunnel flow solution and the perturbation attributable to the model. The methodology used in the program is described and detailed descriptions of the computer program input and output are presented. Input and output for a sample case are given.
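
    In symbols, the wall-induced perturbation described above is just the difference of the two computed perturbation potentials (the notation is ours, not the report's):

    ```latex
    \varphi_{\text{wall-induced}} \;=\; \varphi_{\text{tunnel}} \;-\; \varphi_{\text{model}} .
    ```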

  20. Parallel computation and the Basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1992-12-16

    A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communication costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.
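
    PROTOPAR itself is built on Basis and PVM; purely as a language-neutral illustration of the master-and-slaves, domain-decomposition paradigm described above (not PROTOPAR's API), a minimal Python sketch with the standard multiprocessing module looks like this:

    ```python
    from multiprocessing import Pool
    import numpy as np

    def work_on_subdomain(subdomain):
        """Stand-in for a science package operating on one piece of the decomposed domain."""
        return subdomain.sum()  # e.g. a partial sum or integral over the subdomain

    if __name__ == "__main__":
        domain = np.arange(1_000_000, dtype=float)
        subdomains = np.array_split(domain, 8)   # master partitions the work
        with Pool(processes=8) as workers:       # workers ("slaves") run in parallel
            partial_results = workers.map(work_on_subdomain, subdomains)
        total = sum(partial_results)             # master combines the results
        print(total)
    ```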

  1. Parallel computation and the basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1993-05-01

    A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communications costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.

  2. A computer program for borehole compensation of dual-detector density well logs

    USGS Publications Warehouse

    Scott, James Henry

    1978-01-01

    The computer program described in this report was developed for applying a borehole-rugosity and mudcake compensation algorithm to dual-density logs using the following information: the water level in the drill hole, hole diameter (from a caliper log if available, or the nominal drill diameter if not), and the two gamma-ray count rate logs from the near and far detectors of the density probe. The equations that represent the compensation algorithm and the calibration of the two detectors (for converting count rate to density) were derived specifically for a probe manufactured by Comprobe Inc. (5.4 cm O.D. dual-density-caliper); they are not applicable to other probes. However, equivalent calibration and compensation equations can be empirically determined for any other similar two-detector density probes and substituted in the computer program listed in this report. * Use of brand names in this report does not necessarily constitute endorsement by the U.S. Geological Survey.

  3. Do Global Cities Enable Global Views? Using Twitter to Quantify the Level of Geographical Awareness of U.S. Cities.

    PubMed

    Han, Su Yeon; Tsou, Ming-Hsiang; Clarke, Keith C

    2015-01-01

    Dynamic social media content, such as Twitter messages, can be used to examine individuals' beliefs and perceptions. By analyzing Twitter messages, this study examines how Twitter users exchanged and recognized toponyms (city names) for different cities in the United States. The frequency and variety of city names found in their online conversations were used to identify the unique spatiotemporal patterns of "geographical awareness" for Twitter users. A new analytic method, Knowledge Discovery in Cyberspace for Geographical Awareness (KDCGA), is introduced to help identify the dynamic spatiotemporal patterns of geographic awareness among social media conversations. Twitter data were collected across 50 U.S. cities. Thousands of city names around the world were extracted from a large volume of Twitter messages (over 5 million tweets) by using the Twitter Application Programming Interface (APIs) and Python language computer programs. The percentages of distant city names (cities located in distant states or other countries far away from the locations of Twitter users) were used to estimate the level of global geographical awareness for Twitter users in each U.S. city. A Global awareness index (GAI) was developed to quantify the level of geographical awareness of Twitter users from within the same city. Our findings are that: (1) the level of geographical awareness varies depending on when and where Twitter messages are posted, yet Twitter users from big cities are more aware of the names of international cities or distant US cities than users from mid-size cities; (2) Twitter users have an increased awareness of other city names far away from their home city during holiday seasons; and (3) Twitter users are more aware of nearby city names than distant city names, and more aware of big city names rather than small city names.
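
    The abstract does not give the exact formula for the Global Awareness Index, but the quantity it is built on, the percentage of recognized city-name mentions that refer to distant places, can be sketched as follows. In this hedged Python illustration the 800 km distance threshold, the toy gazetteer, and the function names are assumptions made for the example.

      # Illustrative sketch only: the abstract does not give the exact GAI formula,
      # so this computes the simpler quantity it describes, the percentage of
      # city-name mentions that refer to distant places. The 800 km threshold and
      # the toy gazetteer are assumptions for the example.
      from math import radians, sin, cos, asin, sqrt

      def haversine_km(lat1, lon1, lat2, lon2):
          """Great-circle distance in kilometres."""
          lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
          a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
          return 2 * 6371.0 * asin(sqrt(a))

      GAZETTEER = {  # hypothetical coordinates of recognized city names
          "san diego": (32.72, -117.16),
          "los angeles": (34.05, -118.24),
          "new york": (40.71, -74.01),
          "london": (51.51, -0.13),
      }

      def distant_mention_fraction(mentions, home_latlon, threshold_km=800.0):
          """Fraction of recognized city-name mentions farther than threshold_km from home."""
          located = [GAZETTEER[m.lower()] for m in mentions if m.lower() in GAZETTEER]
          if not located:
              return 0.0
          distant = sum(haversine_km(*home_latlon, *coords) > threshold_km for coords in located)
          return distant / len(located)

      # Example: tweets collected in San Diego mentioning four cities.
      print(distant_mention_fraction(["Los Angeles", "New York", "London", "San Diego"],
                                     home_latlon=(32.72, -117.16)))  # 0.5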

  4. Galileo Teacher Training Program - MoonDays

    NASA Astrophysics Data System (ADS)

    Heenatigala, T.; Doran, R.

    2012-09-01

    The Moon is an excellent tool for classroom education. Many teachers nonetheless struggle to implement lunar science in the classroom at several levels - lack of guidance, difficulty finding the right materials, and fitting lessons into the school curriculum, to name a few. To address this need, the Galileo Teacher Training Program (GTTP) [1] presents MoonDays, a resource guide for teachers globally that can be used both in and out of the classroom. GTTP MoonDays includes scientific knowledge, hands-on activities, computing skills, creativity, and disability-based lesson plans.

  5. Layered recognition networks that pre-process, classify, and describe

    NASA Technical Reports Server (NTRS)

    Uhr, L.

    1971-01-01

    A brief overview is presented of six types of pattern recognition programs that: (1) preprocess, then characterize; (2) preprocess and characterize together; (3) preprocess and characterize into a recognition cone; (4) describe as well as name; (5) compose interrelated descriptions; and (6) converse. A computer program (of types 3 through 6) is presented that transforms and characterizes the input scene through the successive layers of a recognition cone, and then engages in a stylized conversation to describe the scene.

  6. Beam Generated Vorticity and Convective Channel Mixing.

    DTIC Science & Technology

    1980-09-17

    Performing organization: Laboratory for Computational Physics, Naval Research Laboratory. [The remainder of this record consists of illegible scanned-form and equation fragments.]

  7. Crystallographic and general use programs for the XDS Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Snyder, R. L.

    1973-01-01

    Programs in basic FORTRAN 4 are described, which fall into three categories: (1) interactive programs to be executed under time sharing (BTM); (2) noninteractive programs which are executed in batch processing mode (BPM); and (3) large noninteractive programs which require more memory than is available in the normal BPM/BTM operating system and must be run overnight on a special system called XRAY which releases about 45,000 words of memory to the user. Programs in categories (1) and (2) are stored as FORTRAN source files in the account FSNYDER. Programs in category (3) are stored in the XRAY system as load modules. The type of file in account FSNYDER is identified by the first two letters in the name.

  8. Digital computer program for nuclear reactor design water properties (LWBR Development Program)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynn, L.L.

    1967-07-01

    An edit program MO899 for the tabulation of thermodynamic and transport properties of liquid and vapor water, frequently used in design calculations for pressurized water nuclear reactors, is described. The data tabulated are obtained from a FORTRAN IV subroutine named HOH. Values of enthalpy, specific volume, viscosity, and thermal conductivity are given for the following ranges: pressure from one bar (14.5 psia) to 175 bars (2538 psia) and temperature from as much as 320 deg C (608 deg F) below saturation up to 500 deg C (932 deg F) above saturation. (NSA 21: 38472)

  9. EEOC sues freight company for inducing worker with HIV to quit. Equal Employment Opportunity Commission.

    PubMed

    1998-04-17

    The Equal Employment Opportunity Commission (EEOC) sued a Japanese freight company, Nippon Express USA, for putting [name removed] into a do-nothing job and forcing him to quit. [Name removed] revealed his HIV status to a supervisor in 1990 while working as a trainer and customer troubleshooter. When his management changed shortly afterwards, he was transferred from Chicago to Indianapolis, presumably for a promotion. However, his responsibilities were reduced, his telephone and computer were taken away from him, and co-workers were ordered not to speak to him. Following surgery, a supervisor told [name removed] that his appearance was causing problems. The lawsuit was filed under the Americans with Disabilities Act (ADA) and seeks unspecified damages, training programs for managers and staff in dealing with HIV-positive employees, and guaranteed health care coverage for the rest of [name removed]'s life.

  10. Toward high-efficiency and detailed Monte Carlo simulation study of the granular flow spallation target

    NASA Astrophysics Data System (ADS)

    Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei

    2018-02-01

    The dense granular flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this kind of target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform the simulation study of the beam-target interaction. Owing to the complexities of the target geometry, MC simulation of particle tracks is computationally expensive. Thus, improved computational efficiency will be essential for detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high-efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.

  11. Computer program for calculating full potential transonic, quasi-three-dimensional flow through a rotating turbomachinery blade row

    NASA Technical Reports Server (NTRS)

    Farrell, C. A.

    1982-01-01

    A fast, reliable computer code is described for calculating the flow field about a cascade of arbitrary two-dimensional airfoils. The method approximates the three-dimensional flow in a turbomachinery blade row by correcting for stream tube convergence and radius change in the throughflow direction. A fully conservative solution of the full potential equation is combined with the finite volume technique on a body-fitted periodic mesh, with an artificial density imposed in the transonic region to ensure stability and the capture of shock waves. The instructions required to set up and use the code are included. The name of the code is QSONIC. A numerical example is also given to illustrate the output of the program.

  12. The 'Biologically-Inspired Computing' Column

    NASA Technical Reports Server (NTRS)

    Hinchey, Mike

    2006-01-01

    The field of Biology changed dramatically in 1953, with the determination by Francis Crick and James Dewey Watson of the double helix structure of DNA. This discovery changed Biology for ever, allowing the sequencing of the human genome, and the emergence of a "new Biology" focused on DNA, genes, proteins, data, and search. Computational Biology and Bioinformatics heavily rely on computing to facilitate research into life and development. Simultaneously, an understanding of the biology of living organisms indicates a parallel with computing systems: molecules in living cells interact, grow, and transform according to the "program" dictated by DNA. Moreover, paradigms of Computing are emerging based on modelling and developing computer-based systems exploiting ideas that are observed in nature. This includes building into computer systems self-management and self-governance mechanisms that are inspired by the human body's autonomic nervous system, modelling evolutionary systems analogous to colonies of ants or other insects, and developing highly-efficient and highly-complex distributed systems from large numbers of (often quite simple) largely homogeneous components to reflect the behaviour of flocks of birds, swarms of bees, herds of animals, or schools of fish. This new field of "Biologically-Inspired Computing", often known in other incarnations by other names, such as: Autonomic Computing, Pervasive Computing, Organic Computing, Biomimetics, and Artificial Life, amongst others, is poised at the intersection of Computer Science, Engineering, Mathematics, and the Life Sciences. Successes have been reported in the fields of drug discovery, data communications, computer animation, control and command, exploration systems for space, undersea, and harsh environments, to name but a few, and augur much promise for future progress.

  13. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    PubMed

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.

  14. Stochastic Process Creation

    NASA Astrophysics Data System (ADS)

    Esparza, Javier

    In many areas of computer science entities can “reproduce”, “replicate”, or “create new instances”. Paramount examples are threads in multithreaded programs, processes in operating systems, and computer viruses, but many others exist: procedure calls create new incarnations of the callees, web crawlers discover new pages to be explored (and so “create” new tasks), divide-and-conquer procedures split a problem into subproblems, and leaves of tree-based data structures become internal nodes with children. For lack of a better name, I use the generic term systems with process creation to refer to all these entities.

  15. Computer program for the calculation of grain size statistics by the method of moments

    USGS Publications Warehouse

    Sawyer, Michael B.

    1977-01-01

    A computer program is presented for a Hewlett-Packard Model 9830A desk-top calculator (1) which calculates statistics using weight or point count data from a grain-size analysis. The program uses the method of moments in contrast to the more commonly used but less inclusive graphic method of Folk and Ward (1957). The merits of the program are: (1) it is rapid; (2) it can accept data in either grouped or ungrouped format; (3) it allows direct comparison with grain-size data in the literature that have been calculated by the method of moments; (4) it utilizes all of the original data rather than percentiles from the cumulative curve as in the approximation technique used by the graphic method; (5) it is written in the computer language BASIC, which is easily modified and adapted to a wide variety of computers; and (6) when used in the HP-9830A, it does not require punching of data cards. The method of moments should be used only if the entire sample has been measured and the worker defines the measured grain-size range. (1) Use of brand names in this paper does not imply endorsement of these products by the U.S. Geological Survey.
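
    As a rough illustration of the method of moments for grouped grain-size data, the statistics can be computed directly from class midpoints in phi units and their weight or count frequencies. The Python fragment below is a re-implementation sketch of the general technique, not the original HP-9830A BASIC program.

      # A re-implementation sketch of the method of moments for grouped grain-size
      # data (not the original HP-9830A BASIC program). Classes are given by their
      # midpoints in phi units together with weight (or count) frequencies.
      def moment_statistics(midpoints_phi, weights):
          """Return mean, standard deviation (sorting), skewness, and kurtosis."""
          total = sum(weights)
          f = [w / total for w in weights]                       # fractional frequencies
          mean = sum(fi * m for fi, m in zip(f, midpoints_phi))  # first moment
          m2 = sum(fi * (m - mean) ** 2 for fi, m in zip(f, midpoints_phi))
          m3 = sum(fi * (m - mean) ** 3 for fi, m in zip(f, midpoints_phi))
          m4 = sum(fi * (m - mean) ** 4 for fi, m in zip(f, midpoints_phi))
          sd = m2 ** 0.5
          skewness = m3 / sd ** 3
          kurtosis = m4 / sd ** 4
          return mean, sd, skewness, kurtosis

      # Example: half-phi classes with weight percentages from a sieve analysis.
      mids = [0.25, 0.75, 1.25, 1.75, 2.25, 2.75]
      wts = [2.0, 10.0, 30.0, 35.0, 18.0, 5.0]
      print(moment_statistics(mids, wts))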

  16. Improve Problem Solving Skills through Adapting Programming Tools

    NASA Technical Reports Server (NTRS)

    Shaykhian, Linda H.; Shaykhian, Gholam Ali

    2007-01-01

    There are numerous ways for engineers and students to become better problem-solvers. The use of command line and visual programming tools can help to model a problem and formulate a solution through visualization. The analysis of problem attributes and constraints provides insight into the scope and complexity of the problem. The visualization aspect of the problem-solving approach tends to make students and engineers more systematic in their thought process and helps them catch errors before proceeding too far in the wrong direction. The problem-solver identifies and defines important terms, variables, rules, and procedures required for solving a problem. Every step required to construct the problem solution can be defined in program commands that produce intermediate output. This paper advocates improving problem-solving skills through the use of a programming tool. MatLab, created by MathWorks, is an interactive numerical computing environment and programming language. It is a matrix-based system that easily lends itself to matrix manipulation and plotting of functions and data. MatLab can be used as an interactive command line or as a sequence of commands that can be saved in a file as a script or named functions. Prior programming experience is not required to use MatLab commands. GNU Octave, part of the GNU Project, is a free computer program for performing numerical computations that is comparable to MatLab. MatLab visual and command programming are presented here.

  17. SSL - THE SIMPLE SOCKETS LIBRARY

    NASA Technical Reports Server (NTRS)

    Campbell, C. E.

    1994-01-01

    The Simple Sockets Library (SSL) allows C programmers to develop systems of cooperating programs using Berkeley streaming Sockets running under the TCP/IP protocol over Ethernet. The SSL provides a simple way to move information between programs running on the same or different machines and does so with little overhead. The SSL can create three types of Sockets: namely a server, a client, and an accept Socket. The SSL's Sockets are designed to be used in a fashion reminiscent of the use of FILE pointers so that a C programmer who is familiar with reading and writing files will immediately feel comfortable with reading and writing with Sockets. The SSL consists of three parts: the library, PortMaster, and utilities. The user of the SSL accesses it by linking programs to the SSL library. The PortMaster initializes connections between clients and servers. The PortMaster also supports a "firewall" facility to keep out socket requests from unapproved machines. The "firewall" is a file which contains Internet addresses for all approved machines. There are three utilities provided with the SSL. SKTDBG can be used to debug programs that make use of the SSL. SPMTABLE lists the servers and port numbers on requested machine(s). SRMSRVR tells the PortMaster to forcibly remove a server name from its list. The package also includes two example programs: multiskt.c, which makes multiple accepts on one server, and sktpoll.c, which repeatedly attempts to connect a client to some server at one second intervals. SSL is a machine independent library written in the C-language for computers connected via Ethernet using the TCP/IP protocol. It has been successfully compiled and implemented on a variety of platforms, including Sun series computers running SunOS, DEC VAX series computers running VMS, SGI computers running IRIX, DECstations running ULTRIX, DEC alpha AXPs running OSF/1, IBM RS/6000 computers running AIX, IBM PC and compatibles running BSD/386 UNIX and HP Apollo 3000/4000/9000/400T computers running HP-UX. SSL requires 45K of RAM to run under SunOS and 80K of RAM to run under VMS. For use on IBM PC series computers and compatibles running DOS, SSL requires Microsoft C 6.0 and the Wollongong TCP/IP package. Source code for sample programs and debugging tools are provided. The documentation is available on the distribution medium in TeX and PostScript formats. The standard distribution medium for SSL is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 5.25 inch 360K MS-DOS format diskette. The SSL was developed in 1992 and was updated in 1993.
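
    The Simple Sockets Library itself is a C library, and its exact calls are not reproduced in this record; the following Python sketch only illustrates the underlying server/accept/client pattern over TCP/IP streaming sockets that the SSL wraps. The host, port, and message contents are arbitrary choices for the example (the real SSL resolves connections through its PortMaster rather than a fixed port).

      # Not the Simple Sockets Library API (which is a C library); just a minimal
      # Python sketch of the server / accept / client pattern over TCP/IP
      # streaming sockets that the SSL abstract describes.
      import socket
      import threading

      HOST, PORT = "127.0.0.1", 5050      # hypothetical address; the real SSL uses a PortMaster
      ready = threading.Event()

      def server():
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
              srv.bind((HOST, PORT))
              srv.listen()
              ready.set()                 # tell the client the server is accepting
              conn, _addr = srv.accept()  # the "accept" socket serves one client
              with conn:
                  data = conn.recv(1024)  # read, much like reading a FILE
                  conn.sendall(b"echo: " + data)

      def client():
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
              cli.connect((HOST, PORT))
              cli.sendall(b"hello")
              print(cli.recv(1024).decode())  # prints "echo: hello"

      t = threading.Thread(target=server)
      t.start()
      ready.wait()
      client()
      t.join()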

  18. Tilt changes of short duration

    USGS Publications Warehouse

    McHugh, Stuart

    1976-01-01

    Section I of this report contains a classification scheme for short period tilt data. For convenience, all fluctuations in the local tilt field of less than 24 hours duration will be designated SP (i.e., short period) tilt events. Three basic categories of waveshape appearance are defined, and the rules for naming the waveforms are outlined. Examples from tilt observations at four central California sites are provided. Section II contains some coseismic tilt data. Fourteen earthquakes in central California, ranging in magnitude from 2.9 to 5.2, were chosen for study on four tiltmeters within 10 source dimensions of the epicenters. The raw records from each of the four tiltmeters at the times of the earthquakes were photographed and are presented in this section. Section III contains documentation of computer programs used in the analysis of the short period tilt data. Program VECTOR computes the difference vector of a tilt event and displays the sequence of events as a head-to-tail vector plot. Program ONSTSP 1) requires two component digitized tilt data as input, 2) scales and plots the data, and 3) computes and displays the amplitude, azimuth, and normalized derivative of the tilt amplitude. Program SHARPS computes the onset sharpness, (i.e., the normalized derivative of the tilt amplitude at the onset of the tilt event) as a function of source-station distance from a model of creep-related tilt changes. Program DSPLAY plots the digitized data.
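
    The difference-vector and head-to-tail bookkeeping attributed to Program VECTOR (and the amplitude and azimuth output of ONSTSP) can be sketched briefly. The Python fragment below is a re-sketch under assumed conventions, not the original USGS code; the example event data and the azimuth convention are arbitrary.

      # Sketch (not the original Program VECTOR): compute the difference vector of
      # each tilt event from two-component (x, y) tilt readings and chain the events
      # head-to-tail, as the report describes.
      from math import hypot, degrees, atan2

      def event_vector(x_series, y_series):
          """Difference vector of one tilt event: end value minus start value."""
          dx = x_series[-1] - x_series[0]
          dy = y_series[-1] - y_series[0]
          return dx, dy

      def head_to_tail(vectors):
          """Cumulative coordinates for a head-to-tail plot of successive events."""
          pts, x, y = [(0.0, 0.0)], 0.0, 0.0
          for dx, dy in vectors:
              x, y = x + dx, y + dy
              pts.append((x, y))
          return pts

      # Example: three hypothetical events (microradians).
      events = [([0.0, 0.3, 0.5], [0.0, -0.1, -0.2]),
                ([0.5, 0.4], [-0.2, 0.1]),
                ([0.4, 0.9], [0.1, 0.2])]
      vecs = [event_vector(xs, ys) for xs, ys in events]
      for dx, dy in vecs:
          # Azimuth measured clockwise from the +y axis, an assumed convention.
          print(f"amplitude={hypot(dx, dy):.2f}  azimuth={degrees(atan2(dx, dy)):.1f} deg")
      print(head_to_tail(vecs))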

  19. Update of Aircraft Profile Data for the Integrated Noise Model Computer Program. Volume 3. Appendix B: Aircraft Performance Coefficients

    DTIC Science & Technology

    1992-03-01

    The appendix tabulates aircraft performance coefficients. The legible fragments of this record identify entries such as the LEAR 36/TFE731-2 (A/C number 054, type JGA, 2 engines, rated power 3500 lb), the ASTRA 1125/TFE731-3A (A/C number 062, type JGA, 2 engines, rated power 3700 lb), and the CIT 3/TFE731-3-100S (A/C number 095); the coefficient values themselves are not legible in this extract.

  20. Advances in Integrating Autonomy with Acoustic Communications for Intelligent Networks of Marine Robots

    DTIC Science & Technology

    2013-02-01

    The legible fragments of this record are system-diagram residue identifying the vehicles and links involved: the OEX (Ocean Explorer, an NURC AUV), Hammerhead, Iver2, and the Bluefin 21 AUVs Unicorn and Macrura, each carrying a MOOS computer; a topside MOOS computer; GPS; a 5.0 GHz WiLan wifi link; and acoustic links via Micro-Modem and Edgetech hardware.

  1. Exploring the Relationship between Purpose of Computer Usage and Reading Skills of Turkish Students: Evidence from PISA 2006

    ERIC Educational Resources Information Center

    Gumus, Sedat; Atalmis, Erkan Hasan

    2011-01-01

    The Organization for Economic Co-Operation and Development (OECD) has conducted a series of educational assessments in many OECD and non-OECD countries to support their sustainable economic growth since 2000. These assessments are named the Program for International Student Assessment (PISA); they focus on the capabilities of 15-year-olds in three main…

  2. Computing Visible-Surface Representations,

    DTIC Science & Technology

    1985-03-01

    The legible fragments of this record identify the author (Terzopoulos), contract N00014-75-C-0643, and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology, whose artificial intelligence research is supported in part by the Advanced Research Projects Agency. The surviving abstract text concerns dynamically maintaining visible-surface representations, whether the intention is to model human vision or to design competent artificial vision systems.

  3. Statistical Tools for Determining Fitness to Fly

    DTIC Science & Technology

    1981-09-01

    The legible fragments of this record describe the program's card-input format (13 cards in the file; card 1 begins with an 8-character real field EFAIL, the average number of failures for the size of the control group), a method section on computing survival probability and frequency tables, and flow charts whose start block reads the inputs EFAIL, CYEAR, NVAR, NAV, and XINC.

  4. Water-use computer programs for Florida

    USGS Publications Warehouse

    Geiger, L.H.

    1984-01-01

    Using U.S. Geological Survey computer programs L149-L153, this report shows how to process water-use data for the functional water-use categories: public supply, rural supply, industrial self-supplied, irrigation, and thermo-electric power generation. The programs are used to selectively retrieve entries and list them in a format suitable for publication. Instructions are given for coding cards to produce tables of water-use data for each of the functional use categories. These cards contain entries that identify a particular water-use data-collection site in Florida. Entries on the cards include location information such as county code, water management district code, hydrologic unit code, and, where applicable, a site name and number. Annual and monthly pumpage is included. These entries are shown with several different headings; for example, surface water or ground water, freshwater or saline pumpages, or consumptive use. All the programs use a similar approach; however, the actual programs differ with each functional water-use category and are discussed separately. Data prepared for these programs can also be processed by the National Water-Use Data System. (USGS)

  5. Computer Program for Point Location And Calculation of ERror (PLACER)

    USGS Publications Warehouse

    Granato, Gregory E.

    1999-01-01

    A program designed for point location and calculation of error (PLACER) was developed as part of the Quality Assurance Program of the Federal Highway Administration/U.S. Geological Survey (USGS) National Data and Methodology Synthesis (NDAMS) review process. The program provides a standard method to derive study-site locations from site maps in highway-runoff, urban-runoff, and other research reports. This report provides a guide for using PLACER, documents methods used to estimate study-site locations, documents the NDAMS Study-Site Locator Form, and documents the FORTRAN code used to implement the method. PLACER is a simple program that calculates the latitude and longitude coordinates of one or more study sites plotted on a published map and estimates the uncertainty of these calculated coordinates. PLACER calculates the latitude and longitude of each study site by interpolating between the coordinates of known features and the locations of study sites using any consistent, linear, user-defined coordinate system. This program will read data entered from the computer keyboard and(or) from a formatted text file, and will write the results to the computer screen and to a text file. PLACER is readily transferable to different computers and operating systems with few (if any) modifications because it is written in standard FORTRAN. PLACER can be used to calculate study site locations in latitude and longitude, using known map coordinates or features that are identifiable in geographic information data bases such as USGS Geographic Names Information System, which is available on the World Wide Web.
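
    The interpolation idea described for PLACER can be sketched in a few lines. The Python fragment below is a simplified illustration, not the USGS FORTRAN code: it maps a plotted site's map coordinates to latitude and longitude by linear interpolation between two reference features, and the uncertainty estimate (propagating an assumed half-division reading error) is an assumption made for the example.

      # A simplified sketch of the interpolation idea described for PLACER (not the
      # USGS FORTRAN code): map coordinates of two known reference features are used
      # to linearly map plotted site coordinates to latitude and longitude. The
      # uncertainty estimate here (propagating a +/- half-division reading error) is
      # an assumption for illustration.
      def linear_map(v, v0, v1, g0, g1):
          """Linearly interpolate/extrapolate geographic value g for map value v."""
          return g0 + (v - v0) * (g1 - g0) / (v1 - v0)

      def locate(site_xy, ref_a, ref_b, reading_error=0.5):
          """ref_a, ref_b: ((map_x, map_y), (lat, lon)) for two known features."""
          (ax, ay), (alat, alon) = ref_a
          (bx, by), (blat, blon) = ref_b
          lat = linear_map(site_xy[1], ay, by, alat, blat)   # y controls latitude
          lon = linear_map(site_xy[0], ax, bx, alon, blon)   # x controls longitude
          dlat = abs(reading_error * (blat - alat) / (by - ay))
          dlon = abs(reading_error * (blon - alon) / (bx - ax))
          return (lat, lon), (dlat, dlon)

      # Example: site plotted at (12.0, 7.5) map units between two reference features.
      refs = (((0.0, 0.0), (42.000, -71.500)), ((20.0, 10.0), (42.100, -71.300)))
      print(locate((12.0, 7.5), *refs))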

  6. State-Chart Autocoder

    NASA Technical Reports Server (NTRS)

    Clark, Kenneth; Watney, Garth; Murray, Alexander; Benowitz, Edward

    2007-01-01

    A computer program translates Unified Modeling Language (UML) representations of state charts into source code in the C, C++, and Python computing languages. ("State charts" here signifies graphical descriptions of states and state transitions of a spacecraft or other complex system.) The UML representations constituting the input to this program are generated by using a UML-compliant graphical design program to draw the state charts. The generated source code is consistent with the "quantum programming" approach, which is so named because it involves discrete states and state transitions that have features in common with states and state transitions in quantum mechanics. Quantum programming enables efficient implementation of state charts, suitable for real-time embedded flight software. In addition to source code, the autocoder program generates a graphical-user-interface (GUI) program that, in turn, generates a display of state transitions in response to events triggered by the user. The GUI program is wrapped around, and can be used to exercise the state-chart behavior of, the generated source code. Once the expected state-chart behavior is confirmed, the generated source code can be augmented with a software interface to the rest of the software with which the source code is required to interact.
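
    A minimal hand-written sketch of the kind of event-driven state/transition logic a state-chart autocoder emits may help fix ideas; it is not the actual generated code nor the quantum-programming framework referred to above, and the states and events are hypothetical.

      # Minimal hand-written sketch of event-driven state/transition logic of the
      # kind a state-chart autocoder emits; not the actual generated code nor the
      # quantum-programming framework. States and events are hypothetical.
      class HeaterStateMachine:
          TRANSITIONS = {
              ("OFF", "POWER_ON"): "IDLE",
              ("IDLE", "TOO_COLD"): "HEATING",
              ("HEATING", "AT_TEMP"): "IDLE",
              ("IDLE", "POWER_OFF"): "OFF",
              ("HEATING", "POWER_OFF"): "OFF",
          }

          def __init__(self):
              self.state = "OFF"

          def dispatch(self, event):
              """Look up (state, event); ignore events with no transition defined."""
              nxt = self.TRANSITIONS.get((self.state, event))
              if nxt is not None:
                  print(f"{self.state} --{event}--> {nxt}")
                  self.state = nxt

      sm = HeaterStateMachine()
      for ev in ["POWER_ON", "TOO_COLD", "AT_TEMP", "POWER_OFF", "TOO_COLD"]:
          sm.dispatch(ev)  # the last event is ignored in state OFF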

  7. A Generalized Escape System Simulation (GESS) Computer Program. Volume 2. GESS Programmer’s Manual. Version II.

    DTIC Science & Technology

    1984-04-01

    The legible fragments of this record identify the Naval Air Development Center (Code 6032), Warminster, PA 18974; David A. Fender of Ketron, Inc., Warminster, PA; Louis A. D'Aulerio; contract N62269-81-Z-0206 under task no. 630-1944; and an April 1984 final report. The remainder is illegible scanned report-documentation-page text.

  8. Finite element analysis of periodic transonic flow problems

    NASA Technical Reports Server (NTRS)

    Fix, G. J.

    1978-01-01

    Flow about an oscillating thin airfoil in a transonic stream was considered. It was assumed that the flow field can be decomposed into a mean flow plus a periodic perturbation. On the surface of the airfoil the usual Neumann conditions are imposed. Two computer programs were written, both using linear basis functions over triangles for the finite element space. The first program uses a banded Gaussian elimination solver to solve the matrix problem, while the second uses an iterative technique, namely SOR. The only results obtained are for an oscillating flat plate.
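
    The iterative technique named in the abstract, successive over-relaxation (SOR), can be illustrated generically. The Python sketch below solves a small linear system A x = b by SOR; it is not the finite-element flat-plate code itself, and the relaxation factor and test matrix are arbitrary choices.

      # Generic successive over-relaxation (SOR) sketch for a linear system A x = b,
      # illustrating the iterative solver named in the abstract; it is not the
      # finite-element flat-plate code. The relaxation factor omega and the test
      # matrix are arbitrary choices.
      import numpy as np

      def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
          n = len(b)
          x = np.zeros(n)
          for _ in range(max_iter):
              x_old = x.copy()
              for i in range(n):
                  # Use already-updated values for j < i and old values for j > i.
                  sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                  x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
              if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                  break
          return x

      # Example: a small diagonally dominant system.
      A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
      b = np.array([15.0, 10.0, 10.0])
      print(sor(A, b))              # close to the exact solution
      print(np.linalg.solve(A, b))  # reference solution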

  9. Manifest: A computer program for 2-D flow modeling in Stirling machines

    NASA Technical Reports Server (NTRS)

    Gedeon, David

    1989-01-01

    A computer program named Manifest is discussed. Manifest is a program one might want to use to model the fluid dynamics in the manifolds commonly found between the heat exchangers and regenerators of Stirling machines; but not just in the manifolds - in the regenerators as well. And in all sorts of other places too, such as: in heaters or coolers, or perhaps even in cylinder spaces. There are probably non-Stirling uses for Manifest also. In broad strokes, Manifest will: (1) model oscillating internal compressible laminar fluid flow in a wide range of two-dimensional regions, either filled with porous materials or empty; (2) present a graphics-based user-friendly interface, allowing easy selection and modification of region shape and boundary condition specification; (3) run on a personal computer, or optionally (in the case of its number-crunching module) on a supercomputer; and (4) allow interactive examination of the solution output so the user can view vector plots of flow velocity, contour plots of pressure and temperature at various locations and tabulate energy-related integrals of interest.

  10. AdaNET phase 0 support for the AdaNET Dynamic Software Inventory (DSI) management system prototype. Catalog of available reusable software components

    NASA Technical Reports Server (NTRS)

    Hanley, Lionel

    1989-01-01

    The Ada Software Repository is a public-domain collection of Ada software and information. The Ada Software Repository is one of several repositories located on the SIMTEL20 Defense Data Network host computer at White Sands Missile Range, and available to any host computer on the network since 26 November 1984. This repository provides a free source for Ada programs and information. The Ada Software Repository is divided into several subdirectories. These directories are organized by topic; their names, together with a brief overview of each topic, are listed in this catalog. The Ada Software Repository on SIMTEL20 serves two basic roles: to promote the exchange and use (reusability) of Ada programs and tools (including components) and to promote Ada education.

  11. [AERA. Dream machines and computing practices at the Mathematical Center].

    PubMed

    Alberts, Gerard; De Beer, Huub T

    2008-01-01

    Dream machines may be just as effective as the ones materialised. Their symbolic thrust can be quite powerful. The Amsterdam 'Mathematisch Centrum' (Mathematical Center), founded February 11, 1946, created a Computing Department in an effort to realise its goal of serving society. When Aad van Wijngaarden was appointed as head of the Computing Department, however, he claimed space for scientific research and computer construction, next to computing as a service. Still, the computing service following the five stage style of Hartree's numerical analysis remained a dominant characteristic of the work of the Computing Department. The high level of ambition held by Aad van Wijngaarden led to ever renewed projections of big automatic computers, symbolised by the never-built AERA. Even a machine that was actually constructed, the ARRA which followed A.D. Booth's design of the ARC, never made it into real operation. It did serve Van Wijngaarden to bluff his way into the computer age by midsummer 1952. Not until January 1954 did the computing department have a working stored program computer, which for reasons of policy went under the same name: ARRA. After just one other machine, the ARMAC, had been produced, a separate company, Electrologica, was set up for the manufacture of computers, which produced the rather successful X1 computer. The combination of ambition and absence of a working machine led to a high level of work on programming, way beyond the usual ideas of libraries of subroutines. Edsger W. Dijkstra in particular led the way to an emphasis on the duties of the programmer within the pattern of numerical analysis. Programs generating programs, known elsewhere as autocoding systems, were at the 'Mathematisch Centrum' called 'superprograms'. Practical examples were usually called a 'complex', in Dutch, where in English one might say 'system'. Historically, this is where software begins. Dekker's matrix complex, Dijkstra's interrupt system, Dijkstra and Zonneveld's ALGOL compiler--which for housekeeping contained 'the complex'--were actual examples of such super programs. In 1960 this compiler gave the Mathematical Center a leading edge in the early development of software.

  12. Optical reversible programmable Boolean logic unit.

    PubMed

    Chattopadhyay, Tanay

    2012-07-20

    Computing with reversibility is the only way to avoid the dissipation of energy associated with bit erasure. So, a reversible microprocessor is required for future computing. In this paper, a design of a simple all-optical reversible programmable processor is proposed using a polarizing beam splitter, liquid crystal-phase spatial light modulators, a half-wave plate, and plane mirrors. This circuit can perform 16 logical operations according to three programming inputs. Also, inputs can be easily recovered from the outputs. It is named the "reversible programmable Boolean logic unit (RPBLU)." The logic unit is the basic building block of many complex computational operations. Hence the design is important in this sense. Two orthogonally polarized lights are defined here as the two logical states, respectively.
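
    The reversibility property the abstract emphasizes (inputs recoverable from outputs) can be demonstrated with a conventional reversible gate rather than the optical RPBLU itself. The Python sketch below uses the Toffoli (controlled-controlled-NOT) gate, which is a bijection on three bits and its own inverse.

      # Not the optical RPBLU itself: a small sketch of the reversibility property
      # the abstract emphasizes, using the Toffoli (controlled-controlled-NOT) gate.
      # The gate is its own inverse, so applying it twice recovers the inputs.
      from itertools import product

      def toffoli(a, b, c):
          """Flip target c when both controls a and b are 1."""
          return a, b, c ^ (a & b)

      # The 3-bit truth table is a permutation (bijection): no two inputs collide...
      outputs = {toffoli(*bits) for bits in product((0, 1), repeat=3)}
      assert len(outputs) == 8

      # ...and inputs are recovered by applying the gate to its own output.
      for bits in product((0, 1), repeat=3):
          assert toffoli(*toffoli(*bits)) == bits
      print("Toffoli gate is reversible: inputs recoverable from outputs.")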

  13. Do Global Cities Enable Global Views? Using Twitter to Quantify the Level of Geographical Awareness of U.S. Cities

    PubMed Central

    Han, Su Yeon; Tsou, Ming-Hsiang; Clarke, Keith C.

    2015-01-01

    Dynamic social media content, such as Twitter messages, can be used to examine individuals’ beliefs and perceptions. By analyzing Twitter messages, this study examines how Twitter users exchanged and recognized toponyms (city names) for different cities in the United States. The frequency and variety of city names found in their online conversations were used to identify the unique spatiotemporal patterns of “geographical awareness” for Twitter users. A new analytic method, Knowledge Discovery in Cyberspace for Geographical Awareness (KDCGA), is introduced to help identify the dynamic spatiotemporal patterns of geographic awareness among social media conversations. Twitter data were collected across 50 U.S. cities. Thousands of city names around the world were extracted from a large volume of Twitter messages (over 5 million tweets) by using the Twitter Application Programming Interface (APIs) and Python language computer programs. The percentages of distant city names (cities located in distant states or other countries far away from the locations of Twitter users) were used to estimate the level of global geographical awareness for Twitter users in each U.S. city. A Global awareness index (GAI) was developed to quantify the level of geographical awareness of Twitter users from within the same city. Our findings are that: (1) the level of geographical awareness varies depending on when and where Twitter messages are posted, yet Twitter users from big cities are more aware of the names of international cities or distant US cities than users from mid-size cities; (2) Twitter users have an increased awareness of other city names far away from their home city during holiday seasons; and (3) Twitter users are more aware of nearby city names than distant city names, and more aware of big city names rather than small city names. PMID:26167942

  14. Nuclear Weapon Environment Model. Volume II. Computer Code User’s Guide.

    DTIC Science & Technology

    1979-02-01

    The legible fragments of this record identify TRW Defense and Space Systems Group as the performing organization; the remainder is illegible scanned-form text and flowchart residue (density and time-step checks, a call to subroutine SIZER, grid setup, diagnostic print, and a Y-loop increment).

  15. Quasi-isotropic VHF antenna array design study for the International Ultraviolet Explorer satellite

    NASA Technical Reports Server (NTRS)

    Raines, J. K.

    1975-01-01

    Results of a study to design a quasi-isotropic VHF antenna array for the IUE satellite are presented. A free-space configuration was obtained that has no nulls deeper than -6.4 dBi in each of two orthogonal polarizations. A computer program named SOAP, which analyzes the electromagnetic interaction between antennas and complicated conducting bodies such as satellites, was developed.

  16. Environment of Space Interactions with Space Systems

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The primary product of this research project was a computer program named SAVANT. This program uses the Displacement Damage Dose (DDD) method of calculating radiation damage to solar cells. This calculation method was developed at the Naval Research Laboratory, and uses fundamental physical properties of the solar cell materials to predict radiation damage to the solar cells. This means that fewer experimental measurements are required to characterize the radiation damage to the cells, which results in a substantial cost savings to qualify solar cells for orbital missions. In addition, the DDD method makes it easier to characterize cells that are already being used, but have not been fully tested using the older technique of characterizing radiation damage. The computer program combines an orbit generator with NASA's AP-8 and AE-8 models of trapped protons and electrons. This allows the user to specify an orbit, and the program will calculate how the spacecraft moves during the mission, and the radiation environment that it encounters. With the spectrum of the particles, the program calculates how they would slow down while traversing the coverglass, and provides a slowed-down spectrum.
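
    The bookkeeping behind the Displacement Damage Dose method can be sketched as an energy integral of the slowed-down particle fluence spectrum weighted by the nonionizing energy loss (NIEL) of the cell material. The Python fragment below only illustrates that integral; the spectrum and NIEL values are placeholders rather than AP-8/AE-8 output or measured NIEL data, and it is not the SAVANT code.

      # Sketch of the displacement damage dose (DDD) bookkeeping the abstract
      # describes: the dose is the slowed-down particle fluence spectrum weighted by
      # the nonionizing energy loss (NIEL) of the cell material, summed over energy.
      # The spectrum and NIEL values below are placeholders, not AP-8/AE-8 output or
      # real NIEL data, and this is not the SAVANT code.
      def displacement_damage_dose(energies_mev, fluence_per_mev, niel_mev_cm2_per_g):
          """Trapezoidal integration of phi(E) * NIEL(E) dE -> dose in MeV/g."""
          dose = 0.0
          for i in range(len(energies_mev) - 1):
              f0 = fluence_per_mev[i] * niel_mev_cm2_per_g[i]
              f1 = fluence_per_mev[i + 1] * niel_mev_cm2_per_g[i + 1]
              dose += 0.5 * (f0 + f1) * (energies_mev[i + 1] - energies_mev[i])
          return dose

      # Placeholder slowed-down proton spectrum (cm^-2 MeV^-1) and NIEL (MeV cm^2/g).
      E    = [1.0,    3.0,    10.0,   30.0,   100.0]
      phi  = [5.0e9,  2.0e9,  6.0e8,  1.5e8,  3.0e7]
      niel = [6.0e-3, 2.5e-3, 1.0e-3, 5.0e-4, 3.0e-4]
      print(f"D_d ~ {displacement_damage_dose(E, phi, niel):.3e} MeV/g")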

  17. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2014-04-29

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.
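
    The operand/shelf/pane relationship described in the abstract can be illustrated with a toy data model. The Python sketch below (hypothetical shelf and operand names, not the patented system) builds the grid of panes implied by the operands placed on the row and column shelves.

      # Toy sketch (not the patented system) of the relationship the abstract
      # describes: operand names are placed on shelves, and the row and column
      # shelves determine a grid of panes, each pane keyed by the operand values
      # that define its axes. Shelf and operand names are hypothetical.
      from itertools import product

      shelves = {
          "columns": ["region"],   # discrete operand -> one column per value
          "rows": ["year"],        # discrete operand -> one row per value
      }
      field_values = {
          "region": ["East", "West"],
          "year": [2022, 2023, 2024],
      }

      def build_panes(shelves, field_values):
          cols = [v for op in shelves["columns"] for v in field_values[op]]
          rows = [v for op in shelves["rows"] for v in field_values[op]]
          # Each pane records which operand values define its two axes.
          return {(r, c): {"row_axis": r, "col_axis": c} for r, c in product(rows, cols)}

      panes = build_panes(shelves, field_values)
      print(f"{len(panes)} panes")   # 3 rows x 2 columns = 6 panes
      print(sorted(panes))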

  18. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris [Palo Alto, CA; Tang, Diane L [Palo Alto, CA; Hanrahan, Patrick [Portola Valley, CA

    2011-02-01

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  19. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris [Palo Alto, CA; Tang, Diane L [Palo Alto, CA; Hanrahan, Patrick [Portola Valley, CA

    2012-03-20

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  20. OpenMP GNU and Intel Fortran programs for solving the time-dependent Gross-Pitaevskii equation

    NASA Astrophysics Data System (ADS)

    Young-S., Luis E.; Muruganandam, Paulsamy; Adhikari, Sadhan K.; Lončar, Vladimir; Vudragović, Dušan; Balaž, Antun

    2017-11-01

    We present Open Multi-Processing (OpenMP) version of Fortran 90 programs for solving the Gross-Pitaevskii (GP) equation for a Bose-Einstein condensate in one, two, and three spatial dimensions, optimized for use with GNU and Intel compilers. We use the split-step Crank-Nicolson algorithm for imaginary- and real-time propagation, which enables efficient calculation of stationary and non-stationary solutions, respectively. The present OpenMP programs are designed for computers with multi-core processors and optimized for compiling with both commercially-licensed Intel Fortran and popular free open-source GNU Fortran compiler. The programs are easy to use and are elaborated with helpful comments for the users. All input parameters are listed at the beginning of each program. Different output files provide physical quantities such as energy, chemical potential, root-mean-square sizes, densities, etc. We also present speedup test results for new versions of the programs. Program files doi:http://dx.doi.org/10.17632/y8zk3jgn84.2 Licensing provisions: Apache License 2.0 Programming language: OpenMP GNU and Intel Fortran 90. Computer: Any multi-core personal computer or workstation with the appropriate OpenMP-capable Fortran compiler installed. Number of processors used: All available CPU cores on the executing computer. Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1888; ibid.204 (2016) 209. Does the new version supersede the previous version?: Not completely. It does supersede previous Fortran programs from both references above, but not OpenMP C programs from Comput. Phys. Commun. 204 (2016) 209. Nature of problem: The present Open Multi-Processing (OpenMP) Fortran programs, optimized for use with commercially-licensed Intel Fortran and free open-source GNU Fortran compilers, solve the time-dependent nonlinear partial differential (GP) equation for a trapped Bose-Einstein condensate in one (1d), two (2d), and three (3d) spatial dimensions for six different trap symmetries: axially and radially symmetric traps in 3d, circularly symmetric traps in 2d, fully isotropic (spherically symmetric) and fully anisotropic traps in 2d and 3d, as well as 1d traps, where no spatial symmetry is considered. Solution method: We employ the split-step Crank-Nicolson algorithm to discretize the time-dependent GP equation in space and time. The discretized equation is then solved by imaginary- or real-time propagation, employing adequately small space and time steps, to yield the solution of stationary and non-stationary problems, respectively. Reasons for the new version: Previously published Fortran programs [1,2] have now become popular tools [3] for solving the GP equation. These programs have been translated to the C programming language [4] and later extended to the more complex scenario of dipolar atoms [5]. Now virtually all computers have multi-core processors and some have motherboards with more than one physical computer processing unit (CPU), which may increase the number of available CPU cores on a single computer to several tens. The C programs have been adopted to be very fast on such multi-core modern computers using general-purpose graphic processing units (GPGPU) with Nvidia CUDA and computer clusters using Message Passing Interface (MPI) [6]. Nevertheless, previously developed Fortran programs are also commonly used for scientific computation and most of them use a single CPU core at a time in modern multi-core laptops, desktops, and workstations. 
Unless the Fortran programs are made aware and capable of making efficient use of the available CPU cores, the solution of even a realistic dynamical 1d problem, not to mention the more complicated 2d and 3d problems, could be time consuming using the Fortran programs. Previously, we published auto-parallel Fortran programs [2] suitable for Intel (but not GNU) compiler for solving the GP equation. Hence, a need for the full OpenMP version of the Fortran programs to reduce the execution time cannot be overemphasized. To address this issue, we provide here such OpenMP Fortran programs, optimized for both Intel and GNU Fortran compilers and capable of using all available CPU cores, which can significantly reduce the execution time. Summary of revisions: Previous Fortran programs [1] for solving the time-dependent GP equation in 1d, 2d, and 3d with different trap symmetries have been parallelized using the OpenMP interface to reduce the execution time on multi-core processors. There are six different trap symmetries considered, resulting in six programs for imaginary-time propagation and six for real-time propagation, totaling to 12 programs included in BEC-GP-OMP-FOR software package. All input data (number of atoms, scattering length, harmonic oscillator trap length, trap anisotropy, etc.) are conveniently placed at the beginning of each program, as before [2]. Present programs introduce a new input parameter, which is designated by Number_of_Threads and defines the number of CPU cores of the processor to be used in the calculation. If one sets the value 0 for this parameter, all available CPU cores will be used. For the most efficient calculation it is advisable to leave one CPU core unused for the background system's jobs. For example, on a machine with 20 CPU cores such that we used for testing, it is advisable to use up to 19 CPU cores. However, the total number of used CPU cores can be divided into more than one job. For instance, one can run three simulations simultaneously using 10, 4, and 5 CPU cores, respectively, thus totaling to 19 used CPU cores on a 20-core computer. The Fortran source programs are located in the directory src, and can be compiled by the make command using the makefile in the root directory BEC-GP-OMP-FOR of the software package. The examples of produced output files can be found in the directory output, although some large density files are omitted, to save space. The programs calculate the values of actually used dimensionless nonlinearities from the physical input parameters, where the input parameters correspond to the identical nonlinearity values as in the previously published programs [1], so that the output files of the old and new programs can be directly compared. The output files are conveniently named such that their contents can be easily identified, following the naming convention introduced in Ref. [2]. For example, a file named -out.txt, where is a name of the individual program, represents the general output file containing input data, time and space steps, nonlinearity, energy and chemical potential, and was named fort.7 in the old Fortran version of programs [1]. A file named -den.txt is the output file with the condensate density, which had the names fort.3 and fort.4 in the old Fortran version [1] for imaginary- and real-time propagation programs, respectively. 
Other possible density outputs, such as the initial density, are commented out in the programs to have a simpler set of output files, but users can uncomment and re-enable them, if needed. In addition, there are output files for reduced (integrated) 1d and 2d densities for different programs. In the real-time programs there is also an output file reporting the dynamics of evolution of root-mean-square sizes after a perturbation is introduced. The supplied real-time programs solve the stationary GP equation, and then calculate the dynamics. As the imaginary-time programs are more accurate than the real-time programs for the solution of a stationary problem, one can first solve the stationary problem using the imaginary-time programs, adapt the real-time programs to read the pre-calculated wave function and then study the dynamics. In that case the parameter NSTP in the real-time programs should be set to zero and the space mesh and nonlinearity parameters should be identical in both programs. The reader is advised to consult our previous publication where a complete description of the output files is given [2]. A readme.txt file, included in the root directory, explains the procedure to compile and run the programs. We tested our programs on a workstation with two 10-core Intel Xeon E5-2650 v3 CPUs. The parameters used for testing are given in sample input files, provided in the corresponding directory together with the programs. In Table 1 we present wall-clock execution times for runs on 1, 6, and 19 CPU cores for programs compiled using Intel and GNU Fortran compilers. The corresponding columns "Intel speedup" and "GNU speedup" give the ratio of wall-clock execution times of runs on 1 and 19 CPU cores, and denote the actual measured speedup for 19 CPU cores. In all cases and for all numbers of CPU cores, although the GNU Fortran compiler gives excellent results, the Intel Fortran compiler turns out to be slightly faster. Note that during these tests we always ran only a single simulation on a workstation at a time, to avoid any possible interference issues. Therefore, the obtained wall-clock times are more reliable than the ones that could be measured with two or more jobs running simultaneously. We also studied the speedup of the programs as a function of the number of CPU cores used. The performance of the Intel and GNU Fortran compilers is illustrated in Fig. 1, where we plot the speedup and actual wall-clock times as functions of the number of CPU cores for 2d and 3d programs. We see that the speedup increases monotonically with the number of CPU cores in all cases and has large values (between 10 and 14 for 3d programs) for the maximal number of cores. This fully justifies the development of OpenMP programs, which enable much faster and more efficient solving of the GP equation. However, a slow saturation in the speedup with the further increase in the number of CPU cores is observed in all cases, as expected. The speedup tends to increase for programs in higher dimensions, as they become more complex and have to process more data. This is why the speedups of the supplied 2d and 3d programs are larger than those of 1d programs. Also, for a single program the speedup increases with the size of the spatial grid, i.e., with the number of spatial discretization points, since this increases the amount of calculations performed by the program. To demonstrate this, we tested the supplied real2d-th program and varied the number of spatial discretization points NX=NY from 20 to 1000. 
The measured speedup obtained when running this program on 19 CPU cores as a function of the number of discretization points is shown in Fig. 2. The speedup first increases rapidly with the number of discretization points and eventually saturates. Additional comments: Example inputs provided with the programs take less than 30 minutes to run on a workstation with two Intel Xeon E5-2650 v3 processors (2 QPI links, 10 CPU cores, 25 MB cache, 2.3 GHz).
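
    As a compact illustration of the split-step Crank-Nicolson scheme the package implements, the following NumPy sketch performs imaginary-time propagation of the one-dimensional GP equation in dimensionless harmonic-oscillator units. It is written in Python rather than the Fortran of the package, uses arbitrary demonstration values for the grid, time step, and nonlinearity, and replaces the tridiagonal (Thomas) solve of a production code with a precomputed dense inverse for brevity.

      # A minimal NumPy sketch (not the published Fortran package) of split-step
      # Crank-Nicolson imaginary-time propagation of the 1d GP equation
      #   i dpsi/dt = [-(1/2) d^2/dx^2 + x^2/2 + g |psi|^2] psi
      # in dimensionless harmonic-oscillator units. Grid size, time step, and the
      # nonlinearity g are arbitrary demonstration values.
      import numpy as np

      N, L = 256, 16.0               # grid points and box size
      dx, dtau = L / N, 1.0e-3       # space step and imaginary-time step
      g = 10.0                       # dimensionless nonlinearity
      x = (np.arange(N) - N // 2) * dx
      V = 0.5 * x**2                 # harmonic trap

      # Crank-Nicolson matrices for the kinetic operator K = -(1/2) d^2/dx^2.
      K = (np.diag(np.full(N, 1.0 / dx**2))
           - 0.5 / dx**2 * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)))
      A = np.eye(N) + 0.5 * dtau * K   # implicit half of the CN step
      B = np.eye(N) - 0.5 * dtau * K   # explicit half of the CN step
      A_inv = np.linalg.inv(A)         # a production code would use a tridiagonal (Thomas) solver

      psi = np.exp(-x**2 / 2.0)        # Gaussian initial guess
      psi /= np.sqrt(np.sum(psi**2) * dx)

      for _ in range(5000):
          # Half step of the potential + nonlinear term, full CN step of the kinetic
          # term, second half step, then renormalization (imaginary-time propagation).
          psi *= np.exp(-0.5 * dtau * (V + g * psi**2))
          psi = A_inv @ (B @ psi)
          psi *= np.exp(-0.5 * dtau * (V + g * psi**2))
          psi /= np.sqrt(np.sum(psi**2) * dx)

      mu = np.sum(psi * (K @ psi + (V + g * psi**2) * psi)) * dx   # chemical potential
      print(f"chemical potential ~ {mu:.4f}")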

  1. A Computing Platform for Parallel Sparse Matrix Computations

    DTIC Science & Technology

    2016-01-05

    The legible fragments of this record are report-documentation and personnel-list residue: Ahmed H. Sameh is listed as the responsible person; Ahmed H. Sameh, Alicia Klinvex, and Yao Zhu appear as the report's authors; and the supported personnel include Yao Zhu (0.50 FTE equivalent) and Alicia Klinvex (0.10 FTE equivalent).

  2. GKS utilities for FORTRAN-77

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beach, R.C.

    1992-01-01

    This document describes a number of subroutines that can be useful in GKS graphic applications programmed in FORTRAN-77. The algorithms described here include subroutines to do the following: (1) Draw text characters in a more flexible manner than is possible with basic GKS. (2) Project two-dimensional and three-dimensional space onto two-dimensional space. (3) Draw smooth curves. (4) Draw two-dimensional projections of complex three-dimensional objects. FORTRAN-77 is described in American National Standard, Programming Language, FORTRAN. GKS is described in American National Standard for Information Systems: Computer Graphics -- Graphical Kernel System (GKS) Functional Description and the FORTRAN-77 interface is described in American National Standard for Information Systems: Computer Graphics -- Graphical Kernel System (GKS) FORTRAN Binding. All of the subroutine names and additional enumeration types that will be described in this document begin with the letters ``GZ.`` Since GKS itself does not have any subroutine names or enumeration types that begin with these letters, no confusion between the usual GKS subroutines and the ones described here should occur. Many concepts will have to be defined in the following chapters. When a concept is first encountered, it will be given in italics. The information around the italicized word or phrase may be taken as its definition.

  4. On numerically accurate finite element

    NASA Technical Reports Server (NTRS)

    Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.

    1974-01-01

    A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double edge cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.

  5. Shared Memory Parallelization of an Implicit ADI-type CFD Code

    NASA Technical Reports Server (NTRS)

    Hauser, Th.; Huang, P. G.

    1999-01-01

    A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressed to cache-based computer architectures are described and performance measurements for the single and multiprocessor implementation are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at a Reynolds number of 180, Re(sub tau) = 180, has shown good agreement with existing data.

  6. Small business innovation research: Abstracts of 1984. Phase 1 awards

    NASA Technical Reports Server (NTRS)

    1985-01-01

    On September 27, 1984, the National Aeronautics and Space Administration announced the selection of Phase I projects for the Small Business Innovation Research Program. These awards resulted from the evaluation of proposals submitted in response to the 1984 Program Solicitation, SBIR 84-1. In order to make available information on the technical content of the Phase I projects supported by the NASA SBIR Program, the abstracts of those proposals which resulted in awards of contracts are given. In addition, the name and address of the firm performing the work are given for those who may desire additional information about the project. Propulsion, aerodynamics, computer techniques, exobiology and composite materials are among the areas covered.

  7. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diachin, L F; Garaizar, F X; Henson, V E

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  8. MLP Tools: a PyMOL plugin for using the molecular lipophilicity potential in computer-aided drug design

    NASA Astrophysics Data System (ADS)

    Oberhauser, Nils; Nurisso, Alessandra; Carrupt, Pierre-Alain

    2014-05-01

    The molecular lipophilicity potential (MLP) is a well-established method to calculate and visualize lipophilicity on molecules. Here we introduce a new computational tool named MLP Tools, written in the programming language Python, and conceived as a free plugin for the popular open source molecular viewer PyMOL. The plugin is divided into several sub-programs which allow the visualization of the MLP on molecular surfaces, as well as in three-dimensional space in order to analyze lipophilic properties of binding pockets. The sub-program Log MLP also implements the virtual log P, which allows the prediction of octanol/water partition coefficients on multiple three-dimensional conformations of the same molecule. An implementation of the recently introduced MLP GOLD procedure, which improves GOLD docking performance in hydrophobic pockets, is also part of the plugin. In this article, all functions of MLP Tools are described through a few chosen examples.

  9. A comparison of fitness-case sampling methods for genetic programming

    NASA Astrophysics Data System (ADS)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
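    As an illustration of one of the fitness-case sampling ideas compared above, the following Python sketch implements a plain lexicase selection step. The representation of individuals as bare per-case error vectors is a hypothetical simplification and is not taken from the authors' experimental code.

        import random

        # Minimal sketch of lexicase selection (one of the fitness-case-based methods
        # discussed above). Each "individual" is represented only by its vector of
        # per-case errors; this representation is a hypothetical simplification.

        def lexicase_select(error_vectors, rng=random):
            """Return the index of the selected individual.

            error_vectors: list of equal-length lists, where error_vectors[i][j]
            is the error of individual i on fitness case j (lower is better).
            """
            survivors = list(range(len(error_vectors)))
            cases = list(range(len(error_vectors[0])))
            rng.shuffle(cases)
            for case in cases:
                best = min(error_vectors[i][case] for i in survivors)
                survivors = [i for i in survivors if error_vectors[i][case] == best]
                if len(survivors) == 1:
                    break
            return rng.choice(survivors)

        # Example: three individuals evaluated on four fitness cases.
        errors = [[0.1, 0.9, 0.0, 0.3],
                  [0.1, 0.2, 0.5, 0.3],
                  [0.4, 0.2, 0.5, 0.0]]
        print(lexicase_select(errors))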

  10. Further Improvement in 3DGRAPE

    NASA Technical Reports Server (NTRS)

    Alter, Stephen

    2004-01-01

    3DGRAPE/AL:V2 denotes version 2 of the Three-Dimensional Grids About Anything by Poisson's Equation with Upgrades from Ames and Langley computer program. The preceding version, 3DGRAPE/AL, was described in Improved 3DGRAPE (ARC-14069) NASA Tech Briefs, Vol. 21, No. 5 (May 1997), page 66. These programs are so named because they generate volume grids by iteratively solving Poisson's Equation in three dimensions. The grids generated by the various versions of 3DGRAPE have been used in computational fluid dynamics (CFD). The main novel feature of 3DGRAPE/AL:V2 is the incorporation of an optional scheme in which anisotropic Lagrange-based trans-finite interpolation (ALBTFI) is coupled with exponential decay functions to compute and blend interior source terms. In the input to 3DGRAPE/AL:V2 the user can specify whether or not to invoke ALBTFI in combination with exponential-decay controls, angles, and cell size for controlling the character of grid lines. Of the known programs that solve elliptic partial differential equations for generating grids, 3DGRAPE/AL:V2 is the only code that offers a combination of speed and versatility with most options for controlling the densities and other characteristics of grids for CFD.

  11. SCINFUL: A Monte Carlo based computer program to determine a scintillator full energy response to neutron detection for E/sub n/ between 0. 1 and 80 MeV: User's manual and FORTRAN program listing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickens, J.K.

    1988-03-01

    This document provides a complete listing of the FORTRAN program SCINFUL, a program designed to provide a calculated full response anticipated for either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator. The incident design neutron energy range is 0.1 to 80 MeV. Preparation of input to the program is discussed, as are important features of the output. Also included is a FORTRAN listing of a subsidiary program applicable to the output of SCINFUL. This user-interactive program, named SCINSPEC, reformats the output of SCINFUL into a standard spectrum form involving either equal light-unit or equal proton-energy intervals. Examples of input to this program and corresponding output are given.

  12. Secure and Efficient Network Fault Localization

    DTIC Science & Technology

    2012-02-27

    Only report-documentation fragments survive extraction for this record. The performing organization is Carnegie Mellon University, School of Computer Science, Computer Science Department, Pittsburgh, PA 15213, and the recoverable abstract fragment states that the proposed protocols achieve higher efficiency than previously known fault-localization protocols and also address the associated security threats.

  13. Metamaterial-Based Cylinders Used for Invisible Cloak Realization

    DTIC Science & Technology

    2011-08-01

    Only report-documentation fragments survive extraction for this record. The performing organization is the University of Zagreb, Faculty of Electrical Engineering and Computing, Unska 3, Zagreb, Croatia, HR-10000, and the recoverable header names Branimir Ivsic and Tin Komljenovic; no abstract text survives.

  14. A Heterogeneous High-Performance System for Computational and Computer Science

    DTIC Science & Technology

    2016-11-15

    Only fragments of the abstract survive extraction: the system serves a team of research faculty from the departments of computer science and natural science at Bowie State University, supports work on accelerated HPC systems, and is also described as ideal for research conducted in the Department of Natural Science.

  15. SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2008-01-01

    This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model such as MODFLOW that represents a system of interest. At times values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space or tab delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API, and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran 90, which perform the numerical calculations efficiently.
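    The substitution step described above can be pictured with a small Python sketch. The dictionary-based input, the sentinel value, and the function name are invented for illustration; the actual SIM_ADJUST code is written in Fortran 90 and reads column-formatted files following JUPITER API conventions.

        # Illustrative sketch of the kind of substitution SIM_ADJUST performs.
        # File formats, names and the sentinel value below are hypothetical.

        DEFAULT_FLAG = -999.0   # assumed sentinel written by the process model

        def adjust(simulated, alternatives):
            """simulated: dict name -> value produced by the process model.
            alternatives: dict name -> list of fallbacks, each either another
            observation name or a constant value, tried in order."""
            adjusted = {}
            for name, value in simulated.items():
                if value != DEFAULT_FLAG:
                    adjusted[name] = value
                    continue
                for alt in alternatives.get(name, []):
                    if isinstance(alt, str):          # alternative simulated value
                        if simulated.get(alt, DEFAULT_FLAG) != DEFAULT_FLAG:
                            adjusted[name] = simulated[alt]
                            break
                    else:                             # constant fallback
                        adjusted[name] = alt
                        break
                else:
                    adjusted[name] = DEFAULT_FLAG     # nothing usable found
            return adjusted

        sim = {"head_12": -999.0, "head_13": 101.7}
        alts = {"head_12": ["head_13", 100.0]}        # try neighbour cell, then a constant
        print(adjust(sim, alts))                      # {'head_12': 101.7, 'head_13': 101.7}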

  16. Implementation of a Proposed System for Automated Microcode Generation.

    DTIC Science & Technology

    1984-12-01

    Only fragments of this Naval Postgraduate School thesis by Marcia Elaine Provance survive extraction: the proposed microprogramming system is organized around a series of menus presented to a microprogrammer, and the surviving text notes that a computer can respond more easily to new performance demands and problem solutions with a richer or larger instruction set.

  17. A Critical Analysis of U.S. Army Accessions through Socioeconomic Consideration between 1970 and 1984.

    DTIC Science & Technology

    1985-06-01

    Only fragments survive extraction for this Naval Postgraduate School (Monterey, California) thesis: the study determines the socioeconomic representativeness of the Army's enlistees in each year considered, and the analysis was accomplished with the Statistical Analysis System (SAS), an integrated computer system for data analysis.

  18. Micro-Controllable, Multi-Functional Interface Module for Digital MP: A Wearable Computer Security Application

    DTIC Science & Technology

    2004-05-01

    Only fragments survive extraction: the project aimed to augment a security force with a facial recognition system; Mike Holloran, technology officer with the 6th Fleet, directed LCDR Hoa Ho and CAPT(s) Todd Morgan, and a demonstration to the USN 6th Fleet ended with the admiral expressing support for continuing the evaluation of the facial recognition system. Mark Chandler is listed as the Facial Recognition Program Manager and Army technical lead (Army Soldier System Command).

  19. What We Learn About Process Specification Languages from Studying Recipes

    DTIC Science & Technology

    1987-08-01

    Only fragments survive extraction: the study treats recipes as process specifications analogous to computer programs, noting that most recipes require each ingredient to receive a preprocessing treatment (e.g., peel, chop) before being used, and that recipes contain named operations that must be performed by a cook, such as chop and set_heat (see section 7 of the report for details).

  20. One Time Passwords in Everything (OPIE): Experiences with Building and Using Stringer Authentication

    DTIC Science & Technology

    1995-01-01

    Only fragments survive extraction: the opiepasswd(1) name change brings the tool more in line with its UNIX counterpart passwd(1), which should make both programs easier for users to remember; the one-time password schemes implemented in OPIE, as first described in [Hal94], are computed with the opiehash() routine (declared as int opiehash(char *x, unsigned algorithm)), and a surviving code fragment shows repeated hashing over a decrementing sequence number before the resulting words are sent.

  1. Understanding and Improving High-Performance I/O Subsystems

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; Frieder, Gideon; Clark, A. James

    1996-01-01

    This research program has been conducted in the framework of the NASA Earth and Space Science (ESS) evaluations led by Dr. Thomas Sterling. In addition to the many important research findings for NASA and the prestigious publications, the program has helped orient the doctoral research of two students toward parallel input/output in high-performance computing. Further, the experimental results for the MasPar systems were very useful to MasPar, with whose technical management the P.I. had many interactions. The contributions of this program are drawn from three experimental studies conducted on different high-performance computing testbeds/platforms, and are therefore presented in three segments as follows: 1. Evaluating the parallel input/output subsystems of NASA high-performance computing testbeds, namely the MasPar MP-1 and MP-2; 2. Characterizing the physical input/output request patterns for NASA ESS applications, which used the Beowulf platform; and 3. Dynamic scheduling techniques for hiding I/O latency in parallel applications such as sparse matrix computations. This third study was conducted on the Intel Paragon and also provided an experimental evaluation of the Parallel File System (PFS) and parallel input/output on the Paragon. This report is organized as follows. The summary of findings discusses the results of each of the aforementioned three studies. Three appendices, each containing a key scholarly research paper that details the work in one of the studies, are included.

  2. Calculating the Mean Amplitude of Glycemic Excursions from Continuous Glucose Data Using an Open-Code Programmable Algorithm Based on the Integer Nonlinear Method.

    PubMed

    Yu, Xuefei; Lin, Liangzhuo; Shen, Jie; Chen, Zhi; Jian, Jun; Li, Bin; Xin, Sherman Xuegang

    2018-01-01

    The mean amplitude of glycemic excursions (MAGE) is an essential index for glycemic variability assessment, which is treated as a key reference for blood glucose control in the clinic. However, the traditional "ruler and pencil" manual method for the calculation of MAGE is time-consuming and prone to error due to the huge data size, making the development of a robust computer-aided program an urgent requirement. Although several software products are available instead of manual calculation, poor agreement among them has been reported. Therefore, more studies are required in this field. In this paper, we developed a mathematical algorithm based on integer nonlinear programming. Following the proposed mathematical method, an open-code computer program named MAGECAA v1.0 was developed and validated. The results of the statistical analysis indicated that the developed program was robust compared to the manual method. The agreement between the developed program and currently available popular software is satisfactory, indicating that concern about disagreement among different software products is unnecessary. The open-code programmable algorithm is an extra resource for peers who are interested in related methodological studies in the future.
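    For readers unfamiliar with the quantity being computed, the following Python sketch evaluates a simplified version of the classical MAGE definition (the mean of glucose swings larger than one standard deviation). It is only a naive illustration of the metric; it does not reproduce the integer nonlinear programming algorithm implemented in MAGECAA v1.0, and the glucose values are made up.

        import statistics

        # Simplified sketch of the classical MAGE definition (mean of glucose
        # excursions larger than one standard deviation). This is NOT the integer
        # nonlinear programming algorithm of MAGECAA v1.0 described above.

        def turning_points(glucose):
            """Indices of local maxima/minima in a glucose time series."""
            pts = [0]
            for i in range(1, len(glucose) - 1):
                if (glucose[i] - glucose[i - 1]) * (glucose[i + 1] - glucose[i]) < 0:
                    pts.append(i)
            pts.append(len(glucose) - 1)
            return pts

        def mage(glucose):
            sd = statistics.stdev(glucose)
            tp = turning_points(glucose)
            # amplitudes of consecutive swings between turning points
            swings = [abs(glucose[b] - glucose[a]) for a, b in zip(tp, tp[1:])]
            qualifying = [s for s in swings if s > sd]
            return sum(qualifying) / len(qualifying) if qualifying else 0.0

        cgm = [5.2, 5.6, 7.9, 10.4, 8.1, 6.0, 5.5, 7.2, 9.8, 7.0, 5.9]  # mmol/L, made up
        print(f"MAGE (simplified) = {mage(cgm):.2f} mmol/L")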

  3. Cloud Computing in Higher Education Sector for Sustainable Development

    ERIC Educational Resources Information Center

    Duan, Yuchao

    2016-01-01

    Cloud computing is considered a new frontier in the field of computing, as this technology comprises three major entities namely: software, hardware and network. The collective nature of all these entities is known as the Cloud. This research aims to examine the impacts of various aspects namely: cloud computing, sustainability, performance…

  4. Research on Key Technologies of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of massive numbers of computers, so that application systems can obtain computing power, storage space and software services according to demand. It concentrates all computing resources and manages them automatically through software, without human intervention. This frees application providers from tedious details and lets them focus on their business, which favors innovation and reduces cost. The ultimate goal of cloud computing is to provide computation, services and applications as a public utility, so that people can use computer resources just as they use water, electricity, gas and telephone service. Currently, the understanding of cloud computing is still developing and changing, and cloud computing has no universally agreed definition. This paper describes the three main service forms of cloud computing (SaaS, PaaS and IaaS), compares the definitions of cloud computing given by Google, Amazon, IBM and other companies, summarizes the basic characteristics of cloud computing, and emphasizes key technologies such as data storage, data management, virtualization and the programming model.

  5. Solving bi-level optimization problems in engineering design using kriging models

    NASA Astrophysics Data System (ADS)

    Xia, Yi; Liu, Xiaojie; Du, Gang

    2018-05-01

    Stackelberg game-theoretic approaches are applied extensively in engineering design to handle distributed collaboration decisions. Bi-level genetic algorithms (BLGAs) and response surfaces have been used to solve the corresponding bi-level programming models. However, the computational costs for BLGAs often increase rapidly with the complexity of lower-level programs, and optimal solution functions sometimes cannot be approximated by response surfaces. This article proposes a new method, namely the optimal solution function approximation by kriging model (OSFAKM), in which kriging models are used to approximate the optimal solution functions. A detailed example demonstrates that OSFAKM can obtain better solutions than BLGAs and response surface-based methods, and at the same time reduce the workload of computation remarkably. Five benchmark problems and a case study of the optimal design of a thin-walled pressure vessel are also presented to illustrate the feasibility and potential of the proposed method for bi-level optimization in engineering design.

  6. Theoretical investigation on thermoelectric properties of (Ca,Sr,Ba)Fe2(As/Bi)2 compounds under temperature

    NASA Astrophysics Data System (ADS)

    Jayalakshmi, D. S.; Sundareswari, M.; Viswanathan, E.; Das, Abhijeet

    2018-04-01

    The electrical conductivity, resistivity, Seebeck coefficient, Pauli magnetic susceptibility and power factor are computed as functions of temperature (100 K to 800 K, in steps of 100 K) for the theoretically designed compounds (Ca,Sr,Ba)Fe2Bi2 and their parent compounds (Ca,Sr,Ba)Fe2As2, using Boltzmann transport theory interfaced to the Wien2k program. The bulk modulus, electron-phonon coupling constant, thermoelectric figure of merit (ZT) and transition temperature are calculated for the optimized antiferromagnetic phase of the proposed compounds. The results for the novel compounds are discussed in view of their possible superconductivity and compared with their parent unconventional superconducting compounds.

  7. Runway exit designs for capacity improvement demonstrations. Phase 2: Computer model development

    NASA Technical Reports Server (NTRS)

    Trani, A. A.; Hobeika, A. G.; Kim, B. J.; Nunna, V.; Zhong, C.

    1992-01-01

    The development of a computer simulation/optimization model is described to: (1) estimate the optimal locations of existing and proposed runway turnoffs; and (2) estimate the geometric design requirements associated with newly developed high-speed turnoffs. The model, named REDIM 2.0, is a stand-alone application to be used by airport planners, designers, and researchers alike to estimate optimal turnoff locations. The main procedures implemented in the software package are described in detail, and possible applications are illustrated using six major runway scenarios. The main output of the computer program is the estimate of the weighted average runway occupancy time for a user-defined aircraft population. The location and geometric characteristics of each turnoff are also provided to the user.

  8. Development of an Interactive Computer Program to Produce Body Description Data

    DTIC Science & Technology

    1983-07-01

    Only fragments of this abstract survive extraction: the choice described is arbitrary and has varied over the time that the CVS Program and the ATB Model have been in existence, and Program GOOD produces data describing an upper torso; the remainder of the text is unrecoverable.

  9. A computer program to obtain time-correlated gust loads for nonlinear aircraft using the matched-filter-based method

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1994-01-01

    NASA Langley Research Center has, for several years, conducted research in the area of time-correlated gust loads for linear and nonlinear aircraft. The results of this work led NASA to recommend that the Matched-Filter-Based One-Dimensional Search Method be used for gust load analyses of nonlinear aircraft. This manual describes this method, describes a FORTRAN code which performs this method, and presents example calculations for a sample nonlinear aircraft model. The name of the code is MFD1DS (Matched-Filter-Based One-Dimensional Search). The program source code, the example aircraft equations of motion, a sample input file, and a sample program output are all listed in the appendices.

  10. Chippy's Computer Words.

    ERIC Educational Resources Information Center

    Willing, Kathlene R.; Girard, Suzanne

    Intended for young children just becoming familiar with computers, this naming book introduces and reinforces new computer vocabulary and concepts. The 20 words are presented alphabetically, along with illustrations, providing room for different activities in which children can match and name the pictures and words. The 20 vocabulary items are…

  11. An upgraded version of the generator BCVEGPY2.0 for hadronic production of B meson and its excited states

    NASA Astrophysics Data System (ADS)

    Chang, Chao-Hsi; Wang, Jian-Xiong; Wu, Xing-Gang

    2006-11-01

    An upgraded version of the package BCVEGPY2.0 [C.-H. Chang, J.-X. Wang, X.-G. Wu, Comput. Phys. Commun. 174 (2006) 241] is presented, which works under the LINUX system and is named BCVEGPY2.1. With this version and, additionally, a GNU C compiler, users may simulate the B-events in various experimental environments very conveniently. It has been reorganized with better modularity and code reusability (less cross communication among the various modules) than BCVEGPY2.0. Furthermore, in the upgraded version the GNU command make compiles the requested code with the help of a master makefile in the main code directory and then builds an executable file with the default name run. Finally, this paper may also be considered an erratum: typographical errors in BCVEGPY2.0 and the corresponding corrections are listed.
    New version program (BCVEGPY2.1) summary:
    Title of program: BCVEGPY2.1
    Catalogue identifier: ADTJ_v2_1
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTJ_v2_1
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Reference to original program: BCVEGPY2.0
    Reference in CPC: Comput. Phys. Commun. 174 (2006) 241
    Does the new version supersede the old program: No
    Computer: Any LINUX-based PC with FORTRAN 77 or FORTRAN 90 and a GNU C compiler
    Operating systems: LINUX
    Programming language used: FORTRAN 77/90
    Memory required to execute with typical data: About 2.0 MB
    No. of lines in distributed program, including test data, etc.: 31 521
    No. of bytes in distributed program, including test data, etc.: 1 310 179
    Distribution format: tar.gz
    Nature of physical problem: Hadronic production of the B meson itself and its excited states.
    Method of solution: Depending on an option, the code can generate weighted or unweighted events. An interface to PYTHIA is provided to meet the needs of jet hadronization in the production.
    Restrictions on the complexity of the problem: The hadronic production of (cb¯)-quarkonium in S-wave and P-wave states via the mechanism of gluon-gluon fusion is given by the so-called 'complete calculation' approach.
    Reasons for new version: Responding to feedback from users, we rearranged the program in a convenient way so that it can be easily adopted by users to run simulations according to their own experimental environment (e.g. detector acceptances and experimental cuts). We have made considerable efforts to rearrange the program into several modules with less cross communication among the modules; the main program is slimmed down and all further actions are decoupled from the main program and can be easily called for various purposes.
    Typical running time: The typical running time is machine and user-parameter dependent. Typically, for production of the S-wave (cb¯)-quarkonium, when IDWTUP = 1 it takes about 20 hours on a 1.8 GHz Intel P4-processor machine to generate 1000 events; however, when IDWTUP = 3, it takes only about 40 minutes to generate 10^6 events. Production of the P-wave (cb¯)-quarkonium takes almost twice as long as that of the S-wave quarkonium.
    Summary of the changes (improvements): (1) The structure and organization of the program have been changed considerably. The new version package BCVEGPY2.1 has been divided into several modules with less cross communication among the modules (some old version source files are divided into several parts for this purpose). The main program is slimmed down and all further actions are decoupled from the main program so that they can be easily called for various applications. All of the Fortran code is organized in the main code directory named bcvegpy2.1, which contains the main program, all of its prerequisite files and subsidiary 'folders' (subdirectories of the main code directory). The method for setting the parameters is the same as that of the previous versions [C.-H. Chang, C. Driouich, P. Eerola, X.-G. Wu, Comput. Phys. Commun. 159 (2004) 192, hep-ph/0309120. [1

  12. SORTAN: a Unix program for calculation and graphical presentation of fault slip as induced by stresses

    NASA Astrophysics Data System (ADS)

    Pascal, Christophe

    2004-04-01

    Stress inversion programs are nowadays frequently used in tectonic analysis. The purpose of this family of programs is to reconstruct the stress tensor characteristics from fault slip data acquired in the field or derived from earthquake focal mechanisms (i.e. inverse methods). Until now, little attention has been paid to direct methods (i.e. to determine fault slip directions from an inferred stress tensor). During the 1990s, the fast increase in resolution in 3D seismic reflection techniques made it possible to determine the geometry of subsurface faults with a satisfactory accuracy but not to determine precisely their kinematics. This recent improvement allows the use of direct methods. A computer program, namely SORTAN, is introduced. The program is highly portable on Unix platforms, straightforward to install and user-friendly. The computation is based on classical stress-fault slip relationships and allows for fast treatment of a set of faults and graphical presentation of the results (i.e. slip directions). In addition, the SORTAN program permits one to test the sensitivity of the results to input uncertainties. It is a complementary tool to classical stress inversion methods and can be used to check the mechanical consistency and the limits of structural interpretations based upon 3D seismic reflection surveys.
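    The core computation behind such a direct method can be sketched in a few lines: assuming that slip is parallel to the resolved shear traction on the fault plane (a Wallace-Bott type assumption), the predicted slip direction follows from the stress tensor and the plane normal. The Python/NumPy code below is a generic illustration with arbitrary example numbers, not the SORTAN implementation.

        import numpy as np

        # Generic sketch of the direct problem solved by programs like SORTAN:
        # given a stress tensor and a fault-plane normal, predict the slip direction
        # as the direction of the resolved shear traction on the plane.

        def predicted_slip_direction(stress, normal):
            """stress: 3x3 symmetric stress tensor; normal: fault-plane normal
            (same coordinate frame as the stress tensor). Returns the unit vector
            of the shear traction acting on the plane."""
            n = np.asarray(normal, dtype=float)
            n /= np.linalg.norm(n)
            traction = np.asarray(stress, dtype=float) @ n   # total traction on the plane
            t_normal = (traction @ n) * n                     # normal component
            t_shear = traction - t_normal                     # shear component
            return t_shear / np.linalg.norm(t_shear)

        # Example with illustrative numbers: most compressive principal stress along z,
        # and a plane whose normal is tilted 60 degrees away from vertical.
        sigma = np.diag([-10.0, -20.0, -30.0])                # principal stresses (MPa), compression negative
        dip = np.radians(60.0)
        normal = np.array([np.sin(dip), 0.0, np.cos(dip)])
        print(predicted_slip_direction(sigma, normal))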

  13. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

    In general, parameter estimation for the GWOLR model uses the maximum likelihood method, but this leads to a system of nonlinear equations that is difficult to solve exactly, so an approximate numerical solution is needed. There are two popular families of numerical methods: Newton's method and quasi-Newton (QN) methods. Newton's method requires substantial computation time because it involves the Jacobian (derivative) matrix. QN methods overcome this drawback by replacing the derivative computation with direct function evaluations. One QN approach approximates the Hessian matrix using the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method that, like DFP, maintains a positive-definite Hessian approximation. The BFGS method requires large memory, so an algorithm with lower memory usage is needed, namely limited-memory BFGS (L-BFGS). The purpose of this research is to assess the efficiency of the L-BFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. From the research findings, we found that the BFGS and L-BFGS methods have arithmetic operation counts of O(n^2) and O(nm), respectively.
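    To show the role a limited-memory BFGS routine plays in maximum likelihood estimation, the Python sketch below fits an ordinary binary logistic regression with SciPy's L-BFGS-B optimizer. This is a deliberately simplified stand-in: it is neither ordinal nor geographically weighted, so it does not reproduce the GWOLR estimator studied in the paper.

        import numpy as np
        from scipy.optimize import minimize

        # Toy illustration of L-BFGS in maximum-likelihood estimation. For brevity
        # this fits an ordinary binary logistic regression, not the GWOLR model.

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        true_beta = np.array([1.0, -2.0, 0.5])
        y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

        def neg_log_likelihood(beta):
            eta = X @ beta
            # sum of log(1 + exp(eta)) - y * eta, computed stably
            return np.sum(np.logaddexp(0.0, eta) - y * eta)

        def gradient(beta):
            p = 1.0 / (1.0 + np.exp(-(X @ beta)))
            return X.T @ (p - y)

        result = minimize(neg_log_likelihood, x0=np.zeros(3), jac=gradient,
                          method="L-BFGS-B")   # limited-memory BFGS (with bound support)
        print(result.x)                        # should be close to true_beta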

  14. Using Tutte polynomials to analyze the structure of the benzodiazepines

    NASA Astrophysics Data System (ADS)

    Cadavid Muñoz, Juan José

    2014-05-01

    Graph theory in general, and Tutte polynomials in particular, are applied to analyze the chemical structure of the benzodiazepines. Similarity analyses based on the Tutte polynomials are used to find other molecules that are similar to the benzodiazepines and might therefore show similar psychoactive actions for medical purposes, in order to avoid the drawbacks associated with benzodiazepine-based medicines. For each type of benzodiazepine, Tutte polynomials are computed and some numeric characteristics are obtained, such as the number of spanning trees and the number of spanning forests. Computations are done using Maple's GraphTheory computer algebra package. The obtained analytical results are of great importance in pharmaceutical engineering. As a future research line, the computational chemistry program named Spartan will be used to extend the analysis and compare its results with those obtained from the Tutte polynomials of the benzodiazepines.
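    One of the numeric characteristics mentioned above, the number of spanning trees (the Tutte evaluation T(1,1)), can also be obtained directly from Kirchhoff's matrix-tree theorem. The Python/NumPy sketch below illustrates this on a small cycle graph standing in for a molecular graph; it is a generic illustration, not derived from the authors' Maple computations.

        import numpy as np

        # Kirchhoff's matrix-tree theorem: the number of spanning trees of a
        # connected graph equals any cofactor of its Laplacian matrix.

        def spanning_tree_count(adjacency):
            A = np.asarray(adjacency, dtype=float)
            laplacian = np.diag(A.sum(axis=1)) - A
            minor = laplacian[1:, 1:]            # delete first row and column
            return round(np.linalg.det(minor))

        # 4-cycle: it has 4 spanning trees, matching the Tutte evaluation T(1, 1).
        c4 = [[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]]
        print(spanning_tree_count(c4))           # -> 4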

  15. PATSTAGS - PATRAN-STAGSC-1 TRANSLATOR

    NASA Technical Reports Server (NTRS)

    Otte, N. E.

    1994-01-01

    PATSTAGS translates PATRAN finite model data into STAGS (Structural Analysis of General Shells) input records to be used for engineering analysis. The program reads data from a PATRAN neutral file and writes STAGS input records into a STAGS input file and a UPRESS data file. It is able to support translations of nodal constraints, nodal, element, force and pressure data. PATSTAGS uses three files: the PATRAN neutral file to be translated, a STAGS input file and a STAGS pressure data file. The user provides the names for the neutral file and the desired names of the STAGS files to be created. The pressure data file contains the element live pressure data used in the STAGS subroutine UPRESS. PATSTAGS is written in FORTRAN 77 for DEC VAX series computers running VMS. The main memory requirement for execution is approximately 790K of virtual memory. Output blocks can be modified to output the data in any format desired, allowing the program to be used to translate model data to analysis codes other than STAGSC-1 (HQN-10967). This program is available in DEC VAX BACKUP format on a 9-track magnetic tape or TK50 tape cartridge. Documentation is included in the price of the program. PATSTAGS was developed in 1990. DEC, VAX, TK50 and VMS are trademarks of Digital Equipment Corporation.

  16. Analysis of Data for the Development of Density and Composition Models of the Upper Atmosphere.

    DTIC Science & Technology

    1981-07-01

    Only fragments of this Smithsonian report by Luigi G. Jacchia and Jack W. Slowey (contract F19628-78-C-0126) survive extraction, including section headings (Longitudinally Averaged Model, Local-time Dependent Model, Future Work) and a statement that there are no significant residuals between the observed and computed values at the equator even for the highest levels of geomagnetic activity.

  17. Access Point Selection for Multi-Rate IEEE 802.11 Wireless LANs

    DTIC Science & Technology

    2014-05-16

    Only report-documentation fragments survive extraction: the performing organization is the University of California at Berkeley, Electrical Engineering and Computer Sciences, Berkeley, CA 94720, and the recoverable references include S. Vasudevan, K. Papagiannaki, C. Diot, J. Kurose, and D. Towsley, "Facilitating Access Point..." and a paper in Mobile Systems, Applications and Services, 2006.

  18. User-Computer Interactions: Some Problems for Human Factors Research

    DTIC Science & Technology

    1981-09-01

    Only fragments of this BBN Inc. report (authors including Miller and R. W. Pew, contract MDA 903-80-C-0551) survive extraction. The recoverable text mentions accessibility, from the workplace or home, of information stored in major repositories; two-way real-time communication with broadcasting facilities; and a remark that a count for the average U.S. home has gone from about 10 in 1940 to about 100 in 1960 to a few thousand more recently, trends that collectively represent an enormous change.

  19. FIESTA ROC: A new finite element analysis program for solar cell simulation

    NASA Technical Reports Server (NTRS)

    Clark, Ralph O.

    1991-01-01

    The Finite Element Semiconductor Three-dimensional Analyzer by Ralph O. Clark (FIESTA ROC) is a computational tool for investigating in detail the performance of arbitrary solar cell structures. As its name indicates, it uses the finite element technique to solve the fundamental semiconductor equations in the cell. It may be used for predicting the performance (thereby dictating the design parameters) of a proposed cell or for investigating the limiting factors in an established design.

  20. Syntax diagrams for body wave nomenclature, with generalizations for terrestrial planets

    NASA Astrophysics Data System (ADS)

    Knapmeyer, M.

    2003-04-01

    The Apollo network on the Moon constitutes the beginning of planetary seismology. In the next few decades, we may see seismometers deployed on the Moon again, on Mars, and perhaps on other terrestrial planets or satellites. Any seismological software for computation of body wave travel times on other planets should be highly versatile and be prepared for a huge variety of velocity distributions and internal structures. A suite of trial models for a planet might, for example, contain models with and without solid inner cores. It would then be useful if the software could detect physically meaningless phase names automatically without actually carrying out any computation. It would also be useful if the program were prepared to deal with features like fully solid cores, internal oceans, and varying depths of mineralogical phase changes like the olivine-spinel transition. Syntax diagrams are a standard method to describe the syntax of programming languages. They represent a graphical way to define which letter or phrase is allowed to follow a given sequence of letters. Syntax diagrams may be stored in data structures that allow automatic evaluation of a given letter sequence. Such diagrams are presented here for a generalized body wave nomenclature. Generalizations are made to overcome earth-specific notations which incorporate discontinuity depths into phase names or to distinguish olivine transitions from ice-ice transitions (as expected on the Galilean Satellites).

  1. Scalability and Validation of Big Data Bioinformatics Software.

    PubMed

    Yang, Andrian; Troup, Michael; Ho, Joshua W K

    2017-01-01

    This review examines two important aspects that are central to modern big data bioinformatics analysis - software scalability and validity. We argue that not only are the issues of scalability and validation common to all big data bioinformatics analyses, they can be tackled by conceptually related methodological approaches, namely divide-and-conquer (scalability) and multiple executions (validation). Scalability is defined as the ability for a program to scale based on workload. It has always been an important consideration when developing bioinformatics algorithms and programs. Nonetheless the surge of volume and variety of biological and biomedical data has posed new challenges. We discuss how modern cloud computing and big data programming frameworks such as MapReduce and Spark are being used to effectively implement divide-and-conquer in a distributed computing environment. Validation of software is another important issue in big data bioinformatics that is often ignored. Software validation is the process of determining whether the program under test fulfils the task for which it was designed. Determining the correctness of the computational output of big data bioinformatics software is especially difficult due to the large input space and complex algorithms involved. We discuss how state-of-the-art software testing techniques that are based on the idea of multiple executions, such as metamorphic testing, can be used to implement an effective bioinformatics quality assurance strategy. We hope this review will raise awareness of these critical issues in bioinformatics.
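    A minimal example of the multiple-execution idea is a metamorphic test. In the Python sketch below, the pipeline function is a stand-in for a real bioinformatics tool whose exact expected output is hard to specify (the oracle problem), and the metamorphic relation checked is that shuffling the order of input records must not change the result; both the stand-in pipeline and the relation are illustrative choices.

        import random

        # Minimal sketch of a metamorphic test. The "pipeline" is a placeholder for a
        # real bioinformatics program; the metamorphic relation is order invariance.

        def pipeline(reads):
            """Placeholder analysis: count k-mers of length 3 across all reads."""
            counts = {}
            for read in reads:
                for i in range(len(read) - 2):
                    kmer = read[i:i + 3]
                    counts[kmer] = counts.get(kmer, 0) + 1
            return counts

        def metamorphic_test_order_invariance(reads, trials=10):
            reference = pipeline(reads)
            for _ in range(trials):
                shuffled = reads[:]
                random.shuffle(shuffled)
                assert pipeline(shuffled) == reference, "order-invariance relation violated"

        reads = ["ACGTAC", "GGTACC", "TTACGA"]
        metamorphic_test_order_invariance(reads)
        print("metamorphic relation holds on this input")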

  2. FastChem: A computer program for efficient complex chemical equilibrium calculations in the neutral/ionized gas phase with applications to stellar and planetary atmospheres

    NASA Astrophysics Data System (ADS)

    Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin

    2018-06-01

    For the calculation of complex neutral/ionized gas phase chemical equilibria, we present a semi-analytical, versatile and efficient computer program, called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations, namely the law of mass action and the element conservation equations including charge balance, in many variables. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined by using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++, which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies and its convergence behavior has been tested even for extreme physical parameter ranges down to 100 K and up to 1000 bar. FastChem converges stably and robustly even in the most demanding chemical situations, which sometimes posed extreme challenges for previous algorithms.
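    The structure of the equations FastChem solves, the law of mass action coupled with element conservation, can be illustrated with the simplest possible toy case: the dissociation equilibrium H2 <-> 2 H for a single element. The Python sketch below solves the resulting quadratic; the numerical values are arbitrary placeholders, and the code deliberately ignores FastChem's decomposition scheme, charge balance, and C++ interface.

        import math

        # Toy illustration of the structure solved by equilibrium codes like FastChem:
        # the law of mass action for H2 <-> 2 H combined with hydrogen-nucleus
        # conservation. With a single species pair this reduces to a quadratic; the
        # numerical values of K and n_total below are arbitrary placeholders.

        def hydrogen_dissociation(n_total, K):
            """Return (n_H, n_H2) satisfying
               K = n_H**2 / n_H2         (mass action)
               n_H + 2 * n_H2 = n_total  (element conservation)."""
            # Substitute n_H2 = n_H**2 / K into the conservation equation:
            #   (2/K) * n_H**2 + n_H - n_total = 0
            n_H = (math.sqrt(1.0 + 8.0 * n_total / K) - 1.0) * K / 4.0
            return n_H, n_H * n_H / K

        n_H, n_H2 = hydrogen_dissociation(n_total=1.0e12, K=1.0e10)   # arbitrary units
        print(n_H, n_H2, n_H + 2.0 * n_H2)   # last value recovers n_total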

  3. High Performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions

    DTIC Science & Technology

    2016-08-30

    Only fragments survive extraction: the final report concerns a dedicated high-performance computer cluster for theoretical studies of roaming in chemical reactions, with the U.S. Army Research Office (Research Triangle Park, NC) as the sponsoring agency; no further abstract text is recoverable.

  4. MODFLOW-2000, the U.S. Geological Survey modular ground-water model -- Documentation of MOD-PREDICT for predictions, prediction sensitivity analysis, and evaluation of uncertainty

    USGS Publications Warehouse

    Tonkin, M.J.; Hill, Mary C.; Doherty, John

    2003-01-01

    This document describes the MOD-PREDICT program, which helps evaluate user-defined sets of observations, prior information, and predictions, using the ground-water model MODFLOW-2000. MOD-PREDICT takes advantage of the existing Observation and Sensitivity Processes (Hill and others, 2000) by initiating runs of MODFLOW-2000 and using the output files produced. The names and formats of the MODFLOW-2000 input files are unchanged, such that full backward compatibility is maintained. A new name file and input files are required for MOD-PREDICT. The performance of MOD-PREDICT has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program using the email address available at the web address below. Updates might occasionally be made to this document, to the MOD-PREDICT program, and to MODFLOW-2000. Users can check for updates on the Internet at URL http://water.usgs.gov/software/ground water.html/.

  5. Tempest: GPU-CPU computing for high-throughput database spectral matching.

    PubMed

    Milloy, Jeffrey A; Faherty, Brendan K; Gerber, Scott A

    2012-07-06

    Modern mass spectrometers are now capable of producing hundreds of thousands of tandem (MS/MS) spectra per experiment, making the translation of these fragmentation spectra into peptide matches a common bottleneck in proteomics research. When coupled with experimental designs that enrich for post-translational modifications such as phosphorylation and/or include isotopically labeled amino acids for quantification, additional burdens are placed on this computational infrastructure by shotgun sequencing. To address this issue, we have developed a new database searching program that utilizes the massively parallel compute capabilities of a graphical processing unit (GPU) to produce peptide spectral matches in a very high throughput fashion. Our program, named Tempest, combines efficient database digestion and MS/MS spectral indexing on a CPU with fast similarity scoring on a GPU. In our implementation, the entire similarity score, including the generation of full theoretical peptide candidate fragmentation spectra and its comparison to experimental spectra, is conducted on the GPU. Although Tempest uses the classical SEQUEST XCorr score as a primary metric for evaluating similarity for spectra collected at unit resolution, we have developed a new "Accelerated Score" for MS/MS spectra collected at high resolution that is based on a computationally inexpensive dot product but exhibits scoring accuracy similar to that of the classical XCorr. In our experience, Tempest provides compute-cluster level performance in an affordable desktop computer.
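    The "Accelerated Score" is described above as a computationally inexpensive dot product. The Python sketch below shows a generic binned dot-product similarity between an experimental and a theoretical spectrum, purely to illustrate the idea; the bin width, normalization, and all peak values are illustrative choices and do not reproduce Tempest's CPU/GPU implementation.

        import numpy as np

        # Generic sketch of a binned dot-product similarity between an experimental
        # MS/MS spectrum and a theoretical peptide fragmentation spectrum.

        def binned_vector(peaks, bin_width=1.0005, max_mz=2000.0):
            """peaks: list of (m/z, intensity). Returns a normalized binned vector."""
            vec = np.zeros(int(max_mz / bin_width) + 1)
            for mz, intensity in peaks:
                if mz < max_mz:
                    vec[int(mz / bin_width)] += intensity
            norm = np.linalg.norm(vec)
            return vec / norm if norm > 0 else vec

        def dot_product_score(experimental, theoretical):
            return float(np.dot(binned_vector(experimental), binned_vector(theoretical)))

        exp_peaks = [(175.119, 50.0), (304.161, 120.0), (401.214, 80.0)]
        theo_peaks = [(175.119, 1.0), (304.161, 1.0), (401.214, 1.0), (530.257, 1.0)]
        print(f"similarity = {dot_product_score(exp_peaks, theo_peaks):.3f}")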

  6. 22 CFR Appendix C to Part 62 - Update of Information on Exchange-Visitor Program Sponsor

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... assigned to ________ as follows: (Name of institution/organization) 1. Change the name of the Program... Cultural Exchange. 9. ( ) Cancel the above named Exchange Visitor Program. (Signature of Responsible or...

  7. US GeoData: Digital cartographic and geographic data

    USGS Publications Warehouse

    ,

    1985-01-01

    The increasing use of computers for storing and analyzing earth science information has sparked a growth in the demand for various types of cartographic data in digital form. The production of map data in computerized form is called digital cartography, and it involves the collection, storage, processing, analysis, and display of map data with the aid of computers. The U.S. Geological Survey, the Nation's largest earth science research agency, has expanded its national mapping program to incorporate operations associated with digital cartography, including the collection of planimetric, elevation, and geographic names information in digital form. This digital information is available for use in meeting the multipurpose needs and applications of the map user community.

  8. Advanced Simulation and Computing: A Summary Report to the Director's Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, M G; Peck, T

    2003-06-01

    It has now been three years since the Advanced Simulation and Computing Program (ASCI), as managed by the Defense and Nuclear Technologies (DNT) Directorate, has been reviewed by this Director's Review Committee (DRC). Since that time, there has been considerable progress for all components of the ASCI Program, and these developments will be highlighted in this document and in the presentations planned for June 9 and 10, 2003. There have also been some name changes. Today, the Program is called "Advanced Simulation and Computing." Although it retains the familiar acronym ASCI, the initiative nature of the effort has given way to sustained services as an integral part of the Stockpile Stewardship Program (SSP). All computing efforts at LLNL and the other two Defense Program (DP) laboratories are funded and managed under ASCI. This includes the so-called legacy codes, which remain essential tools in stockpile stewardship. The contract between the Department of Energy (DOE) and the University of California (UC) specifies an independent appraisal of Directorate technical work and programmatic management. Such an appraisal is the work of this DNT Review Committee. Beginning this year, the Laboratory is implementing a new review system. This process was negotiated between UC, the National Nuclear Security Administration (NNSA), and the Laboratory Directors. Central to this approach are eight performance objectives that focus on key programmatic and administrative goals. Associated with each of these objectives are a number of performance measures to more clearly characterize the attainment of the objectives. Each performance measure has a lead directorate and one or more contributing directorates. Each measure has an evaluation plan and has identified expected documentation to be included in the "Assessment File".

  9. 2008 ALCF annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drugan, C.

    2009-12-07

    The word 'breakthrough' aptly describes the transformational science and milestones achieved at the Argonne Leadership Computing Facility (ALCF) throughout 2008. The number of research endeavors undertaken at the ALCF through the U.S. Department of Energy's (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program grew from 9 in 2007 to 20 in 2008. The allocation of computer time awarded to researchers on the Blue Gene/P also spiked significantly - from nearly 10 million processor hours in 2007 to 111 million in 2008. To support this research, we expanded the capabilities of Intrepid, an IBM Blue Gene/P system at the ALCF, to 557 teraflops (TF) for production use. Furthermore, we enabled breakthrough levels of productivity and capability in visualization and data analysis with Eureka, a powerful installation of NVIDIA Quadro Plex S4 external graphics processing units. Eureka delivered a quantum leap in visual compute density, providing more than 111 TF and more than 3.2 terabytes of RAM. On April 21, 2008, the dedication of the ALCF realized DOE's vision to bring the power of the Department's high performance computing to open scientific research. In June, the IBM Blue Gene/P supercomputer at the ALCF debuted as the world's fastest for open science and third fastest overall. No question that the science benefited from this growth and system improvement. Four research projects spearheaded by Argonne National Laboratory computer scientists and ALCF users were named to the list of top ten scientific accomplishments supported by DOE's Advanced Scientific Computing Research (ASCR) program. Three of the top ten projects used extensive grants of computing time on the ALCF's Blue Gene/P to model the molecular basis of Parkinson's disease, design proteins at atomic scale, and create enzymes. As the year came to a close, the ALCF was recognized with several prestigious awards at SC08 in November. We provided resources for Linear Scaling Divide-and-Conquer Electronic Structure Calculations for Thousand Atom Nanostructures, a collaborative effort between Argonne, Lawrence Berkeley National Laboratory, and Oak Ridge National Laboratory that received the ACM Gordon Bell Prize Special Award for Algorithmic Innovation. The ALCF also was named a winner in two of the four categories in the HPC Challenge best performance benchmark competition.

  10. Computer-generated mineral commodity deposit maps

    USGS Publications Warehouse

    Schruben, Paul G.; Hanley, J. Thomas

    1983-01-01

    This report describes an automated method of generating deposit maps of mineral commodity information. In addition, it serves as a user's manual for the authors' mapping system. Procedures were developed which allow commodity specialists to enter deposit information, retrieve selected data, and plot deposit symbols in any geographic area within the conterminous United States. The mapping system uses both micro- and mainframe computers. The microcomputer is used to input and retrieve information, thus minimizing computing charges. The mainframe computer is used to generate map plots which are printed by a Calcomp plotter. Selector V data base system is employed for input and retrieval on the microcomputer. A general mapping program (Genmap) was written in FORTRAN for use on the mainframe computer. Genmap can plot fifteen symbol types (for point locations) in three sizes. The user can assign symbol types to data items interactively. Individual map symbols can be labeled with a number or the deposit name. Genmap also provides several geographic boundary file and window options.

  11. A survey of GPU-based medical image computing techniques

    PubMed Central

    Shi, Lin; Liu, Wen; Zhang, Heye; Xie, Yongming

    2012-01-01

    Medical imaging currently plays a crucial role throughout the entire clinical applications from medical scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) medical datasets to process in practical clinical applications. With the rapidly enhancing performances of graphics processors, improved programming support, and excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical image applications. The major purpose of this survey is to provide a comprehensive reference source for the starters or researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing traditional applications in three areas of medical image processing, namely, segmentation, registration and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine. PMID:23256080

  12. The Induction of Chaos in Electronic Circuits Final Report-October 1, 2001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R.M.Wheat, Jr.

    2003-04-01

    This project, now known by the name ''Chaos in Electronic Circuits,'' was originally tasked as a two-year project to examine various ''fault'' or ''non-normal'' operational states of common electronic circuits with some focus on determining the feasibility of exploiting these states. Efforts over the two-year duration of this project have been dominated by the study of the chaotic behavior of electronic circuits. These efforts have included setting up laboratory space and hardware for conducting laboratory tests and experiments, acquiring and developing computer simulation and analysis capabilities, conducting literature surveys, developing test circuitry and computer models to exercise and test our capabilities, and experimenting with and studying the use of RF injection as a means of inducing chaotic behavior in electronics. An extensive array of nonlinear time series analysis tools has been developed and integrated into a package named ''After Acquisition'' (AA), including capabilities such as Delayed Coordinate Embedding Mapping (DCEM), Time Resolved (3-D) Fourier Transform, and several other phase space re-creation methods. Many computer models have been developed for Spice and for the ATP (Alternative Transients Program), modeling the several working circuits that have been developed for use in the laboratory. And finally, methods of induction of chaos in electronic circuits have been explored.
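    The ''After Acquisition'' package itself is not reproduced here, but the Delayed Coordinate Embedding Mapping it implements is a standard technique. The sketch below is a minimal, generic delay-coordinate (Takens-style) embedding in Python, assuming only NumPy; the signal, dimension, and lag values are illustrative and not taken from the report.

```python
import numpy as np

def delay_embed(x, dim=3, lag=10):
    """Return the delay-coordinate embedding of a scalar time series.

    Row j of the result is the reconstructed state vector
    [x[j], x[j + lag], ..., x[j + (dim - 1) * lag]].
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * lag
    if n <= 0:
        raise ValueError("time series too short for this dim/lag")
    return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

# Example: embed a two-tone surrogate signal standing in for a measured circuit voltage.
t = np.linspace(0.0, 100.0, 5000)
signal = np.sin(t) + 0.5 * np.sin(2.7 * t + 1.0)
states = delay_embed(signal, dim=3, lag=25)
print(states.shape)  # (4950, 3) reconstructed phase-space points
```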

  13. High level language-based robotic control system

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Inventor); Kruetz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)

    1994-01-01

    This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. The languages and system allow the programming and control of multiple arms and the use of inward/outward spatial recursions in which every computational step can be related to a transformation from one point in the mechanical robot to another point to name two major advantages.

  14. High level language-based robotic control system

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Inventor); Kreutz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)

    1996-01-01

    This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. The languages and system allow the programming and control of multiple arms and the use of inward/outward spatial recursions in which every computational step can be related to a transformation from one point in the mechanical robot to another point to name two major advantages.

  15. Automatic specification of reliability models for fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1993-01-01

    The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
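    ARM's graphical SDL and generated model are not shown here; the following is a minimal hand-written sketch of the kind of Markov reliability model such a tool emits, for a hypothetical two-unit standby pair with imperfect fault coverage. The failure rate and coverage values are invented for illustration, and the state probabilities come from the matrix exponential of the generator.

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = both units good, 1 = one unit failed (spare active), 2 = system failed.
lam = 1e-4   # per-unit failure rate (1/h), illustrative
cov = 0.95   # probability a fault is detected and the spare takes over

# Continuous-time Markov chain generator matrix Q (each row sums to zero).
Q = np.array([
    [-2 * lam,  2 * lam * cov,  2 * lam * (1 - cov)],
    [0.0,      -lam,            lam                ],
    [0.0,       0.0,            0.0                ],   # failed state is absorbing
])

p0 = np.array([1.0, 0.0, 0.0])          # start with both units good
for hours in (100.0, 1000.0, 10000.0):
    p = p0 @ expm(Q * hours)            # state probabilities at time t
    print(f"R({hours:>7.0f} h) = {1.0 - p[2]:.6f}")   # reliability = P(not failed)
```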

  16. A study of mapping exogenous knowledge representations into CONFIG

    NASA Technical Reports Server (NTRS)

    Mayfield, Blayne E.

    1992-01-01

    Qualitative reasoning is reasoning with a small set of qualitative values that is an abstraction of a larger and perhaps infinite set of quantitative values. The use of qualitative and quantitative reasoning together holds great promise for performance improvement in applications that suffer from large and/or imprecise knowledge domains. Included among these applications are the modeling, simulation, analysis, and fault diagnosis of physical systems. Several research groups continue to discover and experiment with new qualitative representations and reasoning techniques. However, due to the diversity of these techniques, it is difficult for the programs produced to exchange system models easily. The availability of mappings to transform knowledge from the form used by one of these programs to that used by another would open the doors for comparative analysis of these programs in areas such as completeness, correctness, and performance. A group at the Johnson Space Center (JSC) is working to develop CONFIG, a prototype qualitative modeling, simulation, and analysis tool for fault diagnosis applications in the U.S. space program. The availability of knowledge mappings from the programs produced by other research groups to CONFIG may provide savings in CONFIG's development costs and time, and may improve CONFIG's performance. The study of such mappings is the purpose of the research described in this paper. Two other research groups that have worked with the JSC group in the past are the Northwest University Group and the University of Texas at Austin Group. The former has produced a qualitative reasoning tool named SIMGEN, and the latter has produced one named QSIM. Another program produced by the Austin group is CC, a preprocessor that permits users to develop input for eventual use by QSIM, but in a more natural format. CONFIG and CC are both based on a component-connection ontology, so a mapping from CC's knowledge representation to CONFIG's knowledge representation was chosen as the focus of this study. A mapping from CC to CONFIG was developed. Due to differences between the two programs, however, the mapping transforms some of the CC knowledge to CONFIG as documentation rather than as knowledge in a form useful to computation. The study suggests that it may be worthwhile to pursue the mappings further. By implementing the mapping as a program, actual comparisons of computational efficiency and quality of results can be made between the QSIM and CONFIG programs. A secondary study may reveal that the results of the two programs augment one another, contradict one another, or differ only slightly. If the latter, the qualitative reasoning techniques may be compared in other areas, such as computational efficiency.

  17. Java Performance for Scientific Applications on LLNL Computer Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapfer, C; Wissink, A

    2002-05-10

    Languages in use for high performance computing at the laboratory--Fortran (f77 and f90), C, and C++--have many years of development behind them and are generally considered the fastest available. However, Fortran and C do not readily extend to object-oriented programming models, limiting their capability for very complex simulation software. C++ facilitates object-oriented programming but is a very complex and error-prone language. Java offers a number of capabilities that these other languages do not. For instance it implements cleaner (i.e., easier to use and less prone to errors) object-oriented models than C++. It also offers networking and security as part of the language standard, and cross-platform executables that make it architecture neutral, to name a few. These features have made Java very popular for industrial computing applications. The aim of this paper is to explain the trade-offs in using Java for large-scale scientific applications at LLNL. Despite its advantages, the computational science community has been reluctant to write large-scale computationally intensive applications in Java due to concerns over its poor performance. However, considerable progress has been made over the last several years. The Java Grande Forum [1] has been promoting the use of Java for large-scale computing. Members have introduced efficient array libraries, developed fast just-in-time (JIT) compilers, and built links to existing packages used in high performance parallel computing.

  18. A Structural Weight Estimation Program (SWEEP) for Aircraft. Volume 4 - Material Properties, Structure Temperature, Flutter and Fatigue

    DTIC Science & Technology

    1974-06-01

    NAME AND ADDRESS: Deputy for Development Planning, Air Force Systems Command, Wright-Patterson Air Force Base, Ohio. READ INSTRUCTIONS BEFORE ... 6600 computer. Two stand-alone programs operating within 100,000 octal units were also developed to provide optional data sources for SWEEP ... JAMES H. HALL, Colonel, USAF, Deputy for Development Planning.

  19. Atmospheric Photochemical Modeling of Turbine Engine Fuels and Exhausts. Phase 2. Computer Model Development. Volume 2

    DTIC Science & Technology

    1988-05-01

    Represented name / Emitted organics included in all models: CO, carbon monoxide; C:C, ethene; HCHO, formaldehyde; CCHO, acetaldehyde; RCHO, propionaldehyde and other ... of species in the mixture, and for proper use of this program, these files should be "normalized," i.e., the number of carbons in the mixture should ... scenario in memory. Valid parmtypes are SCEN, PHYS, CHEM, VP, NSP, OUTP, SCHEDS. LIST ALLCOMP lists all available composition filenames. LIST ALLSCE ...

  20. HECWRC, Flood Flow Frequency Analysis Computer Program 723-X6-L7550

    DTIC Science & Technology

    1989-02-14

    AGENCY NAME AND ADDRESS, ORDER NO., ETC. (If NTIS sells, leave blank). 11. PRICE INFORMATION: Price includes documentation. Price code: DO1, $50.00. 12 ... required is 256 K. Math coprocessor (8087/80287/80387) is highly recommended but not required. 16. DATA FILE TECHNICAL DESCRIPTION: The software is ... disk drive (360 KB or 1.2 MB). A 10 MB or larger hard disk is recommended. Math coprocessor (8087/80287/80387) is highly recommended but not required.

  1. Mechanotransduction and the Cytoskeleton

    NASA Technical Reports Server (NTRS)

    Pickard, Barbara G.

    1996-01-01

    Two intellectual developments in the lab suggested a more powerful approach to the problem defined in the original statement. These were discussed with the "Program Chief", who agreed that an alteration of approach was desirable. A second expansion of our work was on the computational optical sectioning microscope (COSM). Additionally, we hoped to visualize the channels with respect to cytoskeletal entities. We have identified a major heretofore unknown cytoskeletal structure in our representative experimental system, the onion epidermal cell, and have named this structure the endomembrane grant.

  2. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    DTIC Science & Technology

    1981-12-01

    file.library-unit{.subunit}.SYMAP; Statement Map: library-file.library-unit{.subunit}.SMAP; Type Map: library-file.library-unit{.subunit}.TMAP. The library ... generator; SYMAP, Symbol Map code generator; SMAP, Updated Statement Map code generator; TMAP, Type Map code generator. A.3.5 The PUNIT Command. The PUNIT ... Core.Stmtmap) NAME Tmap (Core.Typemap) END. Example A-3: Compiler Command Stream for the Code Generator. Texas Instruments, A-5, Ada Optimizing Compiler.

  3. GENPLOT: A formula-based Pascal program for data manipulation and plotting

    NASA Astrophysics Data System (ADS)

    Kramer, Matthew J.

    Geochemical processes involving alteration, differentiation, fractionation, or migration of elements may be elucidated by a number of discrimination or variation diagrams (e.g., AFM, Harker, Pearce, and many others). The construction of these diagrams involves arithmetic combination of selective elements (involving major, minor, or trace elements). GENPLOT utilizes a formula-based algorithm (an expression parser) which enables the program to manipulate multiparameter databases and plot XY, ternary, tetrahedron, and REE type plots without needing to change either the source code or rearrange databases. Formulae may be any quadratic expression whose variables are the column headings of the data matrix. A full-screen editor with limited equations and arithmetic functions (spreadsheet) has been incorporated into the program to aid data entry and editing. Data are stored as ASCII files to facilitate interchange of data between other programs and computers. GENPLOT was developed in Turbo Pascal for the IBM and compatible computers but also is available in Apple Pascal for the Apple IIe and III. Because the source code is too extensive to list here (about 5200 lines of Pascal code), the expression parsing routine, which is central to GENPLOT's flexibility, is incorporated into a smaller demonstration program named SOLVE. The following paper includes a discussion on how the expression parser works and a detailed description of GENPLOT's capabilities.
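    GENPLOT's Pascal parser is not reproduced here; the sketch below shows the same idea (formulas whose variables are the column headings of a data matrix) using Python's ast module for safe arithmetic evaluation. The oxide column names and sample values are illustrative only.

```python
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg}

def eval_formula(formula, row):
    """Evaluate an arithmetic formula whose variables are column names.

    Example: eval_formula("Na2O + K2O", {"Na2O": 3.1, "K2O": 4.0})
    """
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        if isinstance(node, ast.Name):
            return float(row[node.id])        # column heading -> value in this row
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return float(node.value)
        raise ValueError("unsupported expression element")
    return _eval(ast.parse(formula, mode="eval"))

sample = {"SiO2": 52.3, "Na2O": 3.1, "K2O": 4.0, "FeO": 9.8, "MgO": 6.7}
print(eval_formula("Na2O + K2O", sample))          # alkali sum for an AFM-style axis
print(eval_formula("FeO / (FeO + MgO)", sample))   # a simple ratio variable
```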

  4. A MATLAB-based graphical user interface program for computing functionals of the geopotential up to ultra-high degrees and orders

    NASA Astrophysics Data System (ADS)

    Bucha, Blažej; Janák, Juraj

    2013-07-01

    We present a novel graphical user interface program GrafLab (GRAvity Field LABoratory) for spherical harmonic synthesis (SHS) created in MATLAB®. This program allows the user to comfortably compute 38 various functionals of the geopotential up to ultra-high degrees and orders of spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) the extended-range arithmetic (up to an arbitrary maximum degree). For the maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, while the input coordinates can either be read from a data file or entered manually. For the computation on a regular grid we decided to apply the lumped coefficients approach due to the significant time-efficiency of this method. Furthermore, if a full variance-covariance matrix of spherical harmonic coefficients is available, it is possible to compute the commission errors of the functionals. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
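    As a worked illustration of the first of the three approaches, the sketch below implements the standard forward-column recursion for fully normalized associated Legendre functions in Python/NumPy, usable only up to moderate degree. It is not GrafLab code; the recursion coefficients follow the commonly used fully normalized (4π) convention.

```python
import numpy as np

def fnalf_forward_column(nmax, theta):
    """Fully normalized associated Legendre functions P_nm(cos(theta)) up to
    degree/order nmax via the standard forward-column recursion (stable in
    plain double precision only up to degree ~1800, as the abstract notes)."""
    t, u = np.cos(theta), np.sin(theta)            # t = cos(theta), u = sin(theta)
    P = np.zeros((nmax + 1, nmax + 1))
    P[0, 0] = 1.0
    if nmax >= 1:
        P[1, 1] = np.sqrt(3.0) * u
    for m in range(2, nmax + 1):                   # sectoral seeds P_mm
        P[m, m] = u * np.sqrt((2.0 * m + 1.0) / (2.0 * m)) * P[m - 1, m - 1]
    for m in range(0, nmax + 1):                   # march up each column (fixed order m)
        for n in range(m + 1, nmax + 1):
            a = np.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0) / ((n - m) * (n + m)))
            if n - m >= 2:
                b = np.sqrt((2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                            / ((n - m) * (n + m) * (2.0 * n - 3.0)))
                P[n, m] = a * t * P[n - 1, m] - b * P[n - 2, m]
            else:                                  # first step above the diagonal
                P[n, m] = a * t * P[n - 1, m]
    return P

P = fnalf_forward_column(4, np.radians(60.0))
print(P[2, 0])   # should equal sqrt(5) * (3*cos(60deg)**2 - 1) / 2
```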

  5. Cpu/gpu Computing for AN Implicit Multi-Block Compressible Navier-Stokes Solver on Heterogeneous Platform

    NASA Astrophysics Data System (ADS)

    Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin

    2016-06-01

    CPU/GPU computing allows scientists to tremendously accelerate their numerical codes. In this paper, we port and optimize a double precision alternating direction implicit (ADI) solver for three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software on heterogeneous platform. First, we implement a full GPU version of the ADI solver to remove a lot of redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely “one-thread-one-point” and “one-thread-one-line”, to maximize the performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering the fact that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern MPI-OpenMP-CUDA that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap the computation with communication using the advanced features of CUDA and MPI programming. We obtain speedups of 6.0 for the ADI solver on one Tesla M2050 GPU in contrast to two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on heterogeneous platform.

  6. The EPA Comptox Chemistry Dashboard: A Web-Based Data ...

    EPA Pesticide Factsheets

    The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data driven approaches that integrate chemistry, exposure and biological data. As an outcome of these efforts the National Center for Computational Toxicology (NCCT) has measured, assembled and delivered an enormous quantity and diversity of data for the environmental sciences including high-throughput in vitro screening data, in vivo and functional use data, exposure models and chemical databases with associated properties. A series of software applications and databases have been produced over the past decade to deliver these data but recent developments have focused on the development of a new software architecture that assembles the resources into a single platform. A new web application, the CompTox Chemistry Dashboard provides access to data associated with ~720,000 chemical substances. These data include experimental and predicted physicochemical property data, bioassay screening data associated with the ToxCast program, product and functional use information and a myriad of related data of value to environmental scientists. The dashboard provides chemical-based searching based on chemical names, synonyms and CAS Registry Numbers. Flexible search capabilities allow for chemical identificati

  7. Methods for identifying SNP interactions: a review on variations of Logic Regression, Random Forest and Bayesian logistic regression.

    PubMed

    Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula

    2011-01-01

    Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenges of new methods of statistical analysis and modeling. Considering a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.
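    As a hedged illustration of the Random Forest comparison arm, the sketch below fits a forest to simulated genotype data (0/1/2 minor-allele counts) in which two SNPs interact multiplicatively. It uses scikit-learn rather than the R-based tools reviewed, and all parameter values and indices are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_samples, n_snps = 1000, 50

# Genotypes coded as minor-allele counts (0, 1, 2); SNPs 3 and 7 interact.
X = rng.integers(0, 3, size=(n_samples, n_snps))
logit = 0.9 * (X[:, 3] * X[:, 7]) - 1.5          # purely epistatic (product) effect
p = 1.0 / (1.0 + np.exp(-logit))
y = rng.binomial(1, p)

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X, y)

top = np.argsort(forest.feature_importances_)[::-1][:5]
print("top-ranked SNPs by importance:", top)     # SNPs 3 and 7 should rank highly
```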

  8. The virtual asthma guideline e-learning program: learning effectiveness and user satisfaction.

    PubMed

    Kang, Sung-Yoon; Kim, Sae-Hoon; Kwon, Yong-Eun; Kim, Tae-Bum; Park, Hye-Kyung; Park, Heung-Woo; Chang, Yoon-Seok; Jee, Young-Koo; Moon, Hee-Bom; Min, Kyung-Up; Cho, Sang-Heon

    2018-05-01

    Effective educational tools are important for increasing adherence to asthma guidelines and clinical improvement of asthma patients. We developed a computer-based interactive education program for the asthma guideline, named the Virtual Learning Center for Asthma Management (VLCAM). We evaluated the usefulness of the program in terms of its effects on user awareness of the asthma guideline and level of satisfaction. Physicians-in-training at tertiary hospitals in Korea were enrolled in a cross-sectional questionnaire survey. The e-learning program on the asthma guideline was conducted over a 2-week period. We investigated changes in the awareness of the asthma guideline using a 35-item self-administered questionnaire aiming at assessing physicians' knowledge, attitude, and practice. Satisfaction with the program was scored on 4-point Likert scales. A total of 158 physicians-in-training at six tertiary hospitals completed the survey. Compared with baseline, the overall awareness obtained from the scores of knowledge, attitude, and practice was improved significantly. Participants were satisfied with the VLCAM program in the following aspects: helpfulness, convenience, motivation, effectiveness, physicians' confidence, improvement of asthma management, and willingness to recommend. All items in the user satisfaction questionnaires received high scores over 3 points. Moreover, the problem-based learning with a virtual patient received the highest user satisfaction among all parts of the program. Our computer-based e-learning program is useful for improving awareness of asthma management. It could improve adherence to asthma guidelines and enhance the quality of asthma care.

  9. Exemplary Academic Programs at the Community College. Volume I.

    ERIC Educational Resources Information Center

    Bazer, Gerald, Ed.

    Brief descriptions are provided of 54 community college programs identified as outstanding by the National Council of Instructional Administrators. Organized alphabetically by program title, the descriptions include the name of the college president, the name of a contact person, and the name, address, and telephone number of the college. The…

  10. A universal harm-minimisation approach to preventing psychostimulant and cannabis use in adolescents: a cluster randomised controlled trial

    PubMed Central

    2014-01-01

    Background Psychostimulants and cannabis are two of the three most commonly used illicit drugs by young Australians. As such, it is important to deliver prevention for these substances to prevent their misuse and to reduce associated harms. The present study aims to evaluate the feasibility and effectiveness of the universal computer-based Climate Schools: Psychostimulant and Cannabis Module. Methods A cluster randomised controlled trial was conducted with 1734 Year 10 students (mean age = 15.44 years; SD = 0.41) from 21 secondary schools in Australia. Schools were randomised to receive either the six lesson computer-based Climate Schools program or their usual health classes, including drug education, over the year. Results The Climate Schools program was shown to increase knowledge of cannabis and psychostimulants and decrease pro-drug attitudes. In the short-term the program was effective in subduing the uptake and plateauing the frequency of ecstasy use, however there were no changes in meth/amphetamine use. In addition, females who received the program used cannabis significantly less frequently than students who received drug education as usual. Finally, the Climate Schools program was related to decreasing students’ intentions to use meth/amphetamine and ecstasy in the future, however these effects did not last over time. Conclusions These findings provide support for the use of a harm-minimisation approach and computer technology as an innovative platform for the delivery of prevention education for illicit drugs in schools. The current study indicated that teachers and students enjoyed the program and that it is feasible to extend the successful Climate Schools model to the prevention of other drugs, namely cannabis and psychostimulants. Trial registration Australian and New Zealand Clinical Trials Registry ACTRN12613000492752. PMID:24943829

  11. A universal harm-minimisation approach to preventing psychostimulant and cannabis use in adolescents: a cluster randomised controlled trial.

    PubMed

    Vogl, Laura Elise; Newton, Nicola Clare; Champion, Katrina Elizabeth; Teesson, Maree

    2014-06-18

    Psychostimulants and cannabis are two of the three most commonly used illicit drugs by young Australians. As such, it is important to deliver prevention for these substances to prevent their misuse and to reduce associated harms. The present study aims to evaluate the feasibility and effectiveness of the universal computer-based Climate Schools: Psychostimulant and Cannabis Module. A cluster randomised controlled trial was conducted with 1734 Year 10 students (mean age = 15.44 years; SD = 0.41) from 21 secondary schools in Australia. Schools were randomised to receive either the six lesson computer-based Climate Schools program or their usual health classes, including drug education, over the year. The Climate Schools program was shown to increase knowledge of cannabis and psychostimulants and decrease pro-drug attitudes. In the short-term the program was effective in subduing the uptake and plateauing the frequency of ecstasy use, however there were no changes in meth/amphetamine use. In addition, females who received the program used cannabis significantly less frequently than students who received drug education as usual. Finally, the Climate Schools program was related to decreasing students' intentions to use meth/amphetamine and ecstasy in the future, however these effects did not last over time. These findings provide support for the use of a harm-minimisation approach and computer technology as an innovative platform for the delivery of prevention education for illicit drugs in schools. The current study indicated that teachers and students enjoyed the program and that it is feasible to extend the successful Climate Schools model to the prevention of other drugs, namely cannabis and psychostimulants. Australian and New Zealand Clinical Trials Registry ACTRN12613000492752.

  12. Innovations in Continuing Education. 1981 Award-Winning New Programs.

    ERIC Educational Resources Information Center

    National Univ. Continuing Education Association, Washington, DC.

    Descriptions are provided of the six programs selected as award-winning innovations on the basis of universal application and potential for greatest impact for the improvement of continuing education. Each description contains this information: program name, name of principal person, name and institution to whom award would be made, source of…

  13. Vectorization with SIMD extensions speeds up reconstruction in electron tomography.

    PubMed

    Agulleiro, J I; Garzón, E M; García, I; Fernández, J J

    2010-06-01

    Electron tomography allows structural studies of cellular structures in molecular detail. Large 3D reconstructions are needed to meet the resolution requirements. The processing time to compute these large volumes may be considerable, and so high performance computing techniques have traditionally been used. This work presents a vector approach to tomographic reconstruction that relies on the exploitation of the SIMD extensions available in modern processors in combination with other single-processor optimization techniques. This approach succeeds in producing full-resolution tomograms with a significant reduction in processing time, as evaluated with the most common reconstruction algorithms, namely WBP and SIRT. The main advantage stems from the fact that this approach is to be run on standard computers without the need of specialized hardware, which facilitates the development, use and management of programs. Future trends in processor design open excellent opportunities for vector processing with processors' SIMD extensions in the field of 3D electron microscopy.
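    The paper's hand-vectorized C/SIMD code is not available here; the sketch below expresses the SIRT iteration in whole-array NumPy form on a toy ray/pixel system, which conveys the data-parallel structure that SIMD extensions exploit. The toy system matrix is invented.

```python
import numpy as np

def sirt(A, b, n_iter=200):
    """Simultaneous Iterative Reconstruction Technique (SIRT), vectorized.

    x <- x + C A^T R (b - A x), with R and C the inverse row/column sums of A.
    Whole-array NumPy operations play the role that SIMD intrinsics play in a
    hand-vectorized C implementation.
    """
    A = np.asarray(A, dtype=float)
    row_sum, col_sum = A.sum(axis=1), A.sum(axis=0)
    R = np.where(row_sum > 0, 1.0 / row_sum, 0.0)
    C = np.where(col_sum > 0, 1.0 / col_sum, 0.0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = b - A @ x
        x += C * (A.T @ (R * residual))
    return x

# Toy example: a 2x2 "image" probed by 4 rays (two rows and two columns).
A = np.array([[1, 1, 0, 0],    # ray through pixels 0,1
              [0, 0, 1, 1],    # ray through pixels 2,3
              [1, 0, 1, 0],    # ray through pixels 0,2
              [0, 1, 0, 1]])   # ray through pixels 1,3
x_true = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ x_true
print(np.round(sirt(A, b), 3))   # a reconstruction consistent with the projections
```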

  14. Computational aeroacoustics and numerical simulation of supersonic jets

    NASA Technical Reports Server (NTRS)

    Morris, Philip J.; Long, Lyle N.

    1996-01-01

    The research project has been a computational study of computational aeroacoustics algorithms and numerical simulations of the flow and noise of supersonic jets. During this study a new method for the implementation of solid wall boundary conditions for complex geometries in three dimensions has been developed. In addition, a detailed study of the simulation of the flow in and noise from supersonic circular and rectangular jets has been conducted. Extensive comparisons have been made with experimental measurements. A summary of the results of the research program are attached as the main body of this report in the form of two publications. Also, the report lists the names of the students who were supported by this grant, their degrees, and the titles of their dissertations. In addition, a list of presentations and publications made by the Principal Investigators and the research students is also included.

  15. Geological and geochemical aspects of uranium deposits. A selected, annotated bibliography. Vol. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, M.B.; Garland, P.A.

    1977-10-01

    This bibliography was compiled by selecting 580 references from the Bibliographic Information Data Base of the Department of Energy's (DOE) National Uranium Resource Evaluation (NURE) Program. This data base and five others have been created by the Ecological Sciences Information Center to provide technical computer-retrievable data on various aspects of the nation's uranium resources. All fields of uranium geology are within the defined scope of the project, as are aerial surveying procedures, uranium reserves and resources, and universally applied uranium research. References used by DOE-NURE contractors in completing their aerial reconnaissance survey reports have been included at the request of the Grand Junction Office, DOE. The following indexes are provided to aid the user in locating references of interest: author, keyword, geographic location, quadrangle name, geoformational index, and taxonomic name.

  16. Toward Affordable Systems II: Portfolio Management for Army Science and Technology Programs Under Uncertainties

    DTIC Science & Technology

    2011-01-01

    7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): RAND Corporation, Arroyo Center, PO Box ... 2138, 1776 Main Street, Santa Monica, CA 90407-2138. ... research, development, test, and evaluation programs; and those who are interested in the optimal allocation of funds among different programs and/or ...

  17. XAFS Data Interchange: A single spectrum XAFS data file format.

    PubMed

    Ravel, B; Newville, M

    We propose a standard data format for the interchange of XAFS data. The XAFS Data Interchange (XDI) standard is meant to encapsulate a single spectrum of XAFS along with relevant metadata. XDI is a text-based format with a simple syntax which clearly delineates metadata from the data table in a way that is easily interpreted both by a computer and by a human. The metadata header is inspired by the format of an electronic mail header, representing metadata names and values as an associative array. The data table is represented as columns of numbers. This format can be imported as is into most existing XAFS data analysis, spreadsheet, or data visualization programs. Along with a specification and a dictionary of metadata types, we provide an application-programming interface written in C and bindings for programming dynamic languages.

  18. XAFS Data Interchange: A single spectrum XAFS data file format

    NASA Astrophysics Data System (ADS)

    Ravel, B.; Newville, M.

    2016-05-01

    We propose a standard data format for the interchange of XAFS data. The XAFS Data Interchange (XDI) standard is meant to encapsulate a single spectrum of XAFS along with relevant metadata. XDI is a text-based format with a simple syntax which clearly delineates metadata from the data table in a way that is easily interpreted both by a computer and by a human. The metadata header is inspired by the format of an electronic mail header, representing metadata names and values as an associative array. The data table is represented as columns of numbers. This format can be imported as is into most existing XAFS data analysis, spreadsheet, or data visualization programs. Along with a specification and a dictionary of metadata types, we provide an application-programming interface written in C and bindings for programming dynamic languages.
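    A minimal reader for a file of this general shape (a '#'-prefixed, e-mail-style "Name: value" header followed by whitespace-separated numeric columns) might look like the sketch below. It is not a validating XDI implementation, and the field name in the usage comment is only an assumed example.

```python
import numpy as np

def read_header_and_columns(path):
    """Read a text file with an e-mail-style '#' header and a numeric column table.

    Header lines of the form '# Some.name: value' become a dict (the associative
    array of metadata); remaining non-comment lines are parsed as columns of numbers.
    (A sketch of the general XDI layout, not a full XDI reader.)
    """
    metadata, rows = {}, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            if line.startswith("#"):
                body = line.lstrip("#").strip()
                if ":" in body:
                    name, value = body.split(":", 1)
                    metadata[name.strip()] = value.strip()
            else:
                rows.append([float(tok) for tok in line.split()])
    return metadata, np.array(rows)

# Hypothetical usage on a file named 'cu_k_edge.xdi':
# meta, data = read_header_and_columns("cu_k_edge.xdi")
# print(meta.get("Element.symbol"), data.shape)
```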

  19. Active Reconfigurable Metamaterial Unit Cell Based on Non-Foster Elements

    DTIC Science & Technology

    2013-10-01

    Krois Ivan Bonic Aleksandar Kiricenko Damir Muha, University of Zagreb, Faculty of Electrical Engineering and Computing, Unska 3, Zagreb ... 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): University of Zagreb, Faculty of Electrical Engineering and Computing, Unska 3, Zagreb, HR-10000, Croatia. 14 October 2013. Distribution A: Approved for ...

  20. Question Generation via Overgenerating Transformations and Ranking

    DTIC Science & Technology

    2009-01-01

    School of Computer Science, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213, www.lti.cs.cmu.edu. © 2009, Michael Heilman and Noah A. ... 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Carnegie Mellon University, School of Computer Science, 5000 Forbes Ave, Pittsburgh, PA 15213 ... (1967), in particular those that view a question as a transformation of a canonical declarative sentence (Chomsky, 1973). In computational linguistics ...

  1. Footstep Planning on Uneven Terrain with Mixed-Integer Convex Optimization

    DTIC Science & Technology

    2014-08-01

    ORGANIZATION NAME(S) AND ADDRESS(ES): Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139 ... the MIT Energy Initiative, MIT CSAIL, and the DARPA Robotics Challenge. Robin Deits is with the Computer Science and Artificial Intelligence Laboratory.

  2. Development of New Generation of Multibody System Computer Software

    DTIC Science & Technology

    2012-04-12

    DEVELOPMENT OF NEW GENERATION OF MULTIBODY SYSTEM COMPUTER SOFTWARE. Ahmed A. Shabana, University of Illinois at Chicago; Paramsothy Jayakumar; Michael Letherwood. ... 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES ...

  3. Integration of the Execution Support System for the Computer-Aided Prototyping System (CAPS)

    DTIC Science & Technology

    1990-09-01

    SUPPORT SYSTEM FOR THE COMPUTER-AIDED PROTOTYPING SYSTEM (CAPS), by Frank V. Palazzo, September 1990. Thesis Advisor: Luqi. Approved for public release ... 6a. NAME OF PERFORMING ORGANIZATION: Computer Science Department (if applicable) ... 11. TITLE (Include Security Classification): Integration of the Execution Support System for the Computer-Aided Prototyping System (CAPS). 12. PERSONAL AUTHOR(S): Frank V. ...

  4. Update of Aircraft Profile Data for the Integrated Noise Model Computer Program. Volume 2. Appendix A: Aircraft Takeoff and Landing Profiles

    DTIC Science & Technology

    1992-03-01

    Appendix listings of takeoff profile data (headwind = 8 kt), dated 02-10-1992 and 06-24-1991 (pages A-21 and A-194): tables of aircraft IDs, aircraft and engine names, aircraft categories, and climb-segment entries (e.g., CLIMB ZERO, MAX CLIMB, and user-supplied weights such as 34033 LB and 34798 LB).

  5. DECOMP: a PDB decomposition tool on the web.

    PubMed

    Ordog, Rafael; Szabadka, Zoltán; Grolmusz, Vince

    2009-07-27

    The protein databank (PDB) contains high quality structural data for computational structural biology investigations. We have earlier described a fast tool (the decomp_pdb tool) for identifying and marking missing atoms and residues in PDB files. The tool also automatically decomposes PDB entries into separate files describing ligands and polypeptide chains. Here, we describe a web interface named DECOMP for the tool. Our program correctly identifies multi-monomer ligands, and the server also offers the preprocessed ligand-protein decomposition of the complete PDB for downloading (up to size: 5GB) AVAILABILITY: http://decomp.pitgroup.org.
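    The decomp_pdb tool itself is not shown here; the sketch below illustrates the decomposition idea with a plain parse of the fixed-column PDB format, separating ATOM records by chain and non-water HETATM records by (chain, residue name). The file name in the usage comment is hypothetical.

```python
from collections import defaultdict

def split_pdb(path):
    """Split a PDB file into polypeptide chains (ATOM records) and ligands
    (HETATM records, water excluded), keyed by chain ID / residue name.

    Relies on the fixed-column PDB layout: residue name in columns 18-20,
    chain identifier in column 22.
    """
    chains = defaultdict(list)
    ligands = defaultdict(list)
    with open(path) as fh:
        for line in fh:
            record = line[:6].strip()
            if record == "ATOM":
                chains[line[21]].append(line)
            elif record == "HETATM":
                resname = line[17:20].strip()
                if resname != "HOH":             # skip water
                    ligands[(line[21], resname)].append(line)
    return chains, ligands

# Hypothetical usage:
# chains, ligands = split_pdb("1abc.pdb")
# print(sorted(chains), sorted(ligands))
```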

  6. The Effects of the Uncertainty of Thermodynamic and Kinetic Properties on Nucleation and Evolution Kinetics of Cr-Rich Phase in Fe-Cr Alloys

    DTIC Science & Technology

    2012-12-01

    M. A.; Horstemeyer, M. F.; Gao, F.; Sun, X.; Khaleel, M. Scripta Materialia 2011, 64, 908. 80. Plimpton, S. Journal of Computational Physics ... 6. AUTHOR(S): Mark Tschopp, Fei Gao, and Xin Sun. 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): U.S. Army Research Laboratory, ATTN: RDRL-WMM-F, Aberdeen Proving Ground.

  7. Planar Inlet Design and Analysis Process (PINDAP)

    NASA Technical Reports Server (NTRS)

    Slater, John W.; Gruber, Christopher R.

    2005-01-01

    The Planar Inlet Design and Analysis Process (PINDAP) is a collection of software tools that allow the efficient aerodynamic design and analysis of planar (two-dimensional and axisymmetric) inlets. The aerodynamic analysis is performed using the Wind-US computational fluid dynamics (CFD) program. A major element in PINDAP is a Fortran 90 code named PINDAP that can establish the parametric design of the inlet and efficiently model the geometry and generate the grid for CFD analysis with design changes to those parameters. The use of PINDAP is demonstrated for subsonic, supersonic, and hypersonic inlets.
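    PINDAP's Fortran geometry and grid modules are not reproduced here; the sketch below shows the underlying idea, a parametric two-dimensional ramp/cowl geometry with a simple algebraic (linearly interpolated) structured grid, in Python/NumPy. The parameter names and default values are invented for illustration.

```python
import numpy as np

def planar_inlet_grid(ramp_angle_deg=10.0, length=1.0, cowl_height=0.4,
                      n_axial=41, n_vertical=21):
    """Build a structured grid between a straight compression ramp and a flat
    cowl by linear (algebraic) interpolation between the two walls.

    Returns arrays X, Y of shape (n_axial, n_vertical).
    """
    x = np.linspace(0.0, length, n_axial)
    y_ramp = x * np.tan(np.radians(ramp_angle_deg))   # lower wall rises with the ramp
    y_cowl = np.full_like(x, cowl_height)             # upper wall (cowl) is flat
    eta = np.linspace(0.0, 1.0, n_vertical)           # normalized wall-to-wall coordinate
    X = np.repeat(x[:, None], n_vertical, axis=1)
    Y = y_ramp[:, None] + eta[None, :] * (y_cowl - y_ramp)[:, None]
    return X, Y

X, Y = planar_inlet_grid()
print(X.shape, float(Y[:, 0].max()), float(Y[:, -1].max()))  # grid size, ramp crest, cowl height
```

    In a design loop of the kind the abstract describes, the parameters (ramp angle, cowl height, lengths) would be varied and the regenerated grid handed to the flow solver for each candidate geometry.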

  8. An assembler for the MOS Technology 6502 microprocessor as implemented in jolt (TM) and KIM-1 (TM)

    NASA Technical Reports Server (NTRS)

    Lilley, R. W.

    1976-01-01

    The design of low-cost, microcomputer-based navigation receivers and the assembler are described. The development of computer software for microprocessors is materially aided by the assembler program using mnemonic variable names. The flexibility of the environment provided by IBM's Virtual Machine Facility and the Conversational Monitor System makes convenient access to the assembler possible. The implementation of the assembler for the microprocessor chip serves part of the present need and forms a model for support of other microprocessors.
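    The report's assembler is not listed here; the sketch below is a toy two-pass assembler for a small subset of 6502 mnemonics, showing how symbolic labels and mnemonic names are resolved to opcodes and addresses. The opcode table covers only the instructions used in the example, operands are hexadecimal only, and the syntax is simplified relative to a real assembler.

```python
# Opcodes for a tiny subset of the 6502 instruction set.
OPCODES = {
    ("LDA", "imm"): 0xA9, ("LDX", "imm"): 0xA2,
    ("STA", "abs"): 0x8D, ("JMP", "abs"): 0x4C,
    ("INX", "imp"): 0xE8, ("NOP", "imp"): 0xEA, ("BRK", "imp"): 0x00,
}

def assemble(lines, origin=0x0200):
    """Two-pass assembly: pass 1 records label addresses, pass 2 emits bytes."""
    def size(operand):
        if operand is None:
            return 1
        return 2 if operand.startswith("#") else 3     # immediate vs absolute

    parsed, labels, pc = [], {}, origin
    for raw in lines:                                  # pass 1: collect labels
        text = raw.split(";")[0].strip()
        if not text:
            continue
        if text.endswith(":"):
            labels[text[:-1]] = pc
            continue
        parts = text.split(None, 1)
        mnemonic, operand = parts[0].upper(), (parts[1].strip() if len(parts) > 1 else None)
        parsed.append((mnemonic, operand))
        pc += size(operand)

    code = []
    for mnemonic, operand in parsed:                   # pass 2: emit machine code
        if operand is None:
            code.append(OPCODES[(mnemonic, "imp")])
        elif operand.startswith("#"):
            code += [OPCODES[(mnemonic, "imm")], int(operand[1:].lstrip("$"), 16)]
        else:
            addr = labels.get(operand)
            if addr is None:
                addr = int(operand.lstrip("$"), 16)
            code += [OPCODES[(mnemonic, "abs")], addr & 0xFF, addr >> 8]   # little-endian
    return bytes(code)

program = [
    "start:",
    "  LDA #$01     ; load accumulator with 1",
    "  STA $0400    ; store it to memory",
    "  JMP start    ; loop forever",
]
print(assemble(program).hex())   # -> 'a9018d00044c0002'
```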

  9. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale-up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages, of interest to evolutionary biology, are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives, on creating and building such images.
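    The "poor man's parallelization" described above can be sketched with nothing more than the Python standard library: whole programs run as separate processes, with a small thread pool acting as a single-node job scheduler. The commands below are placeholders (echo), not actual BLAST, PAML, or MAFFT invocations.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Placeholder command lines -- in practice these would be e.g. BLAST, PAML or
# MAFFT invocations, one per input file or per hypothesis to test.
jobs = [
    ["echo", "analysis for sample A"],
    ["echo", "analysis for sample B"],
    ["echo", "analysis for sample C"],
]

def run(cmd):
    """Run one whole program as a separate process and capture its output."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return cmd[-1], result.returncode, result.stdout.strip()

# Threads suffice here: the real work happens in the child processes, so this
# behaves like a miniature job scheduler on one node (or one BioNode VM).
with ThreadPoolExecutor(max_workers=3) as pool:
    for name, rc, out in pool.map(run, jobs):
        print(f"{name!r}: exit={rc} output={out!r}")
```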

  10. 76 FR 22682 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-22

    ...: Maintained in file folders and computer storage media. Retrievability: Retrieved by name and/or Social... folders and computer storage media.'' * * * * * System Manager(s) and address: Delete entry and replace... provide their full name, Social Security Number (SSN), any details which may assist in locating records...

  11. 76 FR 62813 - Pilot Program To Evaluate Proposed Proprietary Name Submissions; Public Meeting on Pilot Program...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-11

    ...] Pilot Program To Evaluate Proposed Proprietary Name Submissions; Public Meeting on Pilot Program Results... voluntary pilot program that enabled participating pharmaceutical firms to evaluate proposed proprietary... public meeting at the end of fiscal year 2011 to discuss the results of the pilot program, but the Agency...

  12. 77 FR 23491 - Notice of Submission of Proposed Information Collection to OMB; Continuum of Care Homeless...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-19

    ... Program, and changed to match the new inclusive program name created through the HEARTH Act. DATES... Occupancy Program, and changed to match the new inclusive program name created through the HEARTH Act...

  13. Network Information System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    1996-05-01

    The Network Information System (NWIS) was initially implemented in May 1996 as a system in which computing devices could be recorded so that unique names could be generated for each device. Since then the system has grown into an enterprise-wide information system which is integrated with other systems to provide the seamless flow of data through the enterprise. The system tracks data for two main entities: people and computing devices. For people, NWIS provides source information to the enterprise person data repository for select contractors and visitors; generates and tracks unique usernames and Unix user IDs for every individual granted cyber access; and tracks accounts for centrally managed computing resources, monitoring and controlling the reauthorization of the accounts in accordance with the DOE-mandated interval. For computing devices, NWIS generates unique names for all computing devices registered in the system; tracks the following information for each computing device: manufacturer, make, model, Sandia property number, vendor serial number, operating system and operating system version, owner, device location, amount of memory, amount of disk space, and level of support provided for the machine; tracks the hardware address for network cards; tracks the IP address registered to computing devices along with the canonical and alias names for each address; updates the Dynamic Domain Name Service (DDNS) for canonical and alias names; creates the configuration files for DHCP to control the DHCP ranges and allow access only to properly registered computers; tracks and monitors classified security plans for stand-alone computers; tracks the configuration requirements used to set up the machine; tracks the roles people have on machines (system administrator, administrative access, user, etc.); allows system administrators to track changes made on the machine (both hardware and software); and generates an adjustment history of changes on selected fields.

  14. Generating and using truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2012-01-01

    The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing. Program summary. Program title: TRQS Catalogue identifier: AEKA_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 7924 No. of bytes in distributed program, including test data, etc.: 88 651 Distribution format: tar.gz Programming language: Mathematica, C Computer: Requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and supporting a recent version of Mathematica Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit) RAM: Case dependent Classification: 4.15 Nature of problem: Generation of random density matrices. Solution method: Use of a physical quantum random number generator. Running time: Generating 100 random numbers takes about 1 second, generating 1000 random density matrices takes more than a minute.
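    TRQS itself is Mathematica/C code driven by the Quantis hardware; the sketch below shows one standard construction such packages rely on, a random density matrix drawn from the induced (Ginibre) measure, with NumPy's pseudo-random generator standing in for the hardware QRNG.

```python
import numpy as np

rng = np.random.default_rng()   # pseudo-random stand-in for the Quantis QRNG

def random_density_matrix(dim, rank=None):
    """Draw a random density matrix from the induced (Ginibre) measure:
    G has i.i.d. complex Gaussian entries and rho = G G† / Tr(G G†), which is
    Hermitian, positive semi-definite, and has unit trace.
    """
    rank = dim if rank is None else rank
    G = rng.normal(size=(dim, rank)) + 1j * rng.normal(size=(dim, rank))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

rho = random_density_matrix(4)
print(np.allclose(rho, rho.conj().T))               # Hermitian
print(round(np.trace(rho).real, 12))                # unit trace
print(bool(np.all(np.linalg.eigvalsh(rho) >= -1e-12)))  # positive semi-definite
```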

  15. MsSpec-1.0: A multiple scattering package for electron spectroscopies in material science

    NASA Astrophysics Data System (ADS)

    Sébilleau, Didier; Natoli, Calogero; Gavaza, George M.; Zhao, Haifeng; Da Pieve, Fabiana; Hatada, Keisuke

    2011-12-01

    We present a multiple scattering package to calculate the cross-section of various spectroscopies, namely photoelectron diffraction (PED), Auger electron diffraction (AED), X-ray absorption (XAS), low-energy electron diffraction (LEED) and Auger photoelectron coincidence spectroscopy (APECS). This package is composed of three main codes, computing respectively the cluster, the potential and the cross-section. In the latter case, in order to cover a range of energies as wide as possible, three different algorithms are provided to perform the multiple scattering calculation: full matrix inversion, series expansion or correlation expansion of the multiple scattering matrix. Numerous other small Fortran codes or bash/csh shell scripts are also provided to perform specific tasks. The cross-section code is built by the user from a library of subroutines using a makefile. Program summary. Program title: MsSpec-1.0 Catalogue identifier: AEJT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 504 438 No. of bytes in distributed program, including test data, etc.: 14 448 180 Distribution format: tar.gz Programming language: Fortran 77 Computer: Any Operating system: Linux, MacOs RAM: Bytes Classification: 7.2 External routines: Lapack (http://www.netlib.org/lapack/) Nature of problem: Calculation of the cross-section of various spectroscopies. Solution method: Multiple scattering. Running time: The test runs provided only take a few seconds to run.

  16. Measurement and Analysis of P2P IPTV Program Resource

    PubMed Central

    Chen, Xingshu; Wang, Haizhou; Zhang, Qi

    2014-01-01

    With the rapid development of P2P technology, P2P IPTV applications have received more and more attention. And program resource distribution is very important to P2P IPTV applications. In order to collect IPTV program resources, a distributed multi-protocol crawler is proposed. And the crawler has collected more than 13 million pieces of information of IPTV programs from 2009 to 2012. In addition, the distribution of IPTV programs is independent and incompact, resulting in chaos of program names, which obstructs searching and organizing programs. Thus, we focus on characteristic analysis of program resources, including the distributions of length of program names, the entropy of the character types, and hierarchy depth of programs. These analyses reveal the disorderly naming conventions of P2P IPTV programs. The analysis results can help to purify and extract useful information from chaotic names for better retrieval and accelerate automatic sorting of program and establishment of IPTV repository. In order to represent popularity of programs and to predict user behavior and popularity of hot programs over a period, we also put forward an analytical model of hot programs. PMID:24772008
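    The crawler and its 13-million-record corpus are not available here; the sketch below computes two of the statistics described, the name-length distribution and the Shannon entropy of character types per name, on a few made-up program names.

```python
import math
from collections import Counter

def char_type(ch):
    """Bucket a character into a coarse type, as used for the entropy statistic."""
    if ch.isdigit():
        return "digit"
    if ch.isalpha():
        return "ascii_letter" if ch.isascii() else "other_letter"
    return "symbol"

def type_entropy(name):
    """Shannon entropy (bits) of the character-type distribution of one name."""
    counts = Counter(char_type(c) for c in name)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Made-up program names standing in for crawled P2P IPTV entries.
names = ["CCTV-5 体育", "Movie_HD_2012", "news24", "!!LIVE!!-football"]
lengths = Counter(len(n) for n in names)
print("length distribution:", dict(lengths))
for n in names:
    print(f"{n!r}: character-type entropy = {type_entropy(n):.2f} bits")
```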

  17. 7 CFR 1209.40 - Programs, plans, and projects.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... requirements for quality control, grade standards, supply management programs, or other programs that would..., plan, or project, no reference to a brand name, trade name, or State or regional identification of any...

  18. A systemic approach for modeling biological evolution using Parallel DEVS.

    PubMed

    Heredia, Daniel; Sanz, Victorino; Urquia, Alfonso; Sandín, Máximo

    2015-08-01

    A new model for studying the evolution of living organisms is proposed in this manuscript. The proposed model is based on a non-neodarwinian systemic approach. The model is focused on considering several controversies and open discussions about modern evolutionary biology. Additionally, a simplification of the proposed model, named EvoDEVS, has been mathematically described using the Parallel DEVS formalism and implemented as a computer program using the DEVSLib Modelica library. EvoDEVS serves as an experimental platform to study different conditions and scenarios by means of computer simulations. Two preliminary case studies are presented to illustrate the behavior of the model and validate its results. EvoDEVS is freely available at http://www.euclides.dia.uned.es. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. A high performance system for molecular dynamics simulation of biomolecules using a special-purpose computer.

    PubMed

    Komeiji, Y; Yokoyama, H; Uebayasi, M; Taiji, M; Fukushige, T; Sugimoto, D; Takata, R; Shimizu, A; Itsukashi, K

    1996-01-01

    GRAPE (GRavity PipE) processors are special purpose computers for simulation of classical particles. The performance of MD-GRAPE, one of the GRAPEs developed for molecular dynamics, was investigated. The effective speed of MD-GRAPE was equivalent to approximately 6 Gflops. The precision of MD-GRAPE was good judging from the acceptable fluctuation of the total energy. Then a software named PEACH (Program for Energetic Analysis of bioCHemical molecules) was developed for molecular dynamics of biomolecules in combination with MD-GRAPE. Molecular dynamics simulation was performed for several protein-solvent systems with different sizes. Simulation of the largest system investigated (27,000 atoms) took only 5 sec/step. Thus, the PEACH-GRAPE system is expected to be useful in accurate and reliable simulation of large biomolecules.

  20. A GPU-paralleled implementation of an enhanced face recognition algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Liu, Xiyang; Shao, Shuai; Zan, Jiguo

    2013-03-01

    Face recognition algorithm based on compressed sensing and sparse representation is hotly argued in these years. The scheme of this algorithm increases recognition rate as well as anti-noise capability. However, the computational cost is expensive and has become a main restricting factor for real world applications. In this paper, we introduce a GPU-accelerated hybrid variant of face recognition algorithm named parallel face recognition algorithm (pFRA). We describe here how to carry out parallel optimization design to take full advantage of many-core structure of a GPU. The pFRA is tested and compared with several other implementations under different data sample size. Finally, Our pFRA, implemented with NVIDIA GPU and Computer Unified Device Architecture (CUDA) programming model, achieves a significant speedup over the traditional CPU implementations.

  1. [Software for performing a global phenotypic and genotypic nutritional assessment].

    PubMed

    García de Diego, L; Cuervo, M; Martínez, J A

    2013-01-01

    The nutritional assessment of a patient requires the simultaneous management of extensive information and a great number of databases, as both the process of nutrition and the clinical situation of the patient are analyzed. The introduction of computers in the nutritional area constitutes an extraordinary advance in the administration of nutrition information, providing a complete assessment of nutritional aspects in a quick and easy way. The objective was to develop a computer program that can be used as a tool for assessing the nutritional status of the patient, for the education of clinical staff, for epidemiological studies and for educational purposes. The work is based on a computer program which assists the health specialist in performing a full nutritional evaluation of the patient, through the registration and assessment of phenotypic and genotypic features. The application provides a nutritional prognosis based on anthropometric and biochemical parameters, images of states of malnutrition, questionnaires to characterize diseases, diagnostic criteria, identification of alleles associated with the development of specific metabolic illnesses and questionnaires of quality of life, for a customized intervention. The program includes, as part of the nutritional assessment of the patient, food intake analysis, design of diets and promotion of physical activity, introducing food frequency questionnaires, dietary recalls, healthy eating indexes, model diets, fitness tests, and recommendations, recalls and questionnaires of physical activity. The software is a computer program developed with Java Swing, using an SQLite database and some external libraries such as JFreeChart for plotting graphs. This newly designed software is composed of five blocks categorized into ten modules named: Patients, Anthropometry, Clinical History, Biochemistry, Dietary History, Diagnostic (with genetic make-up), Quality of life, Physical activity, Energy expenditure and Diets. Each module has a specific function which evaluates a different aspect of the nutritional status of the patient. UNyDIET is a global computer program, customizable and upgradeable, easy to use and versatile, aimed at health specialists, medical staff, dietitians, nutritionists, scientists and educators. This tool can be used as a working instrument in programs promoting health and in nutritional and clinical assessments, as well as in the evaluation of health care quality, in epidemiological studies, in nutrition intervention programs and in teaching. Copyright © AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.

  2. Computer program for analysis of hemodynamic response to head-up tilt test

    NASA Astrophysics Data System (ADS)

    Świątek, Eliza; Cybulski, Gerard; Koźluk, Edward; Piątkowska, Agnieszka; Niewiadomski, Wiktor

    2014-11-01

    The aim of this work was to create a computer program, written in the MATLAB environment, which enables the visualization and analysis of hemodynamic parameters recorded during a passive tilt test using the CNS Task Force Monitor System. The application was created to help in the assessment of the relationship between the values and dynamics of changes of the selected parameters and the risk of orthostatic syncope. The signal analysis included: R-R intervals (RRI), heart rate (HR), systolic blood pressure (sBP), diastolic blood pressure (dBP), mean blood pressure (mBP), stroke volume (SV), stroke index (SI), cardiac output (CO), cardiac index (CI), total peripheral resistance (TPR), total peripheral resistance index (TPRI), left ventricular ejection time (LVET) and thoracic fluid content (TFC). The program enables the user to visualize waveforms for a selected parameter and to perform smoothing with selected moving average parameters. It allows one to construct the graph of means for any range, and the Poincaré plot for a selected time range. The program automatically determines the average value of the parameter before tilt, its minimum and maximum value immediately after the change of position, and the times of their occurrence. It is possible to correct the automatically detected points manually. For the RR interval, it determines the acceleration index (AI) and the brake index (BI). It is possible to save calculated values to an XLS file with a name specified by the user. The application has a user-friendly graphical interface and can run on a computer that has no MATLAB software.
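
    Two of the quantities mentioned above lend themselves to a compact illustration: moving-average smoothing of a parameter trace and the Poincaré plot of successive R-R intervals. The sketch below computes them in Python (not the MATLAB application itself); SD1/SD2 are the standard Poincaré descriptors and are shown only as an example of what can be derived from the plot.

        import numpy as np

        def moving_average(x, window=5):
            """Smooth a parameter trace with a simple moving average."""
            return np.convolve(x, np.ones(window) / window, mode="valid")

        def poincare_descriptors(rri_ms):
            """SD1/SD2 of the Poincaré plot (RR_n versus RR_n+1)."""
            x, y = rri_ms[:-1], rri_ms[1:]
            sd1 = np.std((y - x) / np.sqrt(2), ddof=1)    # short-term variability
            sd2 = np.std((y + x) / np.sqrt(2), ddof=1)    # long-term variability
            return sd1, sd2

        rri = np.array([812, 805, 798, 820, 831, 790, 802, 815], dtype=float)
        print(moving_average(rri, 3))
        print(poincare_descriptors(rri))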

  3. A single network adaptive critic (SNAC) architecture for optimal control synthesis for a class of nonlinear systems.

    PubMed

    Padhi, Radhakant; Unnikrishnan, Nishant; Wang, Xiaohua; Balakrishnan, S N

    2006-12-01

    Even though dynamic programming offers an optimal control solution in a state feedback form, the method is overwhelmed by computational and storage requirements. Approximate dynamic programming implemented with an Adaptive Critic (AC) neural network structure has evolved as a powerful alternative technique that obviates the need for excessive computations and storage requirements in solving optimal control problems. In this paper, an improvement to the AC architecture, called the "Single Network Adaptive Critic (SNAC)" is presented. This approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and costate variables. The selection of this terminology is guided by the fact that it eliminates the use of one neural network (namely the action network) that is part of a typical dual network AC setup. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, lesser computational load and elimination of the approximation error associated with the eliminated network. In order to demonstrate these benefits and the control synthesis technique using SNAC, two problems have been solved with the AC and SNAC approaches and their computational performances are compared. One of these problems is a real-life Micro-Electro-Mechanical-system (MEMS) problem, which demonstrates that the SNAC technique is applicable to complex engineering systems.

  4. Cybersecurity: Utilizing Fusion Centers to Protect State, Local, Tribal, and Territorial Entities Against Cyber Threats

    DTIC Science & Technology

    2016-09-01

    Naval Postgraduate School, Monterey, CA 93943-5000. ...state- and local-level computer networks fertile ground for the cyber adversary. This research focuses on the threat to SLTT computer networks and how...institutions, and banking systems. The array of responsibilities and the cybersecurity threat landscape make state- and local-level computer networks fertile

  5. Thermohydrodynamic analysis of cryogenic liquid turbulent flow fluid film bearings, phase 2

    NASA Technical Reports Server (NTRS)

    Sanandres, Luis

    1994-01-01

    The Phase 2 (1994) Annual Progress Report presents two major report sections describing the thermal analysis of tilting- and flexure-pad hybrid bearings, and the unsteady flow and transient response of a point mass rotor supported on fluid film bearings. A literature review on the subject of two-phase flow in fluid film bearings and part of the proposed work for 1995 are also included. The programs delivered at the end of 1994 are named hydroflext and hydrotran. Both codes are fully compatible with the hydrosealt (1993) program. The new programs retain the same calculating options of hydrosealt plus the added bearing geometries, and unsteady flow and transient forced response. Refer to the hydroflext & hydrotran User's Manual and Tutorial for basic information on the analysis and instructions to run the programs. The Examples Handbook contains the test bearing cases along with comparisons with experimental data or published analytical values. The following major tasks were completed in 1994 (Phase 2): (1) extension of the thermohydrodynamic analysis and development of computer program hydroflext to model various bearing geometries, namely, tilting-pad hydrodynamic journal bearings, flexure-pad cylindrical bearings (hydrostatic and hydrodynamic), and cylindrical pad bearings with a simple elastic matrix (ideal foil bearings); (2) improved thermal model including radial heat transfer through the bearing stator; (3) calculation of the unsteady bulk-flow field in fluid film bearings and the transient response of a point mass rotor supported on bearings; and (4) a literature review on the subject of two-phase flows and homogeneous-mixture flows in thin-film geometries.

  6. User's guide to HYPOINVERSE-2000, a Fortran program to solve for earthquake locations and magnitudes

    USGS Publications Warehouse

    Klein, Fred W.

    2002-01-01

    Hypoinverse is a computer program that processes files of seismic station data for an earthquake (such as P-wave arrival times and seismogram amplitudes and durations) into earthquake locations and magnitudes. It is one of a long line of similar USGS programs including HYPOLAYR (Eaton, 1969), HYPO71 (Lee and Lahr, 1972), and HYPOELLIPSE (Lahr, 1980). If you are new to Hypoinverse, you may want to start by glancing at the section “SOME SIMPLE COMMAND SEQUENCES” to get a feel for some simple sessions. This document is essentially an advanced user’s guide, and reading it sequentially will probably plunge the reader into more detail than he or she needs. Every user must have a crust model, station list and phase data input files, and glancing at these sections is a good place to begin. The program has many options because it has grown over the years to meet the needs of one of the largest seismic networks in the world, but small networks with just a few stations do use the program and can ignore most of the options and commands. History and availability. Hypoinverse was originally written for the Eclipse minicomputer in 1978 (Klein, 1978). A revised version for VAX and Pro-350 computers (Klein, 1985) was later expanded to include multiple crustal models and other capabilities (Klein, 1989). This current report documents the expanded Y2000 version and it supersedes the earlier documents. It serves as a detailed user's guide to the current version running on unix and VAX-alpha computers, and to the version supplied with the Earthworm earthquake digitizing system. Fortran-77 source code (Sun and VAX compatible) and copies of this documentation are available via anonymous ftp from computers in Menlo Park. At present, the computer is swave.wr.usgs.gov and the directory is /ftp/pub/outgoing/klein/hyp2000. If you are running Hypoinverse on one of the Menlo Park EHZ or NCSN unix computers, the executable currently is ~klein/hyp2000/hyp2000. New features. The Y2000 version of Hypoinverse includes all of the previous capabilities, but adds Y2000 formats to those defined earlier. In most cases, the new formats add 2 digits to the year field to accommodate the century. Other fields are sometimes rearranged or expanded to accommodate a better field order. The Y2000 formats are invoked with the “200” command. When the Y2000 flag is turned on, all files are read and written in the new format and there is no mixing of format types in a single run. Some formats without a date field, like station files, have not changed. A separate program called 2000CONV has been written to convert old formats to new. Other new features, like expanded station names, calculating amplitude magnitudes from a variety of digital seismometers, station history files, interactive earthquake processing, and locations from CUSP (Caltech USGS Seismic Processing) binary files have been added. General features. Hypoinverse will locate any number of events in an input file, which can be in one of several different formats. Any or all of printout, summary or archive output may be produced. Hypoinverse is driven by user commands. The various commands define input and output files, set adjustable parameters, and solve for locations of a file of earthquake data using the parameters and files currently set. It is both interactive and "batch" in that commands may be executed either from the keyboard or from a file. You execute the commands in a file by typing @filename at the Hypoinverse prompt.
Users may either supply parameters on the command line, or omit them and be prompted interactively. The current parameter values are displayed and may be taken as defaults by pressing just the RETURN key after the prompt. This makes the program very easy to use, provided you can remember the names of the commands. Combining commands with and without their required parameters into a command file permits a variety of customized procedures such as automatic input of crustal model and station data, but prompting for a different phase file each time. All commands are 3 letters long and most require one or more parameters or file names. If they appear on a line with a command, character strings such as filenames must be enclosed in apostrophes (single quotes). Appendix 1 gives this and other free-format rules for supplying parameters, which are parsed in Fortran. When several parameters are required following a command, any of them may be omitted by replacing them with null fields (see appendix 1). A null field leaves that parameter unchanged from its current or default value. When you start HYPOINVERSE, default values are in effect for all parameters except file names. Hypoinverse is a complicated program with many features and options. Many of these "advanced" or seldom-used features are documented here, but are more detailed than a typical user needs to read about when first starting with the program. I have put some of this material in smaller type so that a first-time user can concentrate on the more important information.
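
    The command conventions described above (three-letter commands, quoted file names, null fields that keep the current value) can be mimicked in a few lines. The sketch below is an illustrative Python parser, not part of the Fortran Hypoinverse source, and the commands in its table are placeholders rather than the program's actual command set.

        import shlex

        params = {"POS": 1.78, "CRH": "default.crh", "PHS": None}   # hypothetical commands

        def execute(line):
            tokens = shlex.split(line)                 # honors quoted file names
            cmd = tokens[0][:3].upper()                # commands are three letters long
            if len(tokens) == 1 or tokens[1] == "":    # null field: keep the current value
                print(f"{cmd} = {params[cmd]}")        # an interactive prompt would go here
            else:
                params[cmd] = tokens[1]
            return params[cmd]

        execute("phs 'eq.arc'")   # set a file-name parameter
        execute("pos")            # display the current value and leave it unchanged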

  7. Psychology or Psychological Science?: A Survey of Graduate Psychology Faculty Regarding Program Names

    ERIC Educational Resources Information Center

    Collisson, Brian; Rusbasan, David

    2018-01-01

    The question of renaming graduate psychology programs to psychological science is a timely and contentious issue. To better understand why some programs, but not others, are changing names, we surveyed chairpersons (Study 1) and faculty (Study 2) within graduate psychology and psychological science programs. Within psychology programs, a name…

  8. Programs for Deaf-Blind Children and Adults.

    ERIC Educational Resources Information Center

    American Annals of the Deaf, 1995

    1995-01-01

    This report of the annual survey of programs for deaf-blind children and adults lists, by state, programs for deaf-blind children and youth, Helen Keller Centers for deaf-blind youth and adults, and programs for training teachers of deaf-blind students. Provided are program names, addresses, telephone numbers, and names of directors. (DB)

  9. Auto-tuning system for NMR probe with LabView

    NASA Astrophysics Data System (ADS)

    Quen, Carmen; Mateo, Olivia; Bernal, Oscar

    2013-03-01

    The typical manual NMR-tuning method is not suitable for broadband spectra spanning several megahertz linewidths. Among the main problems encountered during manual tuning are pulse-power reproducibility, baselines, and transmission line reflections, to name a few. We present the design of an auto-tuning system using the graphical programming language LabVIEW to minimize these problems. The program is designed to analyze the detected power signal of an antenna near the NMR probe and use this analysis to automatically tune the sample coil to match the impedance of the spectrometer (50 Ω). The tuning capacitors of the probe are controlled by a stepper motor through a LabVIEW/computer interface. Our program calculates the area of the power signal as an indicator to control the motor, so that disconnecting the coil to tune it through a network analyzer is unnecessary. Work supported by NSF-DMR 1105380.
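
    The control idea can be stated in a few lines: step the capacitor motor so as to shrink the area under the reflected-power signal, reversing direction when a step makes the match worse. The Python sketch below illustrates that greedy loop only; measure_power_area() and step_motor() are hypothetical stand-ins for the LabVIEW/hardware interface.

        def auto_tune(measure_power_area, step_motor, max_steps=200):
            """Greedy descent on the reflected-power area (smaller = better matched)."""
            best = measure_power_area()
            direction, failures = +1, 0
            for _ in range(max_steps):
                step_motor(direction)
                area = measure_power_area()
                if area < best:                   # improved: keep moving this way
                    best, failures = area, 0
                else:                             # worse: undo the step, try the other way
                    step_motor(-direction)
                    direction = -direction
                    failures += 1
                    if failures >= 2:             # neither direction helps: call it tuned
                        break
            return best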

  10. Auto-tuning for NMR probe using LabVIEW

    NASA Astrophysics Data System (ADS)

    Quen, Carmen; Pham, Stephanie; Bernal, Oscar

    2014-03-01

    The typical manual NMR-tuning method is not suitable for broadband spectra spanning several megahertz linewidths. Among the main problems encountered during manual tuning are pulse-power reproducibility, baselines, and transmission line reflections, to name a few. We present the design of an auto-tuning system using the graphical programming language LabVIEW to minimize these problems. The program uses a simplified model of the NMR probe conditions near perfect tuning to mimic the tuning process and predict the position of the capacitor shafts needed to achieve the desired impedance. The tuning capacitors of the probe are controlled by stepper motors through a LabVIEW/computer interface. Our program calculates the effective capacitance needed to tune the probe and provides control parameters to advance the motors in the right direction. The impedance reading of a network analyzer can be used to correct the model parameters in real time for feedback control.

  11. What can the programming language Rust do for astrophysics?

    NASA Astrophysics Data System (ADS)

    Blanco-Cuaresma, Sergi; Bolmont, Emeline

    2017-06-01

    The astrophysics community uses different tools for computational tasks such as complex systems simulations, radiative transfer calculations or big data processing. Programming languages like Fortran, C or C++ are commonly present in these tools and, generally, the language choice was made based on the need for performance. However, this comes at a cost: safety. For instance, a common source of error is access to invalid memory regions, which produces random execution behaviors and affects the scientific interpretation of the results. In 2015, Mozilla Research released the first stable version of a new programming language named Rust. Many features make this new language attractive to the scientific community: it is open source, and it guarantees memory safety while offering zero-cost abstractions. We explore the advantages and drawbacks of Rust for astrophysics by re-implementing the fundamental parts of Mercury-T, a Fortran code that simulates the dynamical and tidal evolution of multi-planet systems.

  12. An Overview: NASA LeRC Structures Programs

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin V.

    1998-01-01

    A workshop on National Structures Programs was held, jointly sponsored by the AIAA Structures Technical Committee, the University of Virginia's Center for Advanced Computational Technology and NASA. The objectives of the workshop were to: provide a forum for discussion of current Government-sponsored programs in the structures area; identify high-potential research areas for future aerospace systems; and initiate suitable interaction mechanisms with the managers of structures programs. The presentations covered structures programs at NASA, DOD (AFOSR, ONR, ARO and DARPA), and DOE. This publication is the presentation of the Structures and Acoustics Division of the NASA Lewis Research Center. The Structures and Acoustics Division traces its genesis to 1943. It is responsible for NASA research related to rotating structures and structural hot sections of both airbreathing and rocket engines. The work of the division encompasses, but is not limited to, aeroelasticity, structural life prediction and reliability, fatigue and fracture, mechanical components such as bearings, gears, and seals, and aeroacoustics. These programs are discussed and the names of responsible individuals are provided for future reference.

  13. Distributed Name Servers: Naming and Caching in Large Distributed Computing Environments

    DTIC Science & Technology

    1985-12-01

    transmission rate of the communication medium, transmission over a 56K bps line costs approximately 54r, and similarly, communication over a 9.6K...memories for modern computer systems attempt to maximize the hit ratio for a fixed-size cache by utilizing intelligent cache replacement algorithms

  14. Accurate construction of consensus genetic maps via integer linear programming.

    PubMed

    Wu, Yonghui; Close, Timothy J; Lonardi, Stefano

    2011-01-01

    We study the problem of merging genetic maps, when the individual genetic maps are given as directed acyclic graphs. The computational problem is to build a consensus map, which is a directed graph that includes and is consistent with all (or, the vast majority of) the markers in the input maps. However, when markers in the individual maps have ordering conflicts, the resulting consensus map will contain cycles. Here, we formulate the problem of resolving cycles in the context of a parsimonious paradigm that takes into account two types of errors that may be present in the input maps, namely, local reshuffles and global displacements. The resulting combinatorial optimization problem is, in turn, expressed as an integer linear program. A fast approximation algorithm is proposed, and an additional speedup heuristic is developed. Our algorithms were implemented in a software tool named MERGEMAP which is freely available for academic use. An extensive set of experiments shows that MERGEMAP consistently outperforms JOINMAP, which is the most popular tool currently available for this task, both in terms of accuracy and running time. MERGEMAP is available for download at http://www.cs.ucr.edu/~yonghui/mgmap.html.
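
    The ordering conflicts mentioned above are easy to picture: if the marker orders of two maps are merged into one directed "precedes" graph, any disagreement shows up as a cycle. The toy sketch below demonstrates that detection step only; it is not MERGEMAP, which goes on to resolve such cycles with an integer linear program.

        from collections import defaultdict

        map1 = ["m1", "m2", "m3", "m4"]          # marker order on map 1
        map2 = ["m1", "m3", "m2", "m4"]          # map 2 swaps m2 and m3

        graph = defaultdict(set)
        for order in (map1, map2):
            for a, b in zip(order, order[1:]):   # edge a -> b means "a precedes b"
                graph[a].add(b)

        def has_cycle(graph):
            WHITE, GRAY, BLACK = 0, 1, 2
            color = defaultdict(int)
            def dfs(u):
                color[u] = GRAY
                for v in graph[u]:
                    if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                        return True
                color[u] = BLACK
                return False
            return any(color[u] == WHITE and dfs(u) for u in list(graph))

        print(has_cycle(graph))   # True: the m2/m3 ordering conflict must be resolved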

  15. Programs for Deaf-Blind Children and Adults.

    ERIC Educational Resources Information Center

    American Annals of the Deaf, 1999

    1999-01-01

    This directory of programs for deaf-blind children and adults lists program name, address, telephone numbers, e-mail address, Web site, and administrator name. The directory also lists, with similar information, Helen Keller Centers for Deaf-Blind Youth and Adults, and programs for training teachers of deaf-blind students. (DB)

  16. Testing an automated method to estimate ground-water recharge from streamflow records

    USGS Publications Warehouse

    Rutledge, A.T.; Daniel, C.C.

    1994-01-01

    The computer program, RORA, allows automated analysis of streamflow hydrographs to estimate ground-water recharge. Output from the program, which is based on the recession-curve-displacement method (often referred to as the Rorabaugh method, for whom the program is named), was compared to estimates of recharge obtained from a manual analysis of 156 years of streamflow record from 15 streamflow-gaging stations in the eastern United States. Statistical tests showed that there was no significant difference between paired estimates of annual recharge by the two methods. Tests of results produced by the four workers who performed the manual method showed that results can differ significantly between workers. Twenty-two percent of the variation between manual and automated estimates could be attributed to having different workers perform the manual method. The program RORA will produce estimates of recharge equivalent to estimates produced manually, greatly increase the speed of analysis, and reduce the subjectivity inherent in manual analysis.
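
    A small, self-contained piece of the underlying recession analysis can illustrate what such programs automate. The sketch below (not the RORA code) estimates the recession index K, the time in days for ground-water discharge to decline through one log cycle, by fitting log10 of streamflow against time during a recession period.

        import numpy as np

        days = np.arange(10)                          # days since the recession began
        q = 100.0 * 10 ** (-days / 45.0)              # synthetic flow with K = 45 days/log cycle

        slope, _ = np.polyfit(days, np.log10(q), 1)   # fit log10(Q) versus time
        K = -1.0 / slope
        print(round(K, 1))                            # ~45.0 days per log cycle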

  17. IOPA: I/O-aware parallelism adaption for parallel programs

    PubMed Central

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei

    2017-01-01

    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads. PMID:28278236
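
    The adaptive idea can be sketched independently of the IOPA interface: measure throughput at a given I/O thread count and keep adding threads only while the measurement improves. The Python sketch below is an illustration of that feedback, not the IOPA API.

        import time
        from concurrent.futures import ThreadPoolExecutor

        def run_batch(io_task, chunks, n_threads):
            """Run one batch of I/O work and return the measured throughput."""
            start = time.perf_counter()
            with ThreadPoolExecutor(max_workers=n_threads) as pool:
                list(pool.map(io_task, chunks))
            return len(chunks) / (time.perf_counter() - start)   # chunks per second

        def pick_thread_count(io_task, chunks, max_threads=16):
            best_n, best_rate = 1, run_batch(io_task, chunks, 1)
            n = 2
            while n <= max_threads:
                rate = run_batch(io_task, chunks, n)
                if rate <= best_rate * 1.05:      # under 5% gain: the I/O system is saturated
                    break
                best_n, best_rate = n, rate
                n *= 2
            return best_n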

  18. IOPA: I/O-aware parallelism adaption for parallel programs.

    PubMed

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei

    2017-01-01

    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads.

  19. 75 FR 24970 - FBI Records Management Division National Name Check Program Section User Fees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-06

    ... Division National Name Check Program Section User Fees AGENCY: Federal Bureau of Investigation (FBI), Justice. ACTION: Notice. SUMMARY: This notice establishes the user fee schedule for federal agencies... user fees for federal agencies requesting noncriminal name-based background checks of the Central...

  20. Close-Call Action Log Form

    NASA Technical Reports Server (NTRS)

    Spuler, Linda M.; Ford, Patricia K.; Skeete, Darren C.; Hershman, Scot; Raviprakash, Pushpa; Arnold, John W.; Tran, Victor; Haenze, Mary Alice

    2005-01-01

    "Close Call Action Log Form" ("CCALF") is the name of both a computer program and a Web-based service provided by the program for creating an enhanced database of close calls (in the colloquial sense of mishaps that were avoided by small margins) assigned to the Center Operations Directorate (COD) at Johnson Space Center. CCALF provides a single facility for on-line collaborative review of close calls. Through CCALF, managers can delegate responses to employees. CCALF utilizes a pre-existing e-mail system to notify managers that there are close calls to review, but eliminates the need for the prior practices of passing multiple e-mail messages around the COD, then collecting and consolidating them into final responses: CCALF now collects comments from all responders for incorporation into reports that it generates. Also, whereas it was previously necessary to manually calculate metrics (e.g., numbers of maintenance-work orders necessitated by close calls) for inclusion in the reports, CCALF now computes the metrics, summarizes them, and displays them in graphical form. The reports and all pertinent information used to generate the reports are logged, tracked, and retained by CCALF for historical purposes.

  1. Application of a soft computing technique in predicting the percentage of shear force carried by walls in a rectangular channel with non-homogeneous roughness.

    PubMed

    Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein

    2016-01-01

    Two new soft computing models, namely genetic programming (GP) and a genetic artificial algorithm (GAA) neural network (a combination of modified genetic algorithm and artificial neural network methods), were developed in order to predict the percentage of shear force carried by the walls in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the effectiveness of the independent parameters in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP program, determined to be the best model, and five equations obtained in prior research. The GP model, with the lowest error values (root mean square error (RMSE) of 0.0515), performed best compared with the other equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.

  2. Plasma separation process. Betacell (BCELL) code, user's manual

    NASA Astrophysics Data System (ADS)

    Taherzadeh, M.

    1987-11-01

    The emergence of clearly defined applications for small or large amounts of long-life, reliable power has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the Plasma Separation Program (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power-generating devices. A computer code named BCELL has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the maximum efficiency of the betacell energy device, the degradation due to the emitting source radiation, and source/cell lifetime power-reduction processes. Additionally, a comparison is made between Schottky and PN-junction devices for betacell battery design purposes. Computer runs have been made to determine the JV distribution function and the upper limit of the power generated by the betacell for specified energy sources. A Ni beta-emitting radioisotope was used as the energy source, and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a promethium source are also given for comparison.
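
    A much-simplified version of the current-voltage calculation gives the flavor of what such a code computes. In the sketch below the beta flux is treated as a constant generation current feeding an ideal diode, and the voltage is swept to find the maximum power point; all parameter values are assumed for illustration and are not taken from BCELL.

        import numpy as np

        q_over_kT = 1.0 / 0.02585        # 1/(kT/q) near room temperature, in 1/V
        I_beta = 1.0e-6                  # assumed beta-generated current, A
        I_0 = 1.0e-12                    # assumed diode saturation current, A

        V = np.linspace(0.0, 0.4, 4001)
        I = I_beta - I_0 * (np.exp(q_over_kT * V) - 1.0)   # net current delivered to the load
        P = I * V
        k = np.argmax(P)
        print(f"Vmp = {V[k]:.3f} V, Pmax = {P[k] * 1e6:.3f} microwatts")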

  3. Software for Displaying Data from Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Powell, Mark; Backers, Paul; Norris, Jeffrey; Vona, Marsette; Steinke, Robert

    2003-01-01

    Science Activity Planner (SAP) DownlinkBrowser is a computer program that assists in the visualization of processed telemetric data [principally images, image cubes (that is, multispectral images), and spectra] that have been transmitted to Earth from exploratory robotic vehicles (rovers) on remote planets. It is undergoing adaptation to (1) the Field Integrated Design and Operations (FIDO) rover (a prototype Mars-exploration rover operated on Earth as a test bed) and (2) the Mars Exploration Rover (MER) mission. This program has evolved from its predecessor - the Web Interface for Telescience (WITS) software - and surpasses WITS in the processing, organization, and plotting of data. SAP DownlinkBrowser creates Extensible Markup Language (XML) files that organize data files, on the basis of content, into a sortable, searchable product database, without the overhead of a relational database. The data-display components of SAP DownlinkBrowser (descriptively named ImageView, 3DView, OrbitalView, PanoramaView, ImageCubeView, and SpectrumView) are designed to run in a memory footprint of at least 256MB on computers that utilize the Windows, Linux, and Solaris operating systems.
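
    The product-database idea is simple enough to sketch: a flat XML index carries one entry per downlinked file, and viewers sort or filter it without a relational database. The Python sketch below illustrates the pattern; the element and attribute names are made up and are not the SAP DownlinkBrowser schema.

        import xml.etree.ElementTree as ET

        products = [
            {"file": "sol042_pancam_01.img", "type": "image",    "sol": "42"},
            {"file": "sol042_spec_07.dat",   "type": "spectrum", "sol": "42"},
        ]

        root = ET.Element("productIndex")
        for p in products:
            ET.SubElement(root, "product", attrib=p)     # one entry per data product

        ET.ElementTree(root).write("index.xml", xml_declaration=True, encoding="utf-8")

        # A viewer can later filter the index directly, with no database server:
        spectra = [e.get("file") for e in ET.parse("index.xml").getroot()
                   if e.get("type") == "spectrum"]
        print(spectra)                                   # ['sol042_spec_07.dat']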

  4. Akamai Internship Program

    DTIC Science & Technology

    2015-04-17

    AFRL-OSR-VA-TR-2015-0094: Akamai Internship Program. Lisa Hunter, University of Hawaii Systems, 2530 Dole St., Honolulu, HI 96822-2309. Final Report, 04/17/2015. Distribution A.

  5. A computer program incorporating Pitzer's equations for calculation of geochemical reactions in brines

    USGS Publications Warehouse

    Plummer, Niel; Parkhurst, D.L.; Fleming, G.W.; Dunkle, S.A.

    1988-01-01

    The program named PHRQPITZ is a computer code capable of making geochemical calculations in brines and other electrolyte solutions to high concentrations using the Pitzer virial-coefficient approach for activity-coefficient corrections. Reaction-modeling capabilities include calculation of (1) aqueous speciation and mineral-saturation index, (2) mineral solubility, (3) mixing and titration of aqueous solutions, (4) irreversible reactions and mineral-water mass transfer, and (5) reaction path. The computed results for each aqueous solution include the osmotic coefficient, water activity, mineral saturation indices, mean activity coefficients, total activity coefficients, and scale-dependent values of pH, individual-ion activities and individual-ion activity coefficients. A database of Pitzer interaction parameters is provided at 25 C for the system Na-K-Mg-Ca-H-Cl-SO4-OH-HCO3-CO3-CO2-H2O, and is extended to include largely untested literature data for Fe(II), Mn(II), Sr, Ba, Li, and Br, with provision for calculations at temperatures other than 25 C. An extensive literature review of published Pitzer interaction parameters for many inorganic salts is given. Also described is an interactive input code for PHRQPITZ called PITZINPT. (USGS)
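
    For a single 1:1 electrolyte the Pitzer approach reduces to a short formula, which gives a feel for what the code evaluates ion by ion. The sketch below computes the mean activity coefficient of NaCl at 25 C from the standard virial expansion; it is an illustration using literature parameter values, not the PHRQPITZ code or its database.

        import numpy as np

        A_phi, b, alpha = 0.392, 1.2, 2.0                  # Debye-Huckel slope and Pitzer constants
        beta0, beta1, C_phi = 0.0765, 0.2664, 0.00127      # published parameters for NaCl

        def ln_gamma_pm(m):
            """ln of the mean activity coefficient of a 1:1 salt at molality m."""
            I = m                                          # ionic strength of a 1:1 salt
            sI = np.sqrt(I)
            f_gamma = -A_phi * (sI / (1 + b * sI) + (2 / b) * np.log(1 + b * sI))
            B_gamma = 2 * beta0 + (2 * beta1 / (alpha**2 * I)) * (
                1 - (1 + alpha * sI - alpha**2 * I / 2) * np.exp(-alpha * sI))
            return f_gamma + m * B_gamma + 1.5 * m**2 * C_phi

        print(round(float(np.exp(ln_gamma_pm(1.0))), 3))   # ~0.655, close to the measured ~0.657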

  6. An Interactive Software for Conceptual Wing Flutter Analysis and Parametric Study

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1996-01-01

    An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate the flutter instability boundary of a flexible cantilever wing, when well-defined structural and aerodynamic data are not available, and then study the effect of change in Mach number, dynamic pressure, torsional frequency, sweep, mass ratio, aspect ratio, taper ratio, center of gravity, and pitch inertia, to guide the development of the concept. The software was developed for Macintosh or IBM compatible personal computers, on MathCad application software with integrated documentation, graphics, data base and symbolic mathematics. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on torsional stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch inertia radius of gyration. The parametric plots were compiled in a Vought Corporation report from a vast data base of past experiments and wind-tunnel tests. The computer program was utilized for flutter analysis of the outer wing of a Blended-Wing-Body concept, proposed by McDonnell Douglas Corp. Using a set of assumed data, preliminary flutter boundary and flutter dynamic pressure variation with altitude, Mach number and torsional stiffness were determined.

  7. APPLE - An aeroelastic analysis system for turbomachines and propfans

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Bakhle, Milind A.; Srivastava, R.; Mehmed, Oral

    1992-01-01

    This paper reviews aeroelastic analysis methods for propulsion elements (advanced propellers, compressors and turbines) being developed and used at NASA Lewis Research Center. These aeroelastic models include both structural and aerodynamic components. The structural models include the typical section model, the beam model with and without disk flexibility, and the finite element blade model with plate bending elements. The aerodynamic models are based on the solution of equations ranging from the two-dimensional linear potential equation for a cascade to the three-dimensional Euler equations for multi-blade configurations. Typical results are presented for each aeroelastic model. Suggestions for further research are indicated. All the available aeroelastic models and analysis methods are being incorporated into a unified computer program named APPLE (Aeroelasticity Program for Propulsion at LEwis).

  8. 34 CFR 668.6 - Reporting and disclosure requirements for programs that prepare students for gainful employment...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Classification of Instructional Program (CIP) code of that program; and (C) If the student completed a program during the award year— (1) The name and CIP code of that program, and the date the student completed the... program, by name and CIP code, offered by the institution under § 668.8(c)(3) or (d), the total number of...

  9. 34 CFR 668.6 - Reporting and disclosure requirements for programs that prepare students for gainful employment...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Classification of Instructional Program (CIP) code of that program; and (C) If the student completed a program during the award year— (1) The name and CIP code of that program, and the date the student completed the... program, by name and CIP code, offered by the institution under § 668.8(c)(3) or (d), the total number of...

  10. 34 CFR 668.6 - Reporting and disclosure requirements for programs that prepare students for gainful employment...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Classification of Instructional Program (CIP) code of that program; and (C) If the student completed a program during the award year— (1) The name and CIP code of that program, and the date the student completed the... program, by name and CIP code, offered by the institution under § 668.8(c)(3) or (d), the total number of...

  11. 34 CFR 668.6 - Reporting and disclosure requirements for programs that prepare students for gainful employment...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Classification of Instructional Program (CIP) code of that program; and (C) If the student completed a program during the award year— (1) The name and CIP code of that program, and the date the student completed the... program, by name and CIP code, offered by the institution under § 668.8(c)(3) or (d), the total number of...

  12. Spaceborne Processor Array

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor- memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  13. Virtual university applied to telesurgery: from teleeducation to telemanipulation.

    PubMed

    Marescaux, J; Soler, L; Mutter, D; Leroy, J; Vix, M; Koehl, C; Clément, J M

    2000-01-01

    PROBLEM/BACKGROUND: In order to improve patient care through minimally invasive surgery (MIS), we perfected a Virtual TeleSurgical University that allows for teleeducation, teleconcertation, surgical planning and telemanipulation through new virtual reality and multimedia systems. The organization of this innovative school was federated around three major research programs. First, the TESUS program focused on the teletransmission of medical information, allowing for videoconferencing around the world and telementoring. Next, the WeBS-Surg program is a multimedia continuous surgical education system on the internet that allows for teleeducation and teleconcertation between world experts in MIS. Then, the MASTER program (Minimal Access Surgery by Telecommunications and Robotics) enabled the development of the third-millennium operating room. It included virtual reality systems that automatically delineate the anatomical and pathological structures of a patient from his or her CT scan, and that allow for interactive surgical planning and force-feedback simulation. It also included a telesurgical robot named Zeus, controlled by surgeons through a telemanipulation system. Tests and validation show that all these systems improved all steps of the surgical procedure: preoperatively, owing to better continuous education and computer-assisted surgical planning, and peroperatively, owing to teleconcertation, telementoring and telemanipulation systems. Revolutionary tools for learning, planning and performing minimally invasive surgery are already available. These tools represent the first prototype of the computer-assisted telerobotic surgery that will be the future of surgery.

  14. Effects of Concreteness and Contiguity on Learning from Computer-Based Reference Maps

    ERIC Educational Resources Information Center

    Srinivasan, Sribhagyam; Lewis, Daphne D.; Crooks, Steven M.

    2006-01-01

    Today's technology has reached new heights that have not been fully implemented. One of the areas where technology has not yet reached its full potential is in education. This study examined the effects of concreteness of location names and contiguity of location names with textual information on learning from computer-based reference maps. The…

  15. Computer-Mediated Training Tools to Enhance Joint Task Force Cognitive Leadership Skills

    DTIC Science & Technology

    2007-04-01

    ...University); and Barclay Lewis (American Systems) ...Gaming Platform: Decisive Action for Training ... Performance Metrics ... Figure 15: Automated Performance Measurement System ... COMPUTER-MEDIATED TRAINING

  16. Distance Learning: A Way of Life-Long Learning

    DTIC Science & Technology

    2005-09-01

    promise of future benefits. Subject terms: training, educational technology, distributed learning, distance learning, collaboration, online instruction..."...knowledge." - Aristotle. Introduction: Modern learning technology assumes various names: distance learning, distributed training, computer-based...training, web-based learning, or advanced distributed learning. No matter the name, the basic concept is using computer technology for instruction with no

  17. Computational Methodology for Absolute Calibration Curves for Microfluidic Optical Analyses

    PubMed Central

    Chang, Chia-Pin; Nagel, David J.; Zaghloul, Mona E.

    2010-01-01

    Optical fluorescence and absorption are two of the primary techniques used for analytical microfluidics. We provide a thorough yet tractable method for computing the performance of diverse optical micro-analytical systems. Sample sizes range from nano- to many micro-liters and concentrations from nano- to milli-molar. Equations are provided to trace quantitatively the flow of the fundamental entities, namely photons and electrons, and the conversion of energy from the source, through optical components, samples and spectral-selective components, to the detectors and beyond. The equations permit facile computations of calibration curves that relate the concentrations or numbers of molecules measured to the absolute signals from the system. This methodology provides the basis for both detailed understanding and improved design of microfluidic optical analytical systems. It saves prototype turn-around time, and is much simpler and faster to use than ray tracing programs. Over two thousand spreadsheet computations were performed during this study. We found that some design variations produce higher signal levels and, for constant noise levels, lower minimum detection limits. Improvements of more than a factor of 1,000 were realized. PMID:22163573
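
    The photon-to-electron bookkeeping described above can be illustrated with generic textbook radiometry. The sketch below (not the authors' spreadsheets) follows one fluorescence measurement from incident photons to detected photoelectrons; every numeric value is an assumed example.

        incident_photons = 1.0e12        # photons delivered to the sample (assumed)
        epsilon = 5.0e4                  # molar absorptivity, L/(mol*cm), assumed dye
        c = 1.0e-6                       # concentration, mol/L
        path_cm = 0.01                   # 100-micrometer channel depth
        quantum_yield = 0.9              # emitted photons per absorbed photon
        collection = 0.05                # fraction of emission collected by the optics
        filter_T = 0.8                   # transmission of the emission filter
        detector_QE = 0.6                # photoelectrons per detected photon

        absorbed = incident_photons * (1.0 - 10 ** (-epsilon * c * path_cm))   # Beer-Lambert
        signal_electrons = absorbed * quantum_yield * collection * filter_T * detector_QE
        print(f"{signal_electrons:.2e} photoelectrons")   # the calibration curve is this vs. c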

  18. Real-time flutter analysis

    NASA Technical Reports Server (NTRS)

    Walker, R.; Gupta, N.

    1984-01-01

    The important algorithm issues necessary to achieve a real-time flutter monitoring system are addressed, namely: guidelines for choosing appropriate model forms, reduction of the parameter-convergence transient, handling of multiple modes, the effect of overparameterization, and estimate-accuracy predictions, both online and for experiment design. An approach for efficiently computing continuous-time flutter-parameter Cramer-Rao estimate error bounds was developed. This enables a convincing comparison of theoretical and simulation results, as well as offline studies in preparation for a flight test. Theoretical predictions, simulation and flight test results from the NASA Drones for Aerodynamic and Structural Test (DAST) Program are compared.
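
    The Cramer-Rao machinery itself is compact enough to sketch. The example below computes lower bounds on the standard deviation of the frequency and damping estimates of a noisy decaying oscillation; it is a generic discrete-time illustration with assumed values, not the authors' continuous-time formulation.

        import numpy as np

        def model(theta, t):
            freq, zeta = theta                              # Hz, damping ratio
            w = 2 * np.pi * freq
            return np.exp(-zeta * w * t) * np.sin(w * np.sqrt(1 - zeta**2) * t)

        t = np.linspace(0.0, 2.0, 400)
        theta0 = np.array([5.0, 0.03])                      # assumed true frequency and damping
        sigma = 0.05                                        # measurement noise standard deviation

        # Numerical Jacobian of the model output with respect to the parameters.
        J = np.column_stack([
            (model(theta0 + dp, t) - model(theta0 - dp, t)) / (2 * dp[i])
            for i, dp in enumerate(np.diag([1e-4, 1e-6]))])

        fisher = J.T @ J / sigma**2                         # Fisher information matrix
        crlb_std = np.sqrt(np.diag(np.linalg.inv(fisher)))
        print(crlb_std)        # best achievable std. dev. of the [frequency, damping] estimates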

  19. Analytic Modeling of Pressurization and Cryogenic Propellant Conditions for Lunar Landing Vehicle

    NASA Technical Reports Server (NTRS)

    Corpening, Jeremy

    2010-01-01

    This slide presentation reviews the development, validation and application of the model to the Lunar Landing Vehicle. The model, named Computational Propellant and Pressurization Program -- One Dimensional (CPPPO), is used in this case to model the cryogenic propellant conditions of the Altair lunar lander. The validation of CPPPO was accomplished via comparison to an existing analytic model (i.e., ROCETS), a flight experiment and ground experiments. The model was used on the Lunar Landing Vehicle to perform a parametric analysis of pressurant conditions and to examine the results of unequal tank pressurization and draining for multiple tank designs.

  20. A TREETOPS simulation of the Hubble Space Telescope-High Gain Antenna interaction

    NASA Technical Reports Server (NTRS)

    Sharkey, John P.

    1987-01-01

    Virtually any project dealing with the control of a Large Space Structure (LSS) will involve some level of verification by digital computer simulation. While the Hubble Space Telescope might not normally be included in a discussion of LSS, it is presented to highlight a recently developed simulation and analysis program named TREETOPS. TREETOPS provides digital simulation, linearization, and control system interaction of flexible, multibody spacecraft which admit to a point-connected tree topology. The HST application of TREETOPS is intended to familiarize the LSS community with TREETOPS by presenting a user perspective of its key features.

  1. 40 CFR 68.170 - Prevention program/Program 2.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the process. (c) The name(s) of the chemical(s) covered. (d) The date of the most recent review or... (CONTINUED) CHEMICAL ACCIDENT PREVENTION PROVISIONS Risk Management Plan § 68.170 Prevention program/Program... of completion of the most recent hazard review or update. (1) The expected date of completion of any...

  2. 40 CFR 68.170 - Prevention program/Program 2.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the process. (c) The name(s) of the chemical(s) covered. (d) The date of the most recent review or... (CONTINUED) CHEMICAL ACCIDENT PREVENTION PROVISIONS Risk Management Plan § 68.170 Prevention program/Program... of completion of the most recent hazard review or update. (1) The expected date of completion of any...

  3. 40 CFR 68.170 - Prevention program/Program 2.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the process. (c) The name(s) of the chemical(s) covered. (d) The date of the most recent review or... (CONTINUED) CHEMICAL ACCIDENT PREVENTION PROVISIONS Risk Management Plan § 68.170 Prevention program/Program... of completion of the most recent hazard review or update. (1) The expected date of completion of any...

  4. Origins of NASA names

    NASA Technical Reports Server (NTRS)

    Wells, H. T.; Whiteley, S. H.; Karegeannes, C. E.

    1976-01-01

    Names are selected for NASA spaceflight projects and programs from various sources. Some have their foundations in mythology and astrology or legend and folklore. Some have historic connotations; others are based on a description of their mission, often resulting in an acronym. Included are names of launch vehicles, spacecraft, manned spaceflight programs, sounding rockets, and NASA field installations. This study is limited to names of approved projects through 1974; it does not include names of numerous projects which have been or are being studied or projects that were canceled or postponed before reaching actual flight.

  5. U. S. Navy’s Superconductivity Programs; Scientific Curiosity To Fleet Utility

    DTIC Science & Technology

    2010-10-01

    Naval Research Laboratory, Washington, DC 20375. ...classes of materials studied for superconductivity were ternary alloys, and organic materials. The dilution refrigerator largely replaced

  6. Computer-Aided Drug Discovery Approaches against the Tropical Infectious Diseases Malaria, Tuberculosis, Trypanosomiasis, and Leishmaniasis.

    PubMed

    Njogu, Peter M; Guantai, Eric M; Pavadai, Elumalai; Chibale, Kelly

    2016-01-08

    Despite the tremendous improvement in overall global health heralded by the adoption of the Millennium Declaration in the year 2000, tropical infections remain a major health problem in the developing world. Recent estimates indicate that the major tropical infectious diseases, namely, malaria, tuberculosis, trypanosomiasis, and leishmaniasis, account for more than 2.2 million deaths and a loss of approximately 85 million disability-adjusted life years annually. The crucial role of chemotherapy in curtailing the deleterious health and economic impacts of these infections has invigorated the search for new drugs against tropical infectious diseases. The research efforts have involved increased application of computational technologies in mainstream drug discovery programs at the hit identification, hit-to-lead, and lead optimization stages. This review highlights various computer-aided drug discovery approaches that have been utilized in efforts to identify novel antimalarial, antitubercular, antitrypanosomal, and antileishmanial agents. The focus is largely on developments over the past 5 years (2010-2014).

  7. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.
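
    The multi-zone pattern itself fits in a few lines: each zone is advanced independently, then neighboring zones swap boundary values once per time step. The toy Python sketch below shows that structure only; it is not the benchmark source, and in the MPI reference implementation each zone update would run on a different rank.

        import numpy as np

        zones = [np.full(10, float(i)) for i in range(3)]    # three 1-D "meshes"

        def update(z):
            z[1:-1] = 0.5 * (z[:-2] + z[2:])                 # stand-in for the per-zone solver (LU/BT/SP)

        def exchange(zones):
            for left, right in zip(zones[:-1], zones[1:]):
                left[-1], right[0] = right[1], left[-2]      # swap boundary values with the neighbor

        for step in range(5):
            for z in zones:                                  # coarse-grain parallelism lives in this loop
                update(z)
            exchange(zones)

        print([z.round(2) for z in zones])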

  8. Self-Administered Computer Therapy for Apraxia of Speech: Two-Period Randomized Control Trial With Crossover.

    PubMed

    Varley, Rosemary; Cowell, Patricia E; Dyson, Lucy; Inglis, Lesley; Roper, Abigail; Whiteside, Sandra P

    2016-03-01

    There is currently little evidence on effective interventions for poststroke apraxia of speech. We report outcomes of a trial of self-administered computer therapy for apraxia of speech. Effects of speech intervention on naming and repetition of treated and untreated words were compared with those of a visuospatial sham program. The study used a parallel-group, 2-period, crossover design, with participants receiving 2 interventions. Fifty participants with chronic and stable apraxia of speech were randomly allocated to 1 of 2 order conditions: speech-first condition versus sham-first condition. Period 1 design was equivalent to a randomized controlled trial. We report results for this period and profile the effect of the period 2 crossover. Period 1 results revealed significant improvement in naming and repetition only in the speech-first group. The sham-first group displayed improvement in speech production after speech intervention in period 2. Significant improvement of treated words was found in both naming and repetition, with little generalization to structurally similar and dissimilar untreated words. Speech gains were largely maintained after withdrawal of intervention. There was a significant relationship between treatment dose and response. However, average self-administered dose was modest for both groups. Future software design would benefit from incorporation of social and gaming components to boost motivation. Single-word production can be improved in chronic apraxia of speech with behavioral intervention. Self-administered computerized therapy is a promising method for delivering high-intensity speech/language rehabilitation. URL: http://orcid.org/0000-0002-1278-0601. Unique identifier: ISRCTN88245643. © 2016 American Heart Association, Inc.

  9. 24th IUPAP Conference on Computational Physics (2012): Introduction, acknowledgements, program

    NASA Astrophysics Data System (ADS)

    Baiotti, Luca; Takabe, Hideaki

    2013-08-01

    Welcome to CCP2012, held next to the K computer site in Kobe and in Japan's best season. The Conference on Computational Physics (CCP) is organized annually under the auspices of Commission 20 of the IUPAP (International Union of Pure and Applied Physics). This is the first time it has been held in Japan. I was asked to be the chairman about two and half years ago and when I accepted the request I decided to make the conference very unique and different from the traditional style of CCP. I was not satisfied when I attended big conferences where the parallel sessions are classified with the name of the research field. These days we have many opportunities to attend domestic and international conferences, where it is possible to listen to many talks on the same topics. If the topics are very new, then the conference is very useful for my research. However, I wanted to have a conference where I could listen to a variety of topics carried out with the same method. Computational science is very unique and it is easy to organize a new type of conference with the classification in the horizontal direction of the matrix made of the names of research fields and the name of numerical methods. You may be able to list the names of methods easily; finite difference, Monte Carlo, particle, molecular dynamics and so on. I was dissatisfied to find that most conferences focus solely on research fields and the method that brings to the scientific research is not highlighted as much. I wanted to listen to topics from fundamental physics to industrial science in a systematic way. In order to create such a conference, a small number of experts is not enough, so I asked for the help of more than 100 Japanese computer scientists, in a variety of fields. We called this group the Japan Advisory Board (JAB). I asked them to recommend a member of the International Advisory Board (IAB). Then, we could start making the list of plenary and invited speakers. This was almost the end of March last year. CCP2012 is organized also to celebrate the shared use of the K computer and we selected a venue next to it. Its use is of course open to the public and started on 28 September, one month earlier than had been scheduled. I hope you also enjoy the guided tour of the K computer. Throughout CCP2012, I hope new collaborations start among scientists in different fields. It would be also my great pleasure if such an inter-disciplinary conference encouraged young scientists (with their fresh energy and skills) to challenge new topics in different fields, particularly emerging ones like bio-computing, industrial applications, social sciences and so on. Finally, allow me to express my sincere thanks to all members of the local organizing committee (LOC). Twenty scientists from three universities and one institute voluntarily worked very hard to prepare CCP2012. Hideaki Takabe (Aki) The Chairman, CCP2012

  10. Building Complex Systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Mike

    2006-01-01

    The explosion of capabilities and new products within ICT (Information and Communication Technology) has fostered widespread, overly optimistic opinions regarding the industry, based on common but unjustified assumptions of quality and correctness of software. These assumptions are encouraged by software producers and vendors, who have not succeeded in finding a way to overcome the lack of an automated, mathematically sound way to develop correct systems from requirements. NASA faces this dilemma as it envisages advanced mission concepts in future exploration missions, which may well be the most ambitious computer-based systems ever developed. Such missions entail levels of complexity that beg for new methods for system development. NASA-led research in such areas as sensor networks, formal methods, autonomic computing, and requirements-based programming (to name but a few) will offer some innovative approaches to achieving correctness in complex system development.

  11. TOXCAST, A TOOL FOR CATEGORIZATION AND ...

    EPA Pesticide Factsheets

    Across several EPA Program Offices (e.g., OPPTS, OW, OAR), there is a clear need to develop strategies and methods to screen large numbers of chemicals for potential toxicity, and to use the resulting information to prioritize the use of testing resources towards those entities and endpoints that present the greatest likelihood of risk to human health and the environment. This need could be addressed using the experience of the pharmaceutical industry in the use of advanced modern molecular biology and computational chemistry tools for the development of new drugs, with appropriate adjustment to the needs and desires of environmental toxicology. A conceptual approach named ToxCast has been developed to address the needs of EPA Program Offices in the area of prioritization and screening. Modern computational chemistry and molecular biology tools bring enabling technologies forward that can provide information about the physical and biological properties of large numbers of chemicals. The essence of the proposal is to conduct a demonstration project based upon a rich toxicological database (e.g., registered pesticides, or the chemicals tested in the NTP bioassay program), select a fairly large number (50-100 or more chemicals) representative of a number of differing structural classes and phenotypic outcomes (e.g., carcinogens, reproductive toxicants, neurotoxicants), and evaluate them across a broad spectrum of information domains that modern technology has pro

  12. Update on PISCES

    NASA Technical Reports Server (NTRS)

    Pearson, Don; Hamm, Dustin; Kubena, Brian; Weaver, Jonathan K.

    2010-01-01

    An updated version of the Platform Independent Software Components for the Exploration of Space (PISCES) software library is available. A previous version was reported in Library for Developing Spacecraft-Mission-Planning Software (MSC-22983), NASA Tech Briefs, Vol. 25, No. 7 (July 2001), page 52. To recapitulate: This software provides for Web-based, collaborative development of computer programs for planning trajectories and trajectory-related aspects of spacecraft-mission design. The library was built using state-of-the-art object-oriented concepts and software-development methodologies. The components of PISCES include Java-language application programs arranged in a hierarchy of classes that facilitates the reuse of the components. As its full name suggests, the PISCES library affords platform-independence: The Java language makes it possible to use the classes and application programs with a Java virtual machine, which is available in most Web-browser programs. Another advantage is expandability: Object orientation facilitates expansion of the library through creation of a new class. Improvements in the library since the previous version include development of orbital-maneuver-planning and rendezvous-launch-window application programs, enhancement of capabilities for propagation of orbits, and development of a desktop user interface.

  13. 78 FR 16862 - Notice of Submission of Proposed Information Collection to OMB: Emergency Shelter Grants Program...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-19

    ... changed to match the new program name created through the HEARTH Act. To see the regulations for the new... match the new program name created through the HEARTH Act. To see the regulations for the new ESG...

  14. Preconditioner Circuit Analysis

    DTIC Science & Technology

    2011-09-01

    S) Matthew J. Nye 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) Naval Postgraduate School Monterey, CA 939435–000 8. PERFORMING... ORGANIZATION REPORT NUMBER 9. SPONSORING /MONITORING AGENCY NAME(S) AND ADDRESS(ES) N/A 10. SPONSORING/MONITORING AGENCY REPORT NUMBER 11...of the simulations and the theoretical computations. D. THESIS ORGANIZATION This thesis is organized into four chapters. The theoretical

  15. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie

    The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.

  16. Short Range Wireless Power Transfer (WPT) for UAV/UAS Battery Charging - Phase 1

    DTIC Science & Technology

    2014-12-01

    WORK UNIT NUMBER 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) AND ADDRESS(ES) Department of Electrical and Computer Engineering 8...Research Computer Engineering iii THIS PAGE INTENTIONALLY LEFT BLANK iv ABSTRACT The...battery charging, spacecraft recharging and station keeping, and direct propulsion of UAVs and hovering airships . The client antenna is usually of low

  17. Path Expressions

    DTIC Science & Technology

    1975-06-01

    ORGANIZATION NAME AND ADDRESS Carnegie-Mellon University Computer Science Dept Pittsburgh, Pa 15213 II. CONTROLLING OFFICE NAME AND ADDRESS...programmer. Example 1. A communication between two processes is initiated by declaring a buffer which can hold a message whose interpretation is known...words, the functions named in a path are automatically embedded in a critical region specific for that path.) The computation of the next state in

  18. A microcomputer program for energy assessment and aggregation using the triangular probability distribution

    USGS Publications Warehouse

    Crovelli, R.A.; Balay, R.H.

    1991-01-01

    A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model with an analytic aggregation methodology based on probability theory rather than Monte-Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 Kbytes of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132-column printer. A graphics adapter and color display are optional. © 1991.
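    The per-component statistics and the two polar aggregation rules described above are simple enough to sketch directly. The following Python fragment illustrates the underlying formulas only (TRIAGG itself is a Turbo Pascal package, and the province estimates below are invented); it follows the abstract's convention that fractile Fp is the value exceeded with probability p/100.

```python
import math

def tri_stats(a, m, b, probs=(1.00, 0.95, 0.75, 0.50, 0.25, 0.05, 0.00)):
    """Mean, std. dev. and fractiles of a triangular(min=a, mode=m, max=b) distribution.
    Fractile Fp is the value exceeded with probability p (the abstract's convention)."""
    mean = (a + m + b) / 3.0
    var = (a * a + m * m + b * b - a * m - a * b - m * b) / 18.0
    def quantile(q):                               # ordinary inverse CDF, P(X <= x) = q
        if q <= (m - a) / (b - a):
            return a + math.sqrt(q * (b - a) * (m - a))
        return b - math.sqrt((1.0 - q) * (b - a) * (b - m))
    fractiles = {p: quantile(1.0 - p) for p in probs}   # exceedance prob -> quantile
    return mean, math.sqrt(var), fractiles

# Two hypothetical provinces: (minimum, most likely, maximum) resource estimates
components = [(0.0, 2.0, 10.0), (1.0, 3.0, 6.0)]
stats = [tri_stats(*c) for c in components]

# Aggregated mean is exact; the dispersion depends on the dependence assumption.
agg_mean = sum(s[0] for s in stats)
sd_independent = math.sqrt(sum(s[1] ** 2 for s in stats))   # complete independence
sd_correlated = sum(s[1] for s in stats)                    # perfect positive correlation
print(agg_mean, sd_independent, sd_correlated)
```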

  19. Design of a real-time wind turbine simulator using a custom parallel architecture

    NASA Technical Reports Server (NTRS)

    Hoffman, John A.; Gluck, R.; Sridhar, S.

    1995-01-01

    The design of a new parallel-processing digital simulator is described. The new simulator has been developed specifically for analysis of wind energy systems in real time. The new processor has been named the Wind Energy System Time-domain simulator, version 3 (WEST-3). Like previous WEST versions, WEST-3 performs many computations in parallel. The modules in WEST-3 are pure digital processors, however. These digital processors can be programmed individually and operated in concert to achieve real-time simulation of wind turbine systems. Because of this programmability, WEST-3 is much more flexible and general than its two predecessors. The design features of WEST-3 are described to show how the system produces high-speed solutions of nonlinear time-domain equations. WEST-3 has two very fast Computational Units (CU's) that use minicomputer technology plus special architectural features that make them many times faster than a microcomputer. These CU's are needed to perform the complex computations associated with the wind turbine rotor system in real time. The parallel architecture of the CU causes several tasks to be done in each cycle, including an I/O operation and the combination of a multiply, add, and store. The WEST-3 simulator can be expanded at any time for additional computational power. This is possible because the CU's are interfaced to each other and to other portions of the simulation using special serial buses. These buses can be 'patched' together in essentially any configuration (in a manner very similar to the programming methods used in analog computation) to balance the input/output requirements. CU's can be added in any number to share a given computational load. This flexible bus feature is very different from many other parallel processors, which usually have a throughput limit because of rigid bus architecture.

  20. Assessment of the Federal Voting Assistance Program Office Implementation of the Military and Overseas Voter Empowerment Act

    DTIC Science & Technology

    2012-08-31

    7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) Department of Defense Inspector General,4800 Mark Center Drive,Alexandria,VA,22350-1500 8... PERFORMING ORGANIZATION REPORT NUMBER 9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES) 10. SPONSOR/MONITOR’S ACRONYM(S) 11. SPONSOR/MONITOR’S...Empowerment (MOVE) Act What We Did To determine if voting assistance programs carried out under the Uniformed and Overseas Absentee Voting Act

  1. Non-invasive brain stimulation and computational models in post-stroke aphasic patients: single session of transcranial magnetic stimulation and transcranial direct current stimulation. A randomized clinical trial.

    PubMed

    Santos, Michele Devido Dos; Cavenaghi, Vitor Breseghello; Mac-Kay, Ana Paula Machado Goyano; Serafim, Vitor; Venturi, Alexandre; Truong, Dennis Quangvinh; Huang, Yu; Boggio, Paulo Sérgio; Fregni, Felipe; Simis, Marcel; Bikson, Marom; Gagliardi, Rubens José

    2017-01-01

    Patients undergoing the same neuromodulation protocol may present different responses. Computational models may help in understanding such differences. The aims of this study were, firstly, to compare the performance of aphasic patients in naming tasks before and after one session of transcranial direct current stimulation (tDCS), transcranial magnetic stimulation (TMS) and sham stimulation, and to analyze the results between these neuromodulation techniques; and secondly, through a computational model of the cortex and surrounding tissues, to assess current flow distribution and responses among patients who received tDCS and presented different levels of results in naming tasks. Prospective, descriptive, qualitative and quantitative, double-blind, randomized and placebo-controlled study conducted at Faculdade de Ciências Médicas da Santa Casa de São Paulo. Patients with aphasia received one session of tDCS, TMS or sham stimulation. The time taken to name pictures and the response time were evaluated before and after neuromodulation. Selected patients from the first intervention underwent a computational model stimulation procedure that simulated tDCS. The results did not indicate any statistically significant differences from before to after the stimulation. The computational models showed different current flow distributions. The present study did not show any statistically significant difference between tDCS, TMS and sham stimulation regarding naming tasks. The patients' responses to the computational model showed different patterns of current distribution.

  2. Genetic Network Programming with Reconstructed Individuals

    NASA Astrophysics Data System (ADS)

    Ye, Fengming; Mabu, Shingo; Wang, Lutao; Eto, Shinji; Hirasawa, Kotaro

    A lot of research on evolutionary computation has been done and some significant classical methods such as Genetic Algorithms (GA), Genetic Programming (GP), Evolutionary Programming (EP), and Evolution Strategies (ES) have been studied. Recently, a new approach named Genetic Network Programming (GNP) has been proposed. GNP can evolve itself and find the optimal solution. It is based on the idea of Genetic Algorithms and uses the data structure of directed graphs. Many papers have demonstrated that GNP can deal with complex problems in dynamic environments very efficiently and effectively. As a result, GNP has recently been getting more and more attention and is used in many different areas such as data mining, extracting trading rules of stock markets, elevator supervised control systems, etc., and has obtained some outstanding results. Focusing on GNP's distinguished expression ability of the graph structure, this paper proposes a method named Genetic Network Programming with Reconstructed Individuals (GNP-RI). The aim of GNP-RI is to balance the exploitation and exploration of GNP, that is, to strengthen the exploitation ability by using the exploited information extensively during the evolution process of GNP and finally obtain better performance than that of GNP. In the proposed method, the worse individuals are reconstructed and enhanced by the elite information before undergoing genetic operations (mutation and crossover). The enhancement of worse individuals mimics the maturing phenomenon in nature, where bad individuals can become smarter after receiving a good education. In this paper, GNP-RI is applied to the tile-world problem, which is an excellent benchmark for evaluating the proposed architecture. The performance of GNP-RI is compared with that of the conventional GNP. The simulation results show some advantages of GNP-RI, demonstrating its superiority over the conventional GNP.
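    The reconstruction step can be illustrated with a deliberately simplified, GA-style sketch. Real GNP-RI operates on directed-graph individuals; here the flat gene list, the toy fitness function, and the copy rate are invented stand-ins for the "education by the elite" idea described above.

```python
import random

def reconstruct(individual, elite, copy_rate=0.5):
    """Overwrite part of a poor individual with the corresponding elite genes
    before mutation/crossover -- the 'education' step of GNP-RI (sketch only)."""
    return [e if random.random() < copy_rate else g
            for g, e in zip(individual, elite)]

def mutate(individual, rate=0.05):
    return [random.randint(0, 3) if random.random() < rate else g for g in individual]

def fitness(ind):
    # Toy fitness: count genes matching an arbitrary target value.
    return sum(1 for g in ind if g == 2)

random.seed(0)
pop = [[random.randint(0, 3) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[0]
    # The worse half is reconstructed from the elite, then mutated.
    pop = pop[:15] + [mutate(reconstruct(ind, elite)) for ind in pop[15:]]
print("best fitness:", fitness(max(pop, key=fitness)))
```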

  3. Real science at the petascale.

    PubMed

    Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V

    2009-06-28

    We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32 768 cores for certain of our codes in the so-called 'capability computing' category as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65 636 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.

  4. An approach for heterogeneous and loosely coupled geospatial data distributed computing

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui

    2010-07-01

    Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a construct named the Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing is presented, together with a method for equivalent transformation from a global geospatial query to distributed local queries at the SQL (Structured Query Language) level to solve the coordination problem among heterogeneous resources. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, thus achieving decentralized and consistent synchronization among global geospatial resource directories, and carrying out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
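    The equivalent-transformation idea, rewriting one global query into local SQL queries whose partial results are merged, can be sketched with in-memory SQLite databases standing in for autonomous peers. The table, column, and value names below are invented, and the real mechanism's directory maintenance, synchronization, and transaction handling are omitted.

```python
import sqlite3

# Two autonomous "peers", each holding part of a global 'roads' layer.
peers = []
for rows in ([("highway", 12.5), ("street", 3.1)],
             [("highway", 7.0), ("street", 9.9), ("path", 1.2)]):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE roads (kind TEXT, length_km REAL)")
    con.executemany("INSERT INTO roads VALUES (?, ?)", rows)
    peers.append(con)

# Global query: total road length per kind across all peers.
# Equivalent distributed program: run the same aggregate locally, merge partial sums.
local_sql = "SELECT kind, SUM(length_km) FROM roads GROUP BY kind"
totals = {}
for con in peers:
    for kind, partial in con.execute(local_sql):
        totals[kind] = totals.get(kind, 0.0) + partial
print(totals)   # {'highway': 19.5, 'street': 13.0, 'path': 1.2}
```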

  5. Design of a robotic vehicle with self-contained intelligent wheels

    NASA Astrophysics Data System (ADS)

    Poulson, Eric A.; Jacob, John S.; Gunderson, Robert W.; Abbott, Ben A.

    1998-08-01

    The Center for Intelligent Systems has developed a small robotic vehicle named the Advanced Rover Chassis 3 (ARC 3) with six identical intelligent wheel units attached to a payload via a passive linkage suspension system. All wheels are steerable, so the ARC 3 can move in any direction while rotating at any rate allowed by the terrain and motors. Each intelligent wheel unit contains a drive motor, steering motor, batteries, and computer. All wheel units are identical, so manufacturing, programming, and spare replacement are greatly simplified. The intelligent wheel concept would allow the number and placement of wheels on the vehicle to be changed with no changes to the control system, except to list the position of all the wheels relative to the vehicle center. The task of controlling the ARC 3 is distributed between one master computer and the wheel computers. Tasks such as controlling the steering motors and calculating the speed of each wheel relative to the vehicle speed in a corner are dependent on the location of a wheel relative to the vehicle center and are processed by the wheel computers. Conflicts between the wheels are eliminated by computing the vehicle velocity control in the master computer. Various approaches to this distributed control problem, and various low-level control methods, have been explored.
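    The per-wheel speed and steering computation mentioned above is ordinary rigid-body kinematics. A minimal sketch follows; the wheel layout and the commanded motion are hypothetical, and the real ARC 3 wheel computers certainly handle details (motor control, terrain limits) omitted here.

```python
import math

def wheel_commands(vx, vy, omega, wheel_positions):
    """Steering angle (rad) and ground speed (m/s) for each wheel of a vehicle
    translating at (vx, vy) and rotating at omega about its center."""
    commands = []
    for x, y in wheel_positions:         # wheel position relative to the vehicle center
        wx = vx - omega * y              # rigid-body velocity at the wheel contact point
        wy = vy + omega * x
        commands.append((math.atan2(wy, wx), math.hypot(wx, wy)))
    return commands

# Six wheels in two rows of three (hypothetical layout, metres from vehicle center)
wheels = [(-0.6, 0.4), (0.0, 0.4), (0.6, 0.4),
          (-0.6, -0.4), (0.0, -0.4), (0.6, -0.4)]
for angle, speed in wheel_commands(0.5, 0.0, 0.3, wheels):
    print(f"steer {math.degrees(angle):6.1f} deg, speed {speed:.2f} m/s")
```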

  6. High Performance Computing and Visualization Infrastructure for Simultaneous Parallel Computing and Parallel Visualization Research

    DTIC Science & Technology

    2016-11-09

    Total Number: Sub Contractors (DD882) Names of Personnel receiving masters degrees Names of personnel receiving PHDs Names of other research staff...Broadcom 5720 QP 1Gb Network Daughter Card (2) Intel Xeon E5-2680 v3 2.5GHz, 30M Cache, 9.60GT/s QPI, Turbo, HT, 12C/24T (120W

  7. Defense Advanced Research Projects Agency: Key Factors Drive Transition of Technologies, but Better Training and Data Dissemination Can Increase Success

    DTIC Science & Technology

    2015-11-01

    more detail. Table 1: Overview of DARPA Programs Selected for GAO Case Study Analyses Program name Program description Advanced Wireless Networks ...Selected DARPA Programs Program name According to DARPA portfolio-level database According to GAO analysis Advanced Wireless Networks for the Soldier...with potential transition partners Achievement of clearly defined technical goals Successful transition Advanced Wireless Networks for Soldier

  8. Running High-Throughput Jobs on Peregrine | High-Performance Computing |

    Science.gov Websites

    unique name (using "name=") and use the task name to create a unique output file name. For runs on and how many tasks to give to each worker at a time using the NITRO_COORD_OPTIONS environment variable. Finally, you start Nitro by executing launch_nitro.sh. Sample Nitro job script To run a job using the

  9. Gigascale Silicon Research Center for Design and Test

    DTIC Science & Technology

    2000-01-07

    students Kanna Shimizu and Chris Wilson participated in a meeting at Intel hosted by Mani Azimi, with Moenes, Ching-Tsun, Fred Rastgar, and Mani...Prof. David Dill Researchers: Kanna Shimizu Bus specifications are currently informal, resulting in ambiguities and inconsistencies. We’ve been...Expected Graduation: 6/1/2000 Advisor: Dill Last Name: Shimizu First Name: Kanna Work Address: Department of Computer Science, Gates Computer Science

  10. Computational Electromagnetics Application to Small Geometric Anomalies and Associated Ucertainty Evaluation

    DTIC Science & Technology

    2010-02-28

    implemented a fast method to enable the statistical characterization of electromagnetic interference and compatibility (EMI/EMC) phenomena on electrically...higher accuracy is needed, e.g., to compute higher moment statistics . To address this problem, we have developed adaptive stochastic collocation methods ...SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES) AF OFFICE OF SCIENTIFIC RESEARCH 875 N. RANDOLPH ST. ROOM 3112 ARLINGTON VA 22203 UA

  11. Discriminative Learning with Markov Logic Networks

    DTIC Science & Technology

    2009-10-01

    Discriminative Learning with Markov Logic Networks Tuyen N. Huynh Department of Computer Sciences University of Texas at Austin Austin, TX 78712...emerging area of research that addresses the problem of learning from noisy structured/relational data. Markov logic networks (MLNs), sets of weighted...TASK NUMBER 5f. WORK UNIT NUMBER 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) University of Texas at Austin,Department of Computer

  12. CMGTooL user's manual

    USGS Publications Warehouse

    Xu, Jingping; Lightsom, Fran; Noble, Marlene A.; Denham, Charles

    2002-01-01

    During the past several years, the sediment transport group in the Coastal and Marine Geology Program (CMGP) of the U.S. Geological Survey has made major revisions to its methodology of processing, analyzing, and maintaining the variety of oceanographic time-series data. First, CMGP completed the transition of its oceanographic time-series database to a self-documenting NetCDF (Rew et al., 1997) data format. Second, CMGP's oceanographic data variety and complexity have been greatly expanded from traditional 2-dimensional, single-point time-series measurements (e.g., electro-magnetic current meters, transmissometers) to more advanced 3-dimensional and profiling time-series measurements due to many new acquisitions of modern instruments such as the Acoustic Doppler Current Profiler (RDI, 1996), Acoustic Doppler Velocimeter, Pulse-Coherent Acoustic Doppler Profiler (SonTek, 2001), and Acoustic Backscatter Sensor (Aquatec). In order to accommodate the NetCDF format of data from the new instruments, a software package for processing, analyzing, and visualizing time-series oceanographic data was developed. It is named CMGTooL. The CMGTooL package contains two basic components: a user-friendly GUI for NetCDF file analysis, processing and manipulation; and a data analyzing program library. Most of the routines in the library are stand-alone programs suitable for batch processing. CMGTooL is written in the MATLAB computing language (The MathWorks, 1997); therefore users must have MATLAB installed on their computer in order to use this software package. In addition, MATLAB's Signal Processing Toolbox is also required by some of CMGTooL's routines. Like most MATLAB programs, all CMGTooL codes are compatible with different computing platforms including PC, MAC, and UNIX machines (Note: CMGTooL has been tested on different platforms that run MATLAB 5.2 (Release 10) or lower versions. Some of the commands related to MAC may not be compatible with later releases of MATLAB). The GUI and some of the library routines call low-level NetCDF file I/O, variable and attribute functions. These NetCDF-exclusive functions are supported by a MATLAB toolbox named NetCDF, created by Dr. Charles Denham. This toolbox has to be installed in order to use the CMGTooL GUI. The CMGTooL GUI calls several routines that were initially developed by others. The authors would like to acknowledge the following scientists for their ideas and codes: Dr. Rich Signell (USGS), Dr. Chris Sherwood (USGS), and Dr. Bob Beardsley (WHOI). Many terms that carry special meanings in either MATLAB or the NetCDF Toolbox are used in this manual. Users are encouraged to read the documents of MATLAB and NetCDF for references.
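    The package itself is MATLAB-based, but the self-documenting NetCDF files it works with can be read from any language. A minimal Python sketch using the third-party netCDF4 package is shown below; the file name and variable names are hypothetical.

```python
# Minimal sketch of reading a self-documenting time-series NetCDF file.
# Requires the third-party netCDF4 package; file and variable names are hypothetical.
from netCDF4 import Dataset

with Dataset("mooring_adcp.nc") as nc:
    print(nc.title if "title" in nc.ncattrs() else "(no title attribute)")
    time = nc.variables["time"][:]       # time coordinate
    u = nc.variables["u_velocity"][:]    # e.g. eastward current component
    print(len(time), "records, mean u =", u.mean())
```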

  13. Film labels: a new look.

    PubMed

    Hunter, T B

    1994-02-01

    Every diagnostic image should be properly labeled. To improve the labeling of radiographs in the Department of Radiology at the University Medical Center, Tucson, Arizona, a special computer program was written to control the printing of the department's film flashcards. This program captures patient data from the hospital's radiology information system and uses it to create a film flashcard that contains the patient's name, hospital number, date of birth, age, the time the patient checked into the radiology department, and the date of the examination. The resulting film labels are legible and aesthetically pleasing. Having the patient's age and date of birth on the labels is a useful quality assurance measure to make certain the proper study has been performed on the correct patient. All diagnostic imaging departments should institute measures to assure their film labeling is as legible and informative as possible.

  14. Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual, appendix 2

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    The FORTRAN programs RANDOM3 and RANDOM4 are documented. They are based on fatigue strength reduction, using a probabilistic constitutive model. They predict the random lifetime of an engine component to reach a given fatigue strength. Included in this user manual are details regarding the theoretical backgrounds of RANDOM3 and RANDOM4. Appendix A gives information on the physical quantities, their symbols, FORTRAN names, and both SI and U.S. Customary units. Appendices B and C include photocopies of the actual computer printout corresponding to the sample problems. Appendices D and E detail the IMSL, Version 10(1), subroutines and functions called by RANDOM3 and RANDOM4 and the SAS/GRAPH(2) programs that can be used to plot both the probability density functions (p.d.f.) and the cumulative distribution functions (c.d.f.).

  15. Improving Control of Two Motor Controllers

    NASA Technical Reports Server (NTRS)

    Toland, Ronald W.

    2004-01-01

    A computer program controls motors that drive translation stages in a metrology system that consists of a pair of two-axis cathetometers. This program is specific to Compumotor Gemini (or equivalent) motors and the Compumotor 6K-series (or equivalent) motor controller. Relative to the software supplied with the controller, this program affords more capabilities and is easier to use. Written as a Virtual Instrument in the LabVIEW software system, the program presents an imitation control panel that the user can manipulate by use of a keyboard and mouse. There are three modes of operation: command, movement, and joystick. In command mode, single commands are sent to the controller for troubleshooting. In movement mode, distance, speed, and/or acceleration commands are sent to the controller. Position readouts from the motors and from position encoders on the translation stages are displayed in marked fields. At any time, the position readouts can be recorded in a file named by the user. In joystick mode, the program yields control of the motors to a joystick. The program sends commands to, and receives data from, the controller via a serial cable connection, using the serial-communication portion of the software supplied with the controller.

  16. Paternity analysis in Excel.

    PubMed

    Rocheta, Margarida; Dionísio, F Miguel; Fonseca, Luís; Pires, Ana M

    2007-12-01

    Paternity analysis using microsatellite information is a well-studied subject. These markers are ideal for parentage studies and fingerprinting, due to their high discrimination power. This type of data is used to assign paternity, to compute the average selfing and outcrossing rates and to estimate the biparental inbreeding. There are several public domain programs that compute all this information from data. Most of the time, it is necessary to export data to some sort of format, feed it to the program and import the output to an Excel book for further processing. In this article we briefly describe a program referred to from now on as Paternity Analysis in Excel (PAE), developed at IST and IBET (see the acknowledgments), that computes paternity candidates from data, and other information, from within Excel. In practice this means that the end user provides the data in an Excel sheet and, by pressing an appropriate button, obtains the results in another Excel sheet. For convenience PAE is divided into two modules. The first one is a filtering module that selects data from the sequencer and reorganizes it in a format appropriate to process paternity analysis, assuming certain conventions for the names of parents and offspring from the sequencer. The second module carries out the paternity analysis assuming that one parent is known. Both modules are written in Excel-VBA and can be obtained at the address (www.math.ist.utl.pt/~fmd/pa/pa.zip). They are free for non-commercial purposes and have been tested with different data and against different software (Cervus, FaMoz, and MLTR).
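    A toy version of the exclusion logic behind paternity assignment with one known parent is sketched below. PAE itself is Excel-VBA and, like Cervus, uses likelihood-based assignment; the genotypes and the simple locus-by-locus exclusion count here are illustrative only.

```python
# Toy exclusion-based paternity check with one known parent (illustrative only).
def possible_paternal(offspring, mother):
    """Alleles at one locus that must have come from the father."""
    o1, o2 = offspring
    poss = set()
    if o2 in mother:          # o2 can be maternal, so o1 may be paternal
        poss.add(o1)
    if o1 in mother:
        poss.add(o2)
    return poss               # empty set => mother/offspring mismatch at this locus

def mismatches(offspring, mother, candidate):
    """Number of loci at which the candidate father is excluded."""
    count = 0
    for locus in offspring:
        poss = possible_paternal(offspring[locus], mother[locus])
        if poss and not (poss & set(candidate[locus])):
            count += 1
    return count

# Hypothetical genotypes: locus -> (allele, allele)
offspring = {"L1": (101, 105), "L2": (88, 92)}
mother    = {"L1": (101, 103), "L2": (88, 88)}
dads = {"dad_A": {"L1": (105, 107), "L2": (92, 94)},
        "dad_B": {"L1": (99, 103),  "L2": (90, 94)}}
for name, geno in dads.items():
    print(name, "excluded at", mismatches(offspring, mother, geno), "loci")
```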

  17. Deadlock Detection in Computer Networks

    DTIC Science & Technology

    1977-09-01

    it entity class name (ndm-procownerref) = -:"node tab5le" I procnode_name z res-rnode-name call then return; nc ll c eck -for-deadlock(p_obplref...demo12

  18. CADNA_C: A version of CADNA for use with C or C++ programs

    NASA Astrophysics Data System (ADS)

    Lamotte, Jean-Luc; Chesneaux, Jean-Marie; Jézéquel, Fabienne

    2010-11-01

    The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. The CADNA_C version enables this estimation in C or C++ programs, while the previous version had been developed for Fortran programs. The CADNA_C version has the same features as the previous one: with CADNA the numerical quality of any simulation program can be controlled. Furthermore by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. New version program summaryProgram title: CADNA_C Catalogue identifier: AEGQ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 60 075 No. of bytes in distributed program, including test data, etc.: 710 781 Distribution format: tar.gz Programming language: C++ Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system: LINUX, UNIX Classification: 6.5 Catalogue identifier of previous version: AEAT_v1_0 Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 933 Does the new version supersede the previous version?: No Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: The previous version (AEAT_v1_0) enables the estimation of round-off error propagation in Fortran programs [2]. The new version has been developed to enable this estimation in C or C++ programs. Summary of revisions: The CADNA_C source code consists of one assembly language file (cadna_rounding.s) and twenty-three C++ language files (including three header files). cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the C++ compiler used. This assembly file contains routines which are frequently called in the CADNA_C C++ files to change the rounding mode. The C++ language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA_C specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. 
As a remark, on 64-bit processors, the mathematical library associated with the GNU C++ compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore, if CADNA_C is used on a 64-bit processor with the GNU C++ compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the argument of a mathematical function is never lost. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf and a reference guide named ref_cadna.pdf. The user guide shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The reference guide briefly describes each function of the library. The source code (which consists of C++ and assembly files) is located in the src directory. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
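    The idea behind Discrete Stochastic Arithmetic, running the computation several times under randomly perturbed rounding and keeping only the digits on which the runs agree, can be mimicked in a few lines. The following is a toy emulation only: real CADNA overloads the arithmetic operators and switches the hardware rounding mode, whereas here each intermediate result is simply perturbed by a random error of roundoff size, and the digit estimate omits the Student factor used in the full method.

```python
import math, random

EPS = 2.0 ** -53          # double-precision unit roundoff

def r(x):
    """Perturb a value by a random error of the order of one rounding error."""
    return x * (1.0 + random.choice((-1.0, 1.0)) * random.random() * EPS)

def unstable(n=5000):
    """A sum whose terms are differences of nearly equal numbers (cancellation)."""
    s = 0.0
    for k in range(1, n):
        s = r(s + r(1.0 / k) - r(1.0 / (k + 0.0001)))
    return s

samples = [unstable() for _ in range(3)]
mean = sum(samples) / len(samples)
sigma = math.sqrt(sum((x - mean) ** 2 for x in samples) / (len(samples) - 1))
# Rough estimate of the number of decimal digits common to the perturbed runs.
digits = math.log10(abs(mean) / sigma) if sigma > 0 else 15.3
print(samples)
print("approx.", round(digits, 1), "significant digits")
```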

  19. Incremental Parsing with Reference Interaction

    DTIC Science & Technology

    2004-07-01

    ELEMENT NUMBER 6. AUTHOR(S) 5d. PROJECT NUMBER 5e. TASK NUMBER 5f. WORK UNIT NUMBER 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) Department of...Computer Science,University of Rochester,Rochester,NY,14627 8. PERFORMING ORGANIZATION REPORT NUMBER 9. SPONSORING/MONITORING AGENCY NAME(S) AND...Evidence from eye movements in spoken language comprehen- sion. Conference Abstract. Architechtures and Mechanisms for Language Processing. R. M

  20. Fast Surface Reconstruction and Segmentation with Terrestrial LiDAR Range Data

    DTIC Science & Technology

    2009-05-18

    UNIT NUMBER 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) University of California at Berkeley,Department of Electrical Engineering and Computer...Sciences,Berkeley,CA,94720 8. PERFORMING ORGANIZATION REPORT NUMBER 9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES) 10. SPONSOR/MONITOR’S...ACRONYM(S) 11. SPONSOR/MONITOR’S REPORT NUMBER(S) 12. DISTRIBUTION /AVAILABILITY STATEMENT Approved for public release; distribution unlimited 13

  1. Catalog of US GeoData

    USGS Publications Warehouse

    ,

    1990-01-01

    The development of geographic information systems (GIS) is a rapidly growing industry that supports natural resource studies, land management, environmental analysis, and urban and transportation planning. The increasing use of computers for storing and analyzing earth science information has greatly expanded the demand for digital cartographic and geographic data. Digital cartography involves the collection, storage, processing, analysis, and display of map data with the aid of computers. The U.S. Geological Survey (USGS), the Nation's largest earth science research agency, through its National Mapping Program, has expanded digital cartography operations to include the collection of elevation, planimetric, land use and land cover, and geographic names information in digital form. This digital information is available on 9-track magnetic tapes and, in the case of 1:2,000,000-scale planimetric digital line graph data, in Compact Disc Read Only Memory (CD-ROM) format. Digital information can be used with all types of geographic and land information systems.

  2. 3D Object Recognition: Symmetry and Virtual Views

    DTIC Science & Technology

    1992-12-01

    NAME(S) AND ADDRESS(ES) 8. PERFORMING ORGANIZATIONI Artificial Intelligence Laboratory REPORT NUMBER 545 Technology Square AIM 1409 Cambridge... ARTIFICIAL INTELLIGENCE LABORATORY and CENTER FOR BIOLOGICAL AND COMPUTATIONAL LEARNING A.I. Memo No. 1409 December 1992 C.B.C.L. Paper No. 76 3D Object...research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences, and at the Artificial

  3. Network Support for Group Coordination

    DTIC Science & Technology

    2000-01-01

    telecommuting and ubiquitous computing [40], the advent of networked multimedia, and less expensive technology have shifted telecollaboration into...of Computer Engineering,Santa Cruz,CA,95064 8. PERFORMING ORGANIZATION REPORT NUMBER 9. SPONSORING/ MONITORING AGENCY NAME(S) AND ADDRESS(ES) 10...participants A and B, the payoff structure for choosing two actions i and j is P = Aij + Bij . If P = 0, then the interaction is called a zero -sum game, and

  4. Design Considerations for Computer-Based Interactive Map Display Systems

    DTIC Science & Technology

    1979-02-01

    Five Dimensions for Map Display System Options ... Summary of...most advanced and exotic technologies: space, optical, computer, and graphic production; the focusing of vast organizational efforts; and the results...Information retrieval: "Where are all the radar sites in sector 12?," "What's the name of this hill?," "Where's the hill named B243?" Information storage

  5. Computational Study of Inlet Active Flow Control

    DTIC Science & Technology

    2007-05-01

    AFRL-VA-WP-TR-2007-3077 COMPUTATIONAL STUDY OF INLET ACTIVE FLOW CONTROL Delivery Order 0005 Dr. Sonya T. Smith Howard University Department...NUMBER A0A2 5e. TASK NUMBER 6. AUTHOR(S) Dr. Sonya T. Smith ( Howard University ) Dr. Angela Scribben and Matthew Goettke (AFRL/VAAI) 5f...WORK UNIT NUMBER 0B 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) 8. PERFORMING ORGANIZATION Howard University Department of Mechanical

  6. Mantle Convection on Modern Supercomputers

    NASA Astrophysics Data System (ADS)

    Weismüller, J.; Gmeiner, B.; Huber, M.; John, L.; Mohr, M.; Rüde, U.; Wohlmuth, B.; Bunge, H. P.

    2015-12-01

    Mantle convection is the cause for plate tectonics, the formation of mountains and oceans, and the main driving mechanism behind earthquakes. The convection process is modeled by a system of partial differential equations describing the conservation of mass, momentum and energy. Characteristic to mantle flow is the vast disparity of length scales from global to microscopic, turning mantle convection simulations into a challenging application for high-performance computing. As system size and technical complexity of the simulations continue to increase, design and implementation of simulation models for next generation large-scale architectures is handled successfully only in an interdisciplinary context. A new priority program - named SPPEXA - by the German Research Foundation (DFG) addresses this issue, and brings together computer scientists, mathematicians and application scientists around grand challenges in HPC. Here we report from the TERRA-NEO project, which is part of the high visibility SPPEXA program, and a joint effort of four research groups. TERRA-NEO develops algorithms for future HPC infrastructures, focusing on high computational efficiency and resilience in next generation mantle convection models. We present software that can resolve the Earth's mantle with up to 10^12 grid points and scales efficiently to massively parallel hardware with more than 50,000 processors. We use our simulations to explore the dynamic regime of mantle convection and assess the impact of small scale processes on global mantle flow.

  7. Object-oriented design and programming in medical decision support.

    PubMed

    Heathfield, H; Armstrong, J; Kirkham, N

    1991-12-01

    The concept of object-oriented design and programming has recently received a great deal of attention from the software engineering community. This paper highlights the realisable benefits of using the object-oriented approach in the design and development of clinical decision support systems. These systems seek to build a computational model of some problem domain and therefore tend to be exploratory in nature. Conventional procedural design techniques do not support either the process of model building or rapid prototyping. The central concepts of the object-oriented paradigm are introduced, namely encapsulation, inheritance and polymorphism, and their use illustrated in a case study, taken from the domain of breast histopathology. In particular, the dual roles of inheritance in object-oriented programming are examined, i.e., inheritance as a conceptual modelling tool and inheritance as a code reuse mechanism. It is argued that the use of the former is not entirely intuitive and may be difficult to incorporate into the design process. However, inheritance as a means of optimising code reuse offers substantial technical benefits.
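    The dual roles of inheritance discussed above can be made concrete with a small sketch (in Python rather than the languages of the period; the histopathology classes are invented for illustration): the base class acts as a conceptual model of a "finding", while its report method is reused unchanged by every subclass, and polymorphism lets each subclass supply its own assessment.

```python
class Finding:
    """Conceptual-modelling role: a common abstraction for histopathology findings."""
    def __init__(self, site):
        self.site = site

    def report(self):
        # Code-reuse role: this method is inherited unchanged by all subclasses.
        return f"{self.__class__.__name__} at {self.site}: {self.assessment()}"

    def assessment(self):
        raise NotImplementedError

class BenignLesion(Finding):
    def assessment(self):
        return "no further action"

class Carcinoma(Finding):
    def __init__(self, site, grade):
        super().__init__(site)
        self.grade = grade

    def assessment(self):
        # Polymorphism: the same report() call produces different behaviour here.
        return f"grade {self.grade}, refer for review"

for finding in (BenignLesion("left breast"), Carcinoma("right breast", 2)):
    print(finding.report())
```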

  8. Using EnergyPlus for California Title-24 compliancecalculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Joe; Bourassa, Norman; Buhl, Fred

    2006-08-26

    For the past decade, the non-residential portion of California's Title-24 building energy standard has relied on DOE-2.1E as the reference computer simulation program for development as well as compliance. However, starting in 2004, the California Energy Commission has been evaluating the possible use of EnergyPlus as the reference program in future revisions of Title-24. As part of this evaluation, the authors converted the Alternate Compliance Method (ACM) certification test suite of 150 DOE-2 files to EnergyPlus, and made parallel DOE-2 and EnergyPlus runs for this extensive set of test cases. A customized version of DOE-2.1E named doe2ep was developed to automate the conversion process. This paper describes this conversion process, including the difficulties in establishing an apples-to-apples comparison between the two programs, and summarizes how the DOE-2 and EnergyPlus results compare for the ACM test cases.

  9. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extrema points of the metamodels and the minimum points of a density function. More accurate metamodels can then be constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
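    A minimal numerical sketch of the loop (fit a radial-basis-function metamodel, add the point where the current metamodel is extremal, refit) is given below. It uses plain NumPy, a one-dimensional test function, and no density-function term, so it is only the skeleton of the method described above, not the authors' algorithm.

```python
import numpy as np

def fit_rbf(x, y, eps=2.0):
    """Return a Gaussian-RBF interpolant through the samples (x, y)."""
    A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2) + 1e-10 * np.eye(len(x))
    w = np.linalg.solve(A, y)
    return lambda t: np.exp(-(eps * (t[:, None] - x[None, :])) ** 2) @ w

f = lambda t: np.sin(3 * t) + 0.5 * t          # stand-in for an expensive simulation
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # initial samples
y = f(x)
grid = np.linspace(0.0, 2.0, 401)

for _ in range(6):                             # sequential infill
    model = fit_rbf(x, y)
    pred = model(grid)
    candidates = [grid[np.argmin(pred)], grid[np.argmax(pred)]]   # metamodel extrema
    new = max(candidates, key=lambda c: np.min(np.abs(c - x)))
    if np.min(np.abs(new - x)) < 1e-6:         # avoid duplicating an existing sample
        new = grid[np.argmax([np.min(np.abs(g - x)) for g in grid])]
    x = np.append(x, new)
    y = np.append(y, f(new))

print("max |error| on grid:", np.max(np.abs(fit_rbf(x, y)(grid) - f(grid))))
```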

  10. Trajectory and Force Control of a Direct Drive Arm.

    DTIC Science & Technology

    1986-09-01

    AUTHOR(S): Chae H. An; CONTRACT OR GRANT: N00014-80-C. PERFORMING ORGANIZATION NAME AND ADDRESS / PROGRAM ELEMENT. [The remainder of this excerpt is OCR-garbled matrix notation for equations (2.9)-(2.17); the legible portion states that a theorem is used to compute the inertia terms translated to the center of mass of the load, followed by equations (2.16) and (2.17).]

  11. A practice course to cultivate students' comprehensive ability of photoelectricity

    NASA Astrophysics Data System (ADS)

    Lv, Yong; Liu, Yang; Niu, Chunhui; Liu, Lishuang

    2017-08-01

    After the studying of many theoretical courses, it's important and urgent for the students from specialty of optoelectronic information science and engineering to cultivate their comprehensive ability of photoelectricity. We set up a comprehensive practice course named "Integrated Design of Optoelectronic Information System" (IDOIS) for the purpose that students can integrate their knowledge of optics, electronics and computer programming to design, install and debug an optoelectronic system with independent functions. Eight years of practice shows that this practice course can train students' ability of analysis, design/development and debugging of photoelectric system, improve their ability in document retrieval, design proposal and summary report writing, teamwork, innovation consciousness and skill.

  12. The ISOLDE control system

    NASA Astrophysics Data System (ADS)

    Deloose, I.; Pace, A.

    1994-12-01

    The two CERN isotope separators named ISOLDE have been running on the new Personal Computer (PC) based control system since April 1992. The new architecture that makes heavy use of the commercial software and hardware of the PC market has been implemented on the 1700 geographically distributed control channels of the two separators and their experimental area. Eleven MSDOS Intel-based PCs with approximately 80 acquisition and control boards are used to access the equipment and are controlled from three PCs running Microsoft Windows used as consoles through a Novell Local Area Network. This paper describes the interesting solutions found and discusses the reduced programming workload and costs that have been obtained.

  13. Large Scale Portability of Hospital Information System Software

    PubMed Central

    Munnecke, Thomas H.; Kuhn, Ingeborg M.

    1986-01-01

    As part of its Decentralized Hospital Computer Program (DHCP) the Veterans Administration installed new hospital information systems in 169 of its facilities during 1984 and 1985. The application software for these systems is based on the ANS MUMPS language, is public domain, and is designed to be operating system and hardware independent. The software, developed by VA employees, is built upon a layered approach, where application packages layer on a common data dictionary which is supported by a Kernel of software. Communications between facilities are based on public domain Department of Defense ARPA net standards for domain naming, mail transfer protocols, and message formats, layered on a variety of communications technologies.

  14. Filtered epithermal quasi-monoenergetic neutron beams at research reactor facilities.

    PubMed

    Mansy, M S; Bashter, I I; El-Mesiry, M S; Habib, N; Adib, M

    2015-03-01

    Filtered neutron techniques were applied to produce quasi-monoenergetic neutron beams in the energy range of 1.5-133keV at research reactors. A simulation study was performed to characterize the filter components and transmitted beam lines. The filtered beams were characterized in terms of the optimal thickness of the main and additive components. The filtered neutron beams had high purity and intensity, with low contamination from the accompanying thermal emission, fast neutrons and γ-rays. A computer code named "QMNB" was developed in the "MATLAB" programming language to perform the required calculations. Copyright © 2014 Elsevier Ltd. All rights reserved.
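    The thickness optimization for such filters rests on exponential attenuation of the beam by each component. A hedged one-line model is sketched below; the materials, densities, and cross-sections are placeholders and bear no relation to the energy-dependent data used in QMNB.

```python
import math

BARN = 1e-24          # cm^2
AVOGADRO = 6.022e23

def transmission(thickness_cm, density_g_cm3, atomic_mass, sigma_barn):
    """Fraction of a monoenergetic beam transmitted through one filter component."""
    n = density_g_cm3 * AVOGADRO / atomic_mass        # atoms per cm^3
    return math.exp(-n * sigma_barn * BARN * thickness_cm)

# Hypothetical two-component filter: a main component plus a thin additive.
for t_main in (10, 20, 30):                           # cm of the main component
    T = transmission(t_main, 2.99, 44.96, 0.5) * transmission(2, 4.51, 47.87, 1.0)
    print(f"{t_main:2d} cm main component: T = {T:.3f}")
```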

  15. A Framework for Understanding Physics Students' Computational Modeling Practices

    NASA Astrophysics Data System (ADS)

    Lunk, Brandon Robert

    With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content knowledge, and physics knowledge in particular, can influence students' programming practices. In an effort to better understand this issue, I have developed a framework for modeling these practices based on a resource stance towards student knowledge. A resource framework models knowledge as the activation of vast networks of elements called "resources." Much like neurons in the brain, resources that become active can trigger cascading events of activation throughout the broader network. This model emphasizes the connectivity between knowledge elements and provides a description of students' knowledge base. Together with resources, the concepts of "epistemic games" and "frames" provide a means for addressing the interaction between content knowledge and practices. Although this framework has generally been limited to describing conceptual and mathematical understanding, it also provides a means for addressing students' programming practices. In this dissertation, I will demonstrate this facet of a resource framework as well as fill in an important missing piece: a set of epistemic games that can describe students' computational modeling strategies. The development of this theoretical framework emerged from the analysis of video data of students generating computational models during the laboratory component of a Matter & Interactions: Modern Mechanics course. Student participants across two semesters were recorded as they worked in groups to fix pre-written computational models that were initially missing key lines of code. Analysis of this video data showed that the students' programming practices were highly influenced by their existing physics content knowledge, particularly their knowledge of analytic procedures. While this existing knowledge was often applied in inappropriate circumstances, the students were still able to display a considerable amount of understanding of the physics content and of analytic solution procedures. These observations could not be adequately accommodated by the existing literature on programming comprehension. In extending the resource framework to the task of computational modeling, I model students' practices in terms of three important elements. First, a knowledge base includes resources for understanding physics, math, and programming structures. Second, a mechanism for monitoring and control describes students' expectations as being directed towards numerical, analytic, qualitative or rote solution approaches and which can be influenced by the problem representation. Third, a set of solution approaches---many of which were identified in this study---describe what aspects of the knowledge base students use and how they use that knowledge to enact their expectations. This framework allows us as researchers to track student discussions and pinpoint the source of difficulties. This work opens up many avenues of potential research. First, this framework gives researchers a vocabulary for extending Resource Theory to other domains of instruction, such as modeling how physics students use graphs. Second, this framework can be used as the basis for modeling expert physicists' programming practices.
Important instructional implications also follow from this research. Namely, as we broaden the use of computational modeling in the physics classroom, our instructional practices should focus on helping students understand the step-by-step nature of programming in contrast to the already salient analytic procedures.

  16. We look like our names: The manifestation of name stereotypes in facial appearance.

    PubMed

    Zwebner, Yonat; Sellier, Anne-Laure; Rosenfeld, Nir; Goldenberg, Jacob; Mayo, Ruth

    2017-04-01

    Research demonstrates that facial appearance affects social perceptions. The current research investigates the reverse possibility: Can social perceptions influence facial appearance? We examine a social tag that is associated with us early in life-our given name. The hypothesis is that name stereotypes can be manifested in facial appearance, producing a face-name matching effect , whereby both a social perceiver and a computer are able to accurately match a person's name to his or her face. In 8 studies we demonstrate the existence of this effect, as participants examining an unfamiliar face accurately select the person's true name from a list of several names, significantly above chance level. We replicate the effect in 2 countries and find that it extends beyond the limits of socioeconomic cues. We also find the effect using a computer-based paradigm and 94,000 faces. In our exploration of the underlying mechanism, we show that existing name stereotypes produce the effect, as its occurrence is culture-dependent. A self-fulfilling prophecy seems to be at work, as initial evidence shows that facial appearance regions that are controlled by the individual (e.g., hairstyle) are sufficient to produce the effect, and socially using one's given name is necessary to generate the effect. Together, these studies suggest that facial appearance represents social expectations of how a person with a specific name should look. In this way a social tag may influence one's facial appearance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. The checkpoint ordering problem

    PubMed Central

    Hungerländer, P.

    2017-01-01

    We suggest a new variant of a row layout problem: Find an ordering of n departments with given lengths such that the total weighted sum of their distances to a given checkpoint is minimized. The Checkpoint Ordering Problem (COP) is of both theoretical and practical interest. It has several applications and is conceptually related to some well-studied combinatorial optimization problems, namely the Single-Row Facility Layout Problem, the Linear Ordering Problem and a variant of parallel machine scheduling. In this paper we study the complexity of the COP and its special cases. The general version of the COP with an arbitrary but fixed number of checkpoints is NP-hard in the weak sense. We propose both a dynamic programming algorithm and an integer linear programming approach for the COP. Our computational experiments indicate that the COP is hard to solve in practice. While the run time of the dynamic programming algorithm strongly depends on the length of the departments, the integer linear programming approach is able to solve instances with up to 25 departments to optimality. PMID:29170574
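
    To make the objective concrete, the following is a minimal brute-force sketch in Python of one reading of the single-checkpoint COP: departments are laid out consecutively from position zero, and the weighted distances from their centres to a fixed checkpoint are summed. The function names and the toy instance are illustrative only; this is not the dynamic programming algorithm or the integer linear programming model of the paper, and enumeration is feasible only for very small n.

        from itertools import permutations

        def checkpoint_cost(order, lengths, weights, checkpoint):
            """Weighted sum of distances from department centres to a single
            fixed checkpoint, for departments laid out consecutively from 0."""
            cost, position = 0.0, 0.0
            for dept in order:
                centre = position + lengths[dept] / 2.0
                cost += weights[dept] * abs(centre - checkpoint)
                position += lengths[dept]
            return cost

        def brute_force_cop(lengths, weights, checkpoint):
            """Enumerate all orderings; only feasible for very small n."""
            depts = range(len(lengths))
            return min(permutations(depts),
                       key=lambda order: checkpoint_cost(order, lengths, weights, checkpoint))

        # Toy instance: three departments, checkpoint fixed at position 4.0.
        lengths = [3.0, 2.0, 5.0]
        weights = [1.0, 4.0, 2.0]
        best = brute_force_cop(lengths, weights, checkpoint=4.0)
        print(best, checkpoint_cost(best, lengths, weights, 4.0))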

  18. Plasma Separation Process: Betacell (BCELL) code: User's manual. [Bipolar barrier junction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taherzadeh, M.

    1987-11-13

    The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the Plasma Separation Program (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the betacell energy device maximum efficiency, degradation due to the emitting source radiation, and source/cell lifetime power reduction processes. Additionally, comparison is made between the Schottky and PN junction devices for betacell battery design purposes. Certain computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given here for comparison. 16 refs.

  19. The New Web-Based Hera Data Processing System at the HEASARC

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2011-01-01

    The HEASARC at NASA/GSFC has provided an on-line astronomical data processing system called Hera for several years. Hera provides a complete data processing environment, including installed software packages, local data storage, and the CPU resources needed to process the user's data. The original design of Hera, however, has two requirements that have limited its usefulness for some users, namely that 1) the user must download and install a small helper program on their own computer before using Hera, and 2) Hera requires that several computer ports/sockets be allowed to communicate through any local firewalls on the user's machine. Both of these restrictions can be problematic for some users; therefore we are now migrating Hera into a purely Web-based environment which only requires a standard Web browser. The first release of Web Hera is now publicly available at http://heasarc.gsfc.nasa.gov/webheara/. It currently provides a standard graphical interface for running hundreds of different data processing programs that are available in the HEASARC's ftools software package. Over the next year we plan to add more features to Web Hera, including an interactive command line interface, and more display and line capabilities.

  20. Interactive flutter analysis and parametric study for conceptual wing design

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1995-01-01

    An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate the flutter instability boundary of a flexible cantilever wing, when well defined structural and aerodynamic data are not available, and then study the effect of change in Mach number, dynamic pressure, torsional frequency, sweep, mass ratio, aspect ratio, taper ratio, center of gravity, and pitch inertia, to guide the development of the concept. The software was developed on MathCad (trademark) platform for Macintosh, with integrated documentation, graphics, database and symbolic mathematics. The analysis method was based on nondimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on torsional stiffness, sweep, mass ratio, aspect ratio, center of gravity location and pitch inertia radius of gyration. The plots were compiled in a Vaught Corporation report from a vast database of past experiments and wind tunnel tests. The computer program was utilized for flutter analysis of the outer wing of a Blended Wing Body concept, proposed by McDonnell Douglas Corporation. Using a set of assumed data, preliminary flutter boundary and flutter dynamic pressure variation with altitude, Mach number and torsional stiffness were determined.

  1. Parametric-Studies and Data-Plotting Modules for the SOAP

    NASA Technical Reports Server (NTRS)

    2008-01-01

    "Parametric Studies" and "Data Table Plot View" are the names of software modules in the Satellite Orbit Analysis Program (SOAP). Parametric Studies enables parameterization of as many as three satellite or ground-station attributes across a range of values and computes the average, minimum, and maximum of a specified metric, the revisit time, or 21 other functions at each point in the parameter space. This computation produces a one-, two-, or three-dimensional table of data representing statistical results across the parameter space. Inasmuch as the output of a parametric study in three dimensions can be a very large data set, visualization is a paramount means of discovering trends in the data (see figure). Data Table Plot View enables visualization of the data table created by Parametric Studies or by another data source: this module quickly generates a display of the data in the form of a rotatable three-dimensional-appearing plot, making it unnecessary to load the SOAP output data into a separate plotting program. The rotatable three-dimensionalappearing plot makes it easy to determine which points in the parameter space are most desirable. Both modules provide intuitive user interfaces for ease of use.

  2. 42 CFR 460.32 - Content and terms of PACE program agreement.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... phone number of the program director. (ii) Name of all governing body members. (iii) Name and phone number of a contact person for the governing body. (5) A participant bill of rights approved by CMS and...

  3. 42 CFR 460.32 - Content and terms of PACE program agreement.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... phone number of the program director. (ii) Name of all governing body members. (iii) Name and phone number of a contact person for the governing body. (5) A participant bill of rights approved by CMS and...

  4. 42 CFR 460.32 - Content and terms of PACE program agreement.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... phone number of the program director. (ii) Name of all governing body members. (iii) Name and phone number of a contact person for the governing body. (5) A participant bill of rights approved by CMS and...

  5. Message passing interface and multithreading hybrid for parallel molecular docking of large databases on petascale high performance computing machines.

    PubMed

    Zhang, Xiaohua; Wong, Sergio E; Lightstone, Felice C

    2013-04-30

    A mixed parallel scheme that combines the message passing interface (MPI) and multithreading was implemented in the AutoDock Vina molecular docking program. The resulting program, named VinaLC, was tested on the petascale high performance computing (HPC) machines at Lawrence Livermore National Laboratory. To exploit the typical cluster-type supercomputers, thousands of docking calculations were dispatched by the master process to run simultaneously on thousands of slave processes, where each docking calculation takes one slave process on one node, and within the node each docking calculation runs via multithreading on multiple CPU cores and shared memory. Input and output of the program and the data handling within the program were carefully designed to deal with large databases and ultimately achieve HPC on a large number of CPU cores. Parallel performance analysis of the VinaLC program shows that the code scales up to more than 15K CPUs with a very low overhead cost of 3.94%. One million flexible compound docking calculations took only 1.4 h to finish on about 15K CPUs. The docking accuracy of VinaLC has been validated against the DUD data set by the re-docking of X-ray ligands and an enrichment study; 64.4% of the top-scoring poses have RMSD values under 2.0 Å. The program has been demonstrated to have good enrichment performance on 70% of the targets in the DUD data set. An analysis of the enrichment factors calculated at various percentages of the screening database indicates VinaLC has very good early recovery of actives. Copyright © 2013 Wiley Periodicals, Inc.
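
    The master/slave dispatch described above can be sketched as a simple MPI master-worker loop. The Python (mpi4py) code below only illustrates the pattern, not the VinaLC implementation (which is C++ using MPI plus multithreading within each node); `dock_one`, the message tags, and the fake scores are placeholders.

        # Minimal mpi4py sketch of a master/worker dispatch loop.
        from mpi4py import MPI

        TAG_WORK, TAG_DONE, TAG_STOP = 1, 2, 3

        def dock_one(ligand_id):
            # Placeholder for a docking calculation (multithreaded C++ in VinaLC).
            return ligand_id, -7.5 - (ligand_id % 10) * 0.1  # fake score

        def main():
            comm = MPI.COMM_WORLD
            if comm.rank == 0:                              # master process
                ligands, results, status = list(range(100)), [], MPI.Status()
                active = comm.size - 1
                for worker in range(1, comm.size):          # prime each worker
                    comm.send(ligands.pop(), dest=worker, tag=TAG_WORK)
                while active:
                    result = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status)
                    results.append(result)
                    if ligands:                             # hand out the next ligand
                        comm.send(ligands.pop(), dest=status.Get_source(), tag=TAG_WORK)
                    else:                                   # no work left: stop worker
                        comm.send(None, dest=status.Get_source(), tag=TAG_STOP)
                        active -= 1
                print(len(results), "docking results collected")
            else:                                           # worker processes
                while True:
                    status = MPI.Status()
                    task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
                    if status.Get_tag() == TAG_STOP:
                        break
                    comm.send(dock_one(task), dest=0, tag=TAG_DONE)

        if __name__ == "__main__":
            main()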

  6. Naming Their World in a Culturally Responsive Space: Experiences of Hmong Adolescents in an After-School Theatre Program

    ERIC Educational Resources Information Center

    Ngo, Bic

    2017-01-01

    This article draws on ethnographic research of a youth theatre program within a Hmong arts organization to explore the ways in which a culturally responsive program nurtured critical consciousness among Hmong immigrant youth. Hmong youth "named" struggles with stereotypes and acculturation expectations, and constructed positive ethnic…

  7. An investigation of the critical components of a land ethic: An application of Q methodology

    NASA Astrophysics Data System (ADS)

    Spradling, Suzanne Shaw

    Scope and method of study. The purpose of this study was to reveal the underlying structure of the beliefs of a sample of environmental educators regarding the critical components of a land or environmental ethic. Participants in the study were 30 environmental educators from seven states. All had been trained in one or more of the following national environmental education programs: Project WILD, Project WET, Project Learning Tree, Leopold Education Project, or Leave No Trace. Ages of the participants ranged from 18--63 years. Q methodology directed the study. Each participant completed a Q-sort of 54 statements related to environmental ethics. The data were analyzed using the computer program PQMethod 2.06, which computed a correlation matrix as input for factor analysis and performed a VARIMAX rotation. Participant demographic data were collected in order to provide a more complete picture of the revealed structure of beliefs. Findings and conclusions. A three-factor solution was revealed from the analysis of the data. These factors represent the groupings of the participants with like beliefs in reference to the critical components of environmental ethics. Factor one was named Nature's Advocates. These individuals believe in equal rights for all parts of the environment. Factor two was named Nature's Stewards because of the revealed belief that humans were given dominion over the earth by the creator and that natural resources should be used responsibly. Factor three was named Nature's Romantics because of their belief that nature should be preserved for its aesthetic value and because of their naive approach to conservation. The demographic data added detail to the portrait created from the Q-sort data analysis. It is important, then, to take into consideration what environmental educators believe about environmental ethics in designing meaningful curriculum that seeks to foster the development of those ethics. This study reveals the beliefs of a sample of environmental educators relating to the critical components of environmental ethics.
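
    For readers unfamiliar with the analysis pipeline, the sketch below shows, in NumPy, the general shape of what a package such as PQMethod automates: correlate the participants' Q-sorts, extract a few factors, and apply a varimax rotation. It is a generic stand-in under stated assumptions (principal-component extraction, random placeholder data), not the PQMethod 2.06 algorithm.

        import numpy as np

        def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
            """Iterative SVD-based varimax rotation of a factor-loading matrix."""
            p, k = loadings.shape
            rotation = np.eye(k)
            last = 0.0
            for _ in range(max_iter):
                rotated = loadings @ rotation
                u, s, vt = np.linalg.svd(
                    loadings.T @ (rotated ** 3
                                  - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))))
                rotation = u @ vt
                if s.sum() < last * (1.0 + tol):
                    break
                last = s.sum()
            return loadings @ rotation

        # Toy Q-sort data: 30 participants x 54 statements (random stand-in values).
        rng = np.random.default_rng(7)
        sorts = rng.integers(-5, 6, size=(30, 54)).astype(float)
        corr = np.corrcoef(sorts)                    # person-by-person correlations
        eigvals, eigvecs = np.linalg.eigh(corr)
        top = np.argsort(eigvals)[::-1][:3]          # extract three factors
        loadings = eigvecs[:, top] * np.sqrt(eigvals[top])
        print(varimax(loadings).round(2)[:5])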

  8. Computational steering of GEM based detector simulations

    NASA Astrophysics Data System (ADS)

    Sheharyar, Ali; Bouhali, Othmane

    2017-10-01

    Gas-based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. These long-running simulations usually run on high-performance computers in batch mode. If the results lead to unexpected behavior, then the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This can result in inefficient resource utilization and an increase in the turnaround time for the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable the exploration of the live data as it is produced by the simulation.

  9. Sentence Comprehension: A Parallel Distributed Processing Approach

    DTIC Science & Technology

    1989-07-14

    [Report documentation page residue; no abstract is recoverable. Performing organization: Computer Sciences Division, Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213; monitoring organization in Arlington, Virginia 22217-5000; responsible individual: Dr. Alan L. Meyrowitz.]

  10. Fast parallel tandem mass spectral library searching using GPU hardware acceleration.

    PubMed

    Baumgardner, Lydia Ashleigh; Shanmugam, Avinash Kumar; Lam, Henry; Eng, Jimmy K; Martin, Daniel B

    2011-06-03

    Mass spectrometry-based proteomics is a maturing discipline of biologic research that is experiencing substantial growth. Instrumentation has steadily improved over time with the advent of faster and more sensitive instruments collecting ever larger data files. Consequently, the computational process of matching a peptide fragmentation pattern to its sequence, traditionally accomplished by sequence database searching and more recently also by spectral library searching, has become a bottleneck in many mass spectrometry experiments. In both of these methods, the main rate-limiting step is the comparison of an acquired spectrum with all potential matches from a spectral library or sequence database. This is a highly parallelizable process because the core computational element can be represented as a simple but arithmetically intense multiplication of two vectors. In this paper, we present a proof of concept project taking advantage of the massively parallel computing available on graphics processing units (GPUs) to distribute and accelerate the process of spectral assignment using spectral library searching. This program, which we have named FastPaSS (for Fast Parallelized Spectral Searching), is implemented in CUDA (Compute Unified Device Architecture) from NVIDIA, which allows direct access to the processors in an NVIDIA GPU. Our efforts demonstrate the feasibility of GPU computing for spectral assignment, through implementation of the validated spectral searching algorithm SpectraST in the CUDA environment.
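
    The "arithmetically intense multiplication of two vectors" at the heart of spectral library searching can be sketched as follows: spectra are binned onto a common m/z grid, normalised, and the query is scored against the whole library with one matrix-vector product, which is the step a GPU implementation such as FastPaSS parallelises. The NumPy code below is an illustrative sketch with invented bin sizes and data, not the FastPaSS or SpectraST implementation.

        import numpy as np

        def bin_spectrum(mz, intensity, n_bins=2000, mz_max=2000.0):
            """Bin a peak list onto a fixed m/z grid and L2-normalise it, so that
            spectrum similarity reduces to a plain dot product."""
            vec = np.zeros(n_bins)
            idx = np.clip((np.asarray(mz) / mz_max * n_bins).astype(int), 0, n_bins - 1)
            np.add.at(vec, idx, intensity)
            norm = np.linalg.norm(vec)
            return vec / norm if norm > 0 else vec

        def search(query_vec, library_matrix):
            """Score a query against every library spectrum at once.
            On a GPU this matrix-vector product is what gets parallelised."""
            scores = library_matrix @ query_vec          # one dot product per entry
            order = np.argsort(scores)[::-1]
            return order, scores[order]

        # Tiny fake library of 5 binned spectra plus a query resembling entry 3.
        rng = np.random.default_rng(0)
        library = np.array([bin_spectrum(rng.uniform(100, 1900, 50),
                                         rng.uniform(0, 1, 50)) for _ in range(5)])
        query = library[3] + rng.normal(0, 0.001, library.shape[1])
        query /= np.linalg.norm(query)
        ranks, scores = search(query, library)
        print(ranks[0], scores[0])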

  11. Computational Cognitive Neuroscience Modeling of Sequential Skill Learning

    DTIC Science & Technology

    2016-09-21

    [Report documentation page residue; no abstract is recoverable. Final report AFRL-AFOSR-VA-TR-2016-0320, "Computational Cognitive Neuroscience Modeling of Sequential Skill Learning," David Schnyer, The University of Texas at Austin, dated 09/21/2016; Distribution A, approved for public release.]

  12. A Context Menu for the Real World: Controlling Physical Appliances Through Head-Worn Infrared Targeting

    DTIC Science & Technology

    2013-12-10

    [Report documentation page residue; only a fragment is recoverable. Technical Report No. UCB/EECS-2013-200, Electrical Engineering and Computer Sciences, University of California at Berkeley; authors include Edward A. Lee and Björn Hartmann. Recoverable fragment: "PHYSICAL TARGET ACQUISITION STUDY: To understand the accuracy and performance of head-orientation-based selection through our device, we carried out ..."]

  13. A Context Menu for the Real World: Controlling Physical Appliances through Head-Worn Infrared Targeting

    DTIC Science & Technology

    2013-11-04

    [Report documentation page residue; only a fragment is recoverable. Technical Report No. UCB/EECS-2013-182, Electrical Engineering and Computer Sciences, University of California at Berkeley; authors include Edward A. Lee and Björn Hartmann. Recoverable fragment: "... to understand the accuracy and performance of head-orientation-based selection through our device, we carried out a comparative target acquisition study, where ..."]

  14. Computer-Aided Detection of Mammographic Masses in Dense Breast Images

    DTIC Science & Technology

    2005-06-01

    [Report documentation page residue; no abstract is recoverable. Annual summary report, June 2005, Kinnard, Ph.D.; contracting organization: Howard University, Washington, DC 20059. Recoverable fragments list a 2004 item, "Preparing for the Postdoctoral Institute" (Howard University and The University of Texas at El Paso), and an item beginning "Computer-Aided Diagnosis and Image ..."]

  15. Computing the Algebraic Immunity of Boolean Functions on the SRC-6 Reconfigurable Computer

    DTIC Science & Technology

    2012-03-01

    [Report documentation page residue; only a fragment of the abstract is recoverable. Report dated March 2012. The recoverable fragment notes that the reduced form developed for this conversion requires many fewer gates and has lower delay than a full transeunt triangle (the exact delay expressions are garbled in the source).]

  16. French MALE UAV Program

    DTIC Science & Technology

    2003-09-02

    [Report documentation page residue; no abstract is recoverable. Briefing "French MALE UAV Program" by the French Air Force (Ministère de la Défense / MoD France); the outline covers the SIDM CONOPS, the FAF imagery architecture, and the future French MALE UAV program.]

  17. Using a voice to put a name to a face: the psycholinguistics of proper name comprehension.

    PubMed

    Barr, Dale J; Jackson, Laura; Phillips, Isobel

    2014-02-01

    We propose that hearing a proper name (e.g., Kevin) in a particular voice serves as a compound memory cue that directly activates representations of a mutually known target person, often permitting reference resolution without any complex computation of shared knowledge. In a referential communication study, pairs of friends played a communication game, in which we monitored the eyes of one friend (the addressee) while he or she sought to identify the target person, in a set of four photos, on the basis of a name spoken aloud. When the name was spoken by a friend, addressees rapidly identified the target person, and this facilitation was independent of whether the friend was articulating a message he or she had designed versus one from a third party with whom the target person was not shared. Our findings suggest that the comprehension system takes advantage of regularities in the environment to minimize effortful computation about who knows what.

  18. The Argonne Leadership Computing Facility 2010 annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drugan, C.

    Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or for broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. Ultimately, we envision Mira as a stepping-stone to exascale-class computers that will be faster than petascale-class computers by a factor of a thousand. Pete Beckman, who served as the ALCF's Director for the past few years, has been named director of the newly created Exascale Technology and Computing Institute (ETCi). The institute will focus on developing exascale computing to extend scientific discovery and solve critical science and engineering problems. Just as Pete's leadership propelled the ALCF to great success, we know that ETCi will benefit immensely from his expertise and experience. Without question, the future of supercomputing is certainly in good hands. I would like to thank Pete for all his effort over the past two years, during which he oversaw the establishment of ALCF2, the deployment of the Magellan project, and increases in utilization, availability, and the number of projects using ALCF1. He managed the rapid growth of ALCF staff and made the facility what it is today. All the staff and users are better for Pete's efforts.

  19. The exploration study of fire damage to concrete specimen using x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Su, Yu-Min; Lee, Min-Gin; Chen, Guan-Ying

    2015-04-01

    Portland Cement Concrete (PCC) loses evaporable water at about 100 °C, decomposes C-S-H at about 200 °C, dehydrates CH at about 500 °C, and deconstructs C-S-H at about 900 °C. Concrete degradation and cracking are caused by several possible factors, such as vapor pressure in pores, thermal gradients, and the differing expansion rates of cement paste and aggregates. The objective of this exploration study was to assess porosity before and after high-temperature conditioning in the laboratory using medical X-ray computed tomography. The experimental program was designed to identify the mineral properties of the aggregates used and to determine the compressive, splitting tensile, and flexural strengths. Concrete cylinders were subjected to a single conditioning temperature, namely 400 °C, but two different heat conditioning times, namely four and eight hours. X-ray CT was administered on the concrete cylinders before and after the high-temperature conditioning to inspect the depth of the damage zone, which should exhibit more porosity than the undamaged material. The damage zone is examined and identified through the changes in porosity of the concrete paste and aggregates within a cylinder. The significance of the exploration study is to provide an in-depth insight for defining the damaged zone, toward a better understanding of subsequent repair and reinforcement work.

  20. Does gender discrimination exist in a gynecology training program in a private hospital?

    PubMed

    Geisler, J P; Mernitz, C S; Geisler, M J; Harsha, C G; Eskew, P N

    1999-01-01

    Does gender discrimination by attending physicians exist in a residency with regard to residents' opportunities to perform complete/operative management of hysterectomies versus serving only as surgical assistants? The program studied is a 4-year obstetrics and gynecology residency with 3 residents per year. All cases involving a resident were recorded in a computer program designed by one of the authors (C.S.M.) to collect data for Residency Review Committee reports. Data could be sorted in a variety of ways, including level of management, date of procedure, Physicians' Current Procedural Terminology codes, and attending physician name or resident name. Only intrafascial and extrafascial hysterectomies for benign disease were included in the study. Data were collected from July 1, 1996 to March 31, 1997. Five hundred and forty-nine hysterectomies with residents participating as primary surgeon (complete/operative management) or surgical assistant were performed during the study period. Complete/operative management was performed by the resident in 82.5% of cases while the resident was surgical assistant in 17.5%. Male residents were responsible for complete/operative management in 81.6% of cases and female residents in 83.2% of cases (P = 0.33). Male attending physicians were more likely to allow residents (male or female) to participate as the primary surgeon in abdominal hysterectomies (95.3%) and vaginal hysterectomies (68.5%) than female attending physicians (abdominal, 87.0% and vaginal, 57.3%) (P < 0.001 and P = 0.006, respectively). Although male attending physicians were more likely than female attending physicians to allow residents to perform complete/operative management, there was no discrimination as to whether the resident in question was male or female. When determining the level of management private gynecologists will allow residents to perform, they do not practice gender discrimination.

  1. A two-step method for developing a control rod program for boiling water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1992-01-01

    This paper reports on a two-step method established for the generation of a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution in core depletion. In the new method, the BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed by using the Haling profiles generated at step 1 as a target. The new method is implemented in a computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test, the new method achieved a gain in cycle length over a time-invariant target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spectral shift.

  2. Navy CG(X) Cruiser Program: Background for Congress

    DTIC Science & Technology

    2010-06-10

    [Report documentation page residue; only fragments of the report body are recoverable: one fragment cites an article ("Navy's Top Officer Sees Lessons in Shipbuilding Program Failures," GovernmentExecutive.com, September 24, 2008) that quoted Admiral Gary Roughead, the Chief of Naval Operations; another asks whether the schedule for procuring CG(X)s was properly aligned with foreign-country ballistic missile development programs, noting that a 2005 defense trade press report, for example, stated ...]

  3. Poster — Thur Eve — 69: Computational Study of DVH-guided Cancer Treatment Planning Optimization Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghomi, Pooyan Shirvani; Zinchenko, Yuriy

    2014-08-15

    Purpose: To compare methods to incorporate the Dose Volume Histogram (DVH) curves into the treatment planning optimization. Method: The performance of three methods, namely, the conventional Mixed Integer Programming (MIP) model, a convex moment-based constrained optimization approach, and an unconstrained convex moment-based penalty approach, is compared using anonymized data of a prostate cancer patient. Three plans were generated using the corresponding optimization models. Four Organs at Risk (OARs) and one Tumor were involved in the treatment planning. The OARs and Tumor were discretized into a total of 50,221 voxels. The number of beamlets was 943. We used the commercially available optimization software Gurobi and Matlab to solve the models. Plan comparison was done by recording the model runtime followed by visual inspection of the resulting dose volume histograms. Conclusion: We demonstrate the effectiveness of the moment-based approaches to replicate the set of prescribed DVH curves. The unconstrained convex moment-based penalty approach is concluded to have the greatest potential to reduce the computational effort and holds a promise of substantial computational speed up.
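
    As a rough illustration of the unconstrained moment-based penalty idea, the sketch below penalises the mismatch between a few raw moments of the delivered dose (a linear function of the beamlet intensities) and the moments of a prescribed dose. The dose-influence matrix, moment orders, weights, and prescription are synthetic, and the formulation is a simplified stand-in rather than the authors' model.

        import numpy as np
        from scipy.optimize import minimize

        def moments(dose, orders=(1, 2, 3)):
            """Raw dose moments E[d^k], used here as a smooth surrogate for DVH shape."""
            return np.array([np.mean(dose ** k) for k in orders])

        def penalty(x, D, target_moments, weights):
            """Moment-based penalty: squared mismatch between the moments of the
            delivered dose D @ x and the prescribed moments."""
            d = D @ np.maximum(x, 0.0)        # beamlet intensities kept non-negative
            return float(np.sum(weights * (moments(d) - target_moments) ** 2))

        # Synthetic problem: 200 voxels, 30 beamlets, made-up prescription moments.
        rng = np.random.default_rng(1)
        D = rng.uniform(0.0, 1.0, size=(200, 30))       # dose-influence matrix
        target = moments(np.full(200, 60.0))            # e.g. a uniform 60 Gy target
        w = np.array([1.0, 1e-3, 1e-6])                 # de-emphasise higher moments
        x0 = np.full(30, 1.0)
        res = minimize(penalty, x0, args=(D, target, w), method="L-BFGS-B",
                       bounds=[(0.0, None)] * 30)
        print(res.fun, res.x.round(2)[:5])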

  4. National Ridesharing Demonstration Program : `Maxi-Taxi' Services in the Tidewater Region of Virginia

    DOT National Transportation Integrated Search

    1985-07-01

    In November 1980, the Tidewater Regional Transit Authority (TRT) Implemented a series of shared-ride taxi services under the program name "Maxi-Taxi" (name which was subsequently changed to Maxi-Ride to avert legal challenges). These services, suppli...

  5. 77 FR 51571 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-24

    ... Music and Data Processing Devices, Computers, and Components Thereof; Notice of Receipt of Complaint... complaint entitled Wireless Communication Devices, Portable Music and Data Processing Devices, Computers..., portable music and data processing devices, computers, and components thereof. The complaint names as...

  6. Bin-Carver: Automatic Recovery of Binary Executable Files

    DTIC Science & Technology

    2012-05-01

    [Report documentation page residue; only a fragment of the abstract is recoverable. Performing organization: Texas A&M University, Department of Computer Science and Engineering, College Station, TX 77840. Recoverable fragment: "... least 23 4K data blocks) and observed how this binary file gets organized in a brand new disk. We found that this simple ls file actually gets ..."]

  7. Human and Organizational Risk Modeling: Critical Personnel and Leadership in Network Organizations

    DTIC Science & Technology

    2006-08-01

    [Report documentation page residue; only a fragment of the abstract is recoverable. Performing organization: Carnegie Mellon University, School of Computer Science, Pittsburgh, PA 15213. Recoverable fragment: "... organization can help improve performance and protect against the risk of loss. But the study of critical personnel has traditionally used static structural ..."]

  8. NNEPEQ: Chemical equilibrium version of the Navy/NASA Engine Program

    NASA Technical Reports Server (NTRS)

    Fishbach, Laurence H.; Gordon, Sanford

    1988-01-01

    The Navy NASA Engine Program, NNEP, currently is in use at a large number of government agencies, commercial companies and universities. This computer code has been used extensively to calculate the design and off-design (matched) performance of a broad range of turbine engines, ranging from subsonic turboprops to variable cycle engines for supersonic transports. Recently, there has been increased interest in applications for which NNEP was not capable of simulating, namely, high Mach applications, alternate fuels including cryogenics, and cycles such as the gas generator air-turbo-rocket (ATR). In addition, there is interest in cycles employing ejectors such as for military fighters. New engine component models had to be created for incorporation into NNEP, and it was found necessary to include chemical dissociation effects of high temperature gases. The incorporation of these extended capabilities into NNEP is discussed and some of the effects of these changes are illustrated.

  9. Fatigue crack growth model RANDOM2 user manual, appendix 1

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    The FORTRAN program RANDOM2 is documented. RANDOM2 is based on fracture mechanics using a probabilistic fatigue crack growth model. It predicts the random lifetime of an engine component to reach a given crack size. Included in this user manual are details regarding the theoretical background of RANDOM2, input data, instructions and a sample problem illustrating the use of RANDOM2. Appendix A gives information on the physical quantities, their symbols, FORTRAN names, and both SI and U.S. Customary units. Appendix B includes photocopies of the actual computer printout corresponding to the sample problem. Appendices C and D detail the IMSL, Ver. 10(1), subroutines and functions called by RANDOM2 and a SAS/GRAPH(2) program that can be used to plot both the probability density function (p.d.f.) and the cumulative distribution function (c.d.f.).

  10. Coding conventions and principles for a National Land-Change Modeling Framework

    USGS Publications Warehouse

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  11. SSME Bearing and Seal Tester Data Compilation, Analysis and Reporting; and Refinement of the Cryogenic Bearing Analysis Mathematical Model

    NASA Technical Reports Server (NTRS)

    Moore, James; Marty, Dave; Cody, Joe

    2000-01-01

    SRS and NASA/MSFC have developed software with unique capabilities to couple bearing kinematic modeling with high fidelity thermal modeling. The core thermomechanical modeling software was developed by SRS and others in the late 1980's and early 1990's under various contractual efforts. SRS originally developed software that enabled SHABERTH (Shaft Bearing Thermal Model) and SINDA (Systems Improved Numerical Differencing Analyzer) to exchange data autonomously, allowing bearing component temperature effects to propagate into the steady-state bearing mechanical model. A separate contract was issued in 1990 to create a personal computer version of the software. At that time SRS performed major improvements to the code. Both SHABERTH and SINDA were independently ported to the PC and compiled. SRS then integrated the two programs into a single program that was named SINSHA. This was a major code improvement.

  12. SSME Bearing and Seal Tester Data Compilation, Analysis, and Reporting; and Refinement of the Cryogenic Bearing Analysis Mathematical Model

    NASA Technical Reports Server (NTRS)

    Moore, James; Marty, Dave; Cody, Joe

    2000-01-01

    SRS and NASA/MSFC have developed software with unique capabilities to couple bearing kinematic modeling with high fidelity thermal modeling. The core thermomechanical modeling software was developed by SRS and others in the late 1980's and early 1990's under various contractual efforts. SRS originally developed software that enabled SHABERTH (Shaft Bearing Thermal Model) and SINDA (Systems Improved Numerical Differencing Analyzer) to exchange data autonomously, allowing bearing component temperature effects to propagate into the steady-state bearing mechanical model. A separate contract was issued in 1990 to create a personal computer version of the software. At that time SRS performed major improvements to the code. Both SHABERTH and SINDA were independently ported to the PC and compiled. SRS then integrated the two programs into a single program that was named SINSHA. This was a major code improvement.

  13. NNEPEQ - Chemical equilibrium version of the Navy/NASA Engine Program

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.; Gordon, S.

    1989-01-01

    The Navy NASA Engine Program, NNEP, currently is in use at a large number of government agencies, commercial companies and universities. This computer code has been used extensively to calculate the design and off-design (matched) performance of a broad range of turbine engines, ranging from subsonic turboprops to variable cycle engines for supersonic transports. Recently, there has been increased interest in applications for which NNEP was not capable of simulating, namely, high Mach applications, alternate fuels including cryogenics, and cycles such as the gas generator air-turbo-rocket (ATR). In addition, there is interest in cycles employing ejectors such as for military fighters. New engine component models had to be created for incorporation into NNEP, and it was found necessary to include chemical dissociation effects of high temperature gases. The incorporation of these extended capabilities into NNEP is discussed and some of the effects of these changes are illustrated.

  14. Art at the Bedside: Reflections on Use of Visual Imagery in Hospital Chaplaincy.

    PubMed

    Dodge-Peters Daiss, Susan

    2016-03-01

    'Art at the Bedside' is the name given to a hospital visitation program during which works of art loaded onto a computer are used to start conversations with patients and their families. The article traces the genesis of the program that evolved from the author's dual training in art museum education and hospital chaplaincy through the evolution of the practice, now in its sixth year. Reflections on the practice itself are the focus of this article, from identifying the kinds of responses frequently elicited by the artwork to understanding how these works of art seem to forge immediate connections between the patient and the facilitator. Ultimately posed in this reflection is whether the 'Art at the Bedside' experience might suggest a future for the integration of the visual arts more broadly into hospital - and related - chaplaincy. © The Author(s) 2016.

  15. Identification and description of low-molecular weight chemicals inducing hypersensitivity in man

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mari, A.; Misiti, D.; Dorello-Misiti, P.

    1988-10-01

    The purpose of this study is to compose a list of allergenic chemicals. Each chemical is described in a monograph. The objective of such a monograph program is to collect from the international scientific literature the available relevant experimental, chemical, and epidemiological data on chemicals to which humans are known to be exposed and sensitized. A list of 721 chemicals, with related synonyms and trade names, that induce allergic responses and hypersensitivities was prepared. The chemicals were selected on the basis of evidence of human exposure and sensitization. Each monograph contains several data considered relevant to the evaluation of the sensitizing hazards of chemical substances. The data are divided into three sections: chemical identity, sensitizing power, and occurrence. All the data contained in the monographs, along with the references and the synonyms, are stored in a database application computer program. Preliminary results for 308 of the 721 monographs analyzed are reported.

  16. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

    Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images needs high performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging issue of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to successively compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES), exploiting parallel architectures and the GPU, are thus presented. The innovative aspects of the implementation are (i) the effectiveness on a large variety of unorganized and complex datasets, (ii) the capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented.
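
    To show why matching dominates the processing chain, the sketch below performs brute-force nearest-neighbour descriptor matching with a ratio test: the all-pairs distance matrix is the expensive part that parallel and GPU implementations accelerate. This is a generic NumPy illustration with random descriptors, not the LARES code.

        import numpy as np

        def match_descriptors(desc_a, desc_b, ratio=0.8):
            """Brute-force nearest-neighbour matching with a ratio test.
            The all-pairs distance matrix is what makes matching expensive and
            what GPU/parallel implementations accelerate."""
            # Squared Euclidean distances between every descriptor pair.
            d2 = (np.sum(desc_a**2, axis=1)[:, None]
                  + np.sum(desc_b**2, axis=1)[None, :]
                  - 2.0 * desc_a @ desc_b.T)
            nearest = np.argsort(d2, axis=1)[:, :2]          # two best candidates
            matches = []
            for i, (j1, j2) in enumerate(nearest):
                if d2[i, j1] < ratio**2 * d2[i, j2]:         # ratio test on distances
                    matches.append((i, j1))
            return matches

        # Toy example: 500 random 128-D descriptors, second image is a noisy copy.
        rng = np.random.default_rng(42)
        desc_a = rng.normal(size=(500, 128))
        desc_b = desc_a + rng.normal(scale=0.05, size=desc_a.shape)
        print(len(match_descriptors(desc_a, desc_b)), "tentative correspondences")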

  17. Architectural requirements for the Red Storm computing system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camp, William J.; Tomkins, James Lee

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development and production operation of similar machines at Sandia. Red Storm will have an unusually high-bandwidth, low-latency interconnect, specially designed hardware and software reliability features, a lightweight kernel compute node operating system and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  18. 2009 ALCF annual report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, P.; Martin, D.; Drugan, C.

    2010-11-23

    This year the Argonne Leadership Computing Facility (ALCF) delivered nearly 900 million core hours of science. The research conducted at their leadership-class facility touched our lives in both minute and massive ways - whether it was studying the catalytic properties of gold nanoparticles, predicting protein structures, or unearthing the secrets of exploding stars. The authors remained true to their vision to act as the forefront computational center in extending science frontiers by solving pressing problems for our nation. Our success in this endeavor was due mainly to the Department of Energy's (DOE) INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program. The program awards significant amounts of computing time to computationally intensive, unclassified research projects that can make high-impact scientific advances. This year, DOE allocated 400 million hours of time to 28 research projects at the ALCF. Scientists from around the world conducted the research, representing such esteemed institutions as the Princeton Plasma Physics Laboratory, National Institute of Standards and Technology, and European Center for Research and Advanced Training in Scientific Computation. Argonne also provided Director's Discretionary allocations for research challenges, addressing such issues as reducing aerodynamic noise, critical for next-generation 'green' energy systems. Intrepid - the ALCF's 557-teraflops IBM Blue Gene/P supercomputer - enabled astounding scientific solutions and discoveries. Intrepid went into full production five months ahead of schedule. As a result, the ALCF nearly doubled the days of production computing available to the DOE Office of Science, INCITE awardees, and Argonne projects. One of the fastest supercomputers in the world for open science, the energy-efficient system uses about one-third as much electricity as a machine of comparable size built with more conventional parts. In October 2009, President Barack Obama recognized the excellence of the entire Blue Gene series by awarding it the National Medal of Technology and Innovation. Other noteworthy achievements included the ALCF's collaboration with the National Energy Research Scientific Computing Center (NERSC) to examine cloud computing as a potential new computing paradigm for scientists. Named Magellan, the DOE-funded initiative will explore which science application programming models work well within the cloud, as well as evaluate the challenges that come with this new paradigm. The ALCF obtained approval for its next-generation machine, a 10-petaflops system to be delivered in 2012. This system will allow us to resolve ever more pressing problems, even more expeditiously, through breakthrough science in the years to come.

  19. Northwest plant names and symbols for ecosystem inventory and analysis.

    Treesearch

    G.A. Garrison; J.M. Skovlin; C.E. Poulton; A.H. Winward

    1976-01-01

    This paper is basically an alpha code and name listing of forest and rangeland grasses, sedges, rushes, forbs, shrubs, and trees of Oregon, Washington, and Idaho. The code expedites recording of vegetation inventory data and is especially useful to those processing their data by contemporary computer systems. Editorial and secretarial personnel will find the name and...

  20. Attention, Exposure Duration, and Gaze Shifting in Naming Performance

    ERIC Educational Resources Information Center

    Roelofs, Ardi

    2011-01-01

    Two experiments are reported in which the role of attribute exposure duration in naming performance was examined by tracking eye movements. Participants were presented with color-word Stroop stimuli and left- or right-pointing arrows on different sides of a computer screen. They named the color attribute and shifted their gaze to the arrow to…

  1. Interdisciplinary Distinguished Seminar Series

    DTIC Science & Technology

    2014-08-29

    [Report documentation page residue; only a fragment is recoverable. Army-sponsored report boilerplate (views are not an official Department of the Army position unless so designated). Recoverable fragment: "... capabilities, estimation and optimization techniques, image and color standards, efficient programming methods and efficient ASIC designs. This seminar will ..."]

  2. Colleges or Universities with L.D. Programs.

    ERIC Educational Resources Information Center

    Association for Children and Adults with Learning Disabilities, Pittsburgh, PA.

    The listing describes approximately 50 colleges and universities with programs for learning disabled (LD) students. Descriptions are arranged alphabetically by state and include the college's name, address, telephone number, name of contact person, and brief description. Among services listed are textbooks on cassette; academic, career, and…

  3. 48 CFR 552.216-73 - Ordering Information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... transmission or □ computer-to-computer Electronic Data Interchange (EDI). (b) An offeror electing to receive computer-to-computer EDI is requested to indicate below the name, address, and telephone number of the representative to be contacted regarding establishment of an EDI interface. (c) An offeror electing to receive...

  4. Users manual for the IMA program. Appendix C: Profile design program listing

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The source code for the Profile Design Program (PDP) for the Impulsive Mission Analysis (IMA) program is divided into several files. In a similar manner, the FORTRAN listings of the PDP's subroutines and function routines are organized into several groups in this appendix. Within each group, the FORTRAN listings are ordered alphabetically by routine name. Names and brief descriptions of each routine are listed in the same order as the Fortran listings.

  5. Elections: DOD Can Strengthen Evaluation of Its Absentee Voting Assistance Program

    DTIC Science & Technology

    2010-06-01

    [Report documentation page residue; no abstract is recoverable. GAO report GAO-10-476, "Elections: DOD Can Strengthen Evaluation of Its Absentee Voting Assistance Program," June 2010.]

  6. Control mechanism of double-rotator-structure ternary optical computer

    NASA Astrophysics Data System (ADS)

    Kai, SONG; Liping, YAN

    2017-03-01

    The double-rotator-structure ternary optical processor (DRSTOP) has two defining characteristics, namely giant data-bit parallelism and a reconfigurable processor: it can handle thousands of data bits in parallel and can run much faster than electronic computers and other optical computing systems developed so far. In order to put DRSTOP into practical application, this paper established a series of methods, namely a task classification method, a data-bits allocation method, a control information generation method, a control information formatting and sending method, and a decoded-results obtaining method, among others. These methods form the control mechanism of DRSTOP and make it an automated computing platform. Compared with traditional calculation tools, the DRSTOP computing platform can ease the contradiction between high energy consumption and big data computing by greatly reducing the cost of communications and I/O. Finally, the paper designed a set of experiments for the DRSTOP control mechanism to verify its feasibility and correctness. Experimental results showed that the control mechanism is correct, feasible and efficient.

  7. XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations

    NASA Astrophysics Data System (ADS)

    Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.

    2013-01-01

    XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code. Program summary: Program title: XMDS2. Catalogue identifier: AENK_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 2. No. of lines in distributed program, including test data, etc.: 872490. No. of bytes in distributed program, including test data, etc.: 45522370. Distribution format: tar.gz. Programming language: Python and C++. Computer: Any computer with a Unix-like system, a C++ compiler and Python. Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux. RAM: Problem dependent (roughly 50 bytes per grid point). Classification: 4.3, 6.5. External routines: The external libraries required are problem-dependent. Uses FFTW3 Fourier transforms (used only for FFT-based spectral methods), dSFMT random number generation (used only for stochastic problems), MPI message-passing interface (used only for distributed problems), HDF5, GNU Scientific Library (used only for Bessel-based spectral methods) and a BLAS implementation (used only for non-FFT-based spectral methods). Nature of problem: General coupled initial-value stochastic partial differential equations. Solution method: Spectral method with method-of-lines integration. Running time: Determined by the size of the problem.

  8. Local Flood Proofing Programs

    DTIC Science & Technology

    2005-02-01

    [Report documentation page and page-footer residue; only fragments of the report body are recoverable: "... Carolina, funded its flood audits and other flood protection projects with stormwater utility income. Impact fees: Impact fees are contributions ... determining appropriate projects." and, from a sidebar titled Bolingbrook's Flood Audit, "Bolingbrook, Illinois, has used different ..."]

  9. Foreign Language Translation of Chemical Nomenclature by Computer

    PubMed Central

    2009-01-01

    Chemical compound names remain the primary method for conveying molecular structures between chemists and researchers. In research articles, patents, chemical catalogues, government legislation, and textbooks, the use of IUPAC and traditional compound names is universal, despite efforts to introduce more machine-friendly representations such as identifiers and line notations. Fortunately, advances in computing power now allow chemical names to be parsed and generated (read and written) with almost the same ease as conventional connection tables. A significant complication, however, is that although the vast majority of chemistry uses English nomenclature, a significant fraction is in other languages. This complicates the task of filing and analyzing chemical patents, purchasing from compound vendors, and text mining research articles or Web pages. We describe some issues with manipulating chemical names in various languages, including British, American, German, Japanese, Chinese, Spanish, Swedish, Polish, and Hungarian, and describe the current state-of-the-art in software tools to simplify the process. PMID:19239237

  10. JUPITER: Joint Universal Parameter IdenTification and Evaluation of Reliability - An Application Programming Interface (API) for Model Analysis

    USGS Publications Warehouse

    Banta, Edward R.; Poeter, Eileen P.; Doherty, John E.; Hill, Mary C.

    2006-01-01

    The Joint Universal Parameter IdenTification and Evaluation of Reliability Application Programming Interface (JUPITER API) improves the computer programming resources available to those developing applications (computer programs) for model analysis. The JUPITER API consists of eleven Fortran-90 modules that provide for encapsulation of data and operations on that data. Each module contains one or more entities: data, data types, subroutines, functions, and generic interfaces. The modules do not constitute computer programs themselves; instead, they are used to construct computer programs. Such computer programs are called applications of the API. The API provides common modeling operations for use by a variety of computer applications. The models being analyzed are referred to here as process models, and may, for example, represent the physics, chemistry, and/or biology of a field or laboratory system. Process models commonly are constructed using published models such as MODFLOW (Harbaugh et al., 2000; Harbaugh, 2005), MT3DMS (Zheng and Wang, 1996), HSPF (Bicknell et al., 1997), PRMS (Leavesley and Stannard, 1995), and many others. The process model may be accessed by a JUPITER API application as an external program, or it may be implemented as a subroutine within a JUPITER API application. In either case, execution of the model takes place in a framework designed by the application programmer. This framework can be designed to take advantage of any parallel-processing capabilities possessed by the process model, as well as the parallel-processing capabilities of the JUPITER API. Model analyses for which the JUPITER API could be useful include, for example: comparing model results to observed values to determine how well the model reproduces system processes and characteristics; using sensitivity analysis to determine the information provided by observations to parameters and predictions of interest; determining the additional data needed to improve selected model predictions; using calibration methods to modify parameter values and other aspects of the model; comparing predictions to regulatory limits; quantifying the uncertainty of predictions based on the results of one or many simulations using inferential or Monte Carlo methods; and determining how to manage the system to achieve stated objectives. The capabilities provided by the JUPITER API include, for example, communication with process models, parallel computations, compressed storage of matrices, and flexible input capabilities. The input capabilities use input blocks suitable for lists or arrays of data. The input blocks needed for one application can be included within one data file or distributed among many files. Data exchange between different JUPITER API applications or between applications and other programs is supported by data-exchange files. The JUPITER API has already been used to construct a number of applications. Three simple example applications are presented in this report. More complicated applications include the universal inverse code UCODE_2005 (Poeter et al., 2005), the multi-model analysis MMA (Eileen P. Poeter, Mary C. Hill, E.R. Banta, S.W. Mehl, and Steen Christensen, written commun., 2006), and a code named OPR_PPR (Matthew J. Tonkin, Claire R. Tiedeman, Mary C. Hill, and D. Matthew Ely, written commun., 2006). This report describes a set of underlying organizational concepts and complete specifics about the JUPITER API.
While understanding the organizational concept presented is useful to understanding the modules, other organizational concepts can be used in applications constructed using the JUPITER API.

  11. A study on automated anatomical labeling to arteries concerning with colon from 3D abdominal CT images

    NASA Astrophysics Data System (ADS)

    Hoang, Bui Huy; Oda, Masahiro; Jiang, Zhengang; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Mori, Kensaku

    2011-03-01

    This paper presents an automated anatomical labeling method, based on multi-class AdaBoost, for arteries extracted from contrast-enhanced 3D CT images. In abdominal surgery, understanding the vasculature related to a target organ such as the colon is very important, so the anatomical structure of blood vessels needs to be understood by computers in a system supporting abdominal surgery. Several studies address automated anatomical labeling, but none has targeted the arteries related to the colon. The proposed method obtains a tree structure of arteries from the artery region and calculates feature values of each branch, namely thickness, curvature, direction, and running vectors of the branch. Candidate artery names are then computed by classifiers trained to output artery names. Finally, a global optimization process is applied to the candidate names to determine the final names. The target arteries in this paper are nine lower abdominal arteries (AO, LCIA, RCIA, LEIA, REIA, SMA, IMA, LIIA, RIIA). We applied the proposed method to 14 contrast-enhanced 3D abdominal CT datasets and evaluated the results with a leave-one-out scheme. The average precision and recall rates of the proposed method were 87.9% and 93.3%, respectively. The results are applicable to anatomical name display in surgical simulation and computer-aided surgery.
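
    The classification step described above can be illustrated with a short Python sketch. This is not the authors' implementation: it assumes scikit-learn's AdaBoostClassifier, uses a plain leave-one-out split over branches rather than the paper's leave-one-case-out evaluation, assumes the branch features and labels are already extracted, and omits the global optimization step that refines the candidate names.

        # Hedged sketch: multi-class branch labeling with AdaBoost (not the authors' code).
        # Branch features (thickness, curvature, direction, ...) and integer labels are
        # assumed to be precomputed.
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import LeaveOneOut

        ARTERY_NAMES = ["AO", "LCIA", "RCIA", "LEIA", "REIA", "SMA", "IMA", "LIIA", "RIIA"]

        def label_branches(features, labels):
            """Leave-one-out evaluation of a multi-class AdaBoost labeler.

            features : (n_branches, n_features) array of per-branch descriptors
            labels   : (n_branches,) array of indices into ARTERY_NAMES
            """
            predictions = np.empty_like(labels)
            for train_idx, test_idx in LeaveOneOut().split(features):
                clf = AdaBoostClassifier(n_estimators=100, random_state=0)
                clf.fit(features[train_idx], labels[train_idx])
                predictions[test_idx] = clf.predict(features[test_idx])
            return predictions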

  12. Scalable and portable visualization of large atomistic datasets

    NASA Astrophysics Data System (ADS)

    Sharma, Ashish; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2004-10-01

    A scalable and portable code named Atomsviewer has been developed to interactively visualize a large atomistic dataset consisting of up to a billion atoms. The code uses a hierarchical view frustum-culling algorithm based on the octree data structure to efficiently remove atoms outside of the user's field-of-view. Probabilistic and depth-based occlusion-culling algorithms then select atoms, which have a high probability of being visible. Finally a multiresolution algorithm is used to render the selected subset of visible atoms at varying levels of detail. Atomsviewer is written in C++ and OpenGL, and it has been tested on a number of architectures including Windows, Macintosh, and SGI. Atomsviewer has been used to visualize tens of millions of atoms on a standard desktop computer and, in its parallel version, up to a billion atoms. Program summaryTitle of program: Atomsviewer Catalogue identifier: ADUM Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADUM Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer for which the program is designed and others on which it has been tested: 2.4 GHz Pentium 4/Xeon processor, professional graphics card; Apple G4 (867 MHz)/G5, professional graphics card Operating systems under which the program has been tested: Windows 2000/XP, Mac OS 10.2/10.3, SGI IRIX 6.5 Programming languages used: C++, C and OpenGL Memory required to execute with typical data: 1 gigabyte of RAM High speed storage required: 60 gigabytes No. of lines in the distributed program including test data, etc.: 550 241 No. of bytes in the distributed program including test data, etc.: 6 258 245 Number of bits in a word: Arbitrary Number of processors used: 1 Has the code been vectorized or parallelized: No Distribution format: tar gzip file Nature of physical problem: Scientific visualization of atomic systems Method of solution: Rendering of atoms using computer graphic techniques, culling algorithms for data minimization, and levels-of-detail for minimal rendering Restrictions on the complexity of the problem: None Typical running time: The program is interactive in its execution Unusual features of the program: None References: The conceptual foundation and subsequent implementation of the algorithms are found in [A. Sharma, A. Nakano, R.K. Kalia, P. Vashishta, S. Kodiyalam, P. Miller, W. Zhao, X.L. Liu, T.J. Campbell, A. Haas, Presence—Teleoperators and Virtual Environments 12 (1) (2003)].
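
    The first of Atomsviewer's culling stages can be sketched as follows. This is an illustrative Python/NumPy sketch of hierarchical view-frustum culling with an octree, not the actual C++/OpenGL code; the frustum is assumed to be given as planes (normal, d) with "inside" meaning dot(normal, x) + d >= 0, and the root node would be built with indices = np.arange(len(positions)).

        # Hedged sketch of octree-based view-frustum culling (not Atomsviewer itself).
        import numpy as np

        class OctreeNode:
            """Octree over atom positions; leaves store index arrays into `positions`."""
            def __init__(self, lo, hi, indices, positions, leaf_size=256, depth=0):
                self.lo, self.hi, self.indices, self.children = lo, hi, indices, []
                if len(indices) > leaf_size and depth < 12:
                    mid = 0.5 * (lo + hi)
                    p = positions[indices]
                    for octant in range(8):
                        mask = np.ones(len(indices), dtype=bool)
                        clo, chi = lo.copy(), hi.copy()
                        for axis in range(3):
                            if (octant >> axis) & 1:
                                mask &= p[:, axis] >= mid[axis]
                                clo[axis] = mid[axis]
                            else:
                                mask &= p[:, axis] < mid[axis]
                                chi[axis] = mid[axis]
                        if mask.any():
                            self.children.append(OctreeNode(clo, chi, indices[mask],
                                                            positions, leaf_size, depth + 1))

        def box_outside_plane(lo, hi, normal, d):
            # A box lies entirely outside a plane if its most-positive corner does.
            return np.dot(normal, np.where(normal >= 0, hi, lo)) + d < 0

        def visible_atoms(node, frustum):
            """Return indices of atoms whose octree cells intersect the view frustum."""
            for normal, d in frustum:
                if box_outside_plane(node.lo, node.hi, normal, d):
                    return np.empty(0, dtype=int)   # whole subtree culled
            if not node.children:
                return node.indices                 # conservative: keep the whole leaf
            return np.concatenate([visible_atoms(c, frustum) for c in node.children])

    The occlusion-culling and multiresolution stages would then operate only on the indices returned here, which is what keeps billion-atom datasets tractable.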

  13. Automated generation of a World Wide Web-based data entry and check program for medical applications.

    PubMed

    Kiuchi, T; Kaihara, S

    1997-02-01

    The World Wide Web-based form is a promising method for the construction of an on-line data collection system for clinical and epidemiological research. It is, however, laborious to prepare a common gateway interface (CGI) program for each project, which the World Wide Web server needs to handle the submitted data. In medicine, it is even more laborious because the CGI program must check deficits, type, ranges, and logical errors (bad combination of data) of entered data for quality assurance as well as data length and meta-characters of the entered data to enhance the security of the server. We have extended the specification of the hypertext markup language (HTML) form to accommodate information necessary for such data checking and we have developed software named AUTOFORM for this purpose. The software automatically analyzes the extended HTML form and generates the corresponding ordinary HTML form, 'Makefile', and C source of CGI programs. The resultant CGI program checks the entered data through the HTML form, records them in a computer, and returns them to the end-user. AUTOFORM drastically reduces the burden of development of the World Wide Web-based data entry system and allows the CGI programs to be more securely and reliably prepared than had they been written from scratch.
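
    The core idea, driving server-side data checking from a declarative field specification, can be sketched briefly in Python. This is a loose analogue, not AUTOFORM: the spec format, field names, and limits below are invented for illustration, and the real system generated C CGI programs from an extended HTML form rather than running Python.

        # Hypothetical field specification standing in for AUTOFORM's extended HTML form.
        FIELD_SPEC = {
            "age":    {"type": int,   "required": True,  "min": 0,   "max": 120},
            "weight": {"type": float, "required": True,  "min": 1.0, "max": 500.0},
            "smoker": {"type": str,   "required": False, "choices": {"yes", "no"}},
        }
        MAX_LENGTH = 64            # crude guard against oversized input
        META_CHARS = set(";|<>`")  # characters rejected to protect the server

        def check_submission(form):
            """Return a dict of error messages keyed by field name (empty if clean)."""
            errors = {}
            for name, spec in FIELD_SPEC.items():
                raw = form.get(name, "")
                if not raw:
                    if spec["required"]:
                        errors[name] = "missing value"
                    continue
                if len(raw) > MAX_LENGTH or META_CHARS & set(raw):
                    errors[name] = "illegal length or characters"
                    continue
                try:
                    value = spec["type"](raw)
                except ValueError:
                    errors[name] = "wrong type"
                    continue
                if "min" in spec and value < spec["min"]:
                    errors[name] = "below allowed range"
                elif "max" in spec and value > spec["max"]:
                    errors[name] = "above allowed range"
                elif "choices" in spec and value not in spec["choices"]:
                    errors[name] = "not an allowed choice"
            return errors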

  14. Effects of a Tablet-Based Home Practice Program With Telepractice on Treatment Outcomes in Chronic Aphasia.

    PubMed

    Kurland, Jacquie; Liu, Anna; Stokes, Polly

    2018-05-17

    The aim of this study was to determine if a tablet-based home practice program with weekly telepractice support could enable long-term maintenance of recent treatment gains and foster new language gains in poststroke aphasia. In a pre-post group study of home practice outcomes, 21 individuals with chronic aphasia were examined before and after a 6-month home practice phase and again at follow-up 4 months later. The main outcome measure studied was change in naming previously treated or untreated, practiced or unpracticed pictures of objects and actions. Individualized home practice programs were created in iBooks Author with semantic, phonemic, and orthographic cueing in pictures, words, and videos in order to facilitate naming of previously treated or untreated pictures. Home practice was effective for all participants with severity moderating treatment effects, such that individuals with the most severe aphasia made and maintained fewer gains. There was a negative relationship between the amount of training required for iPad proficiency and improvements on practiced and unpracticed pictures and a positive relationship between practice compliance and same improvements. Unsupervised home practice with weekly video teleconferencing support is effective. This study demonstrates that even individuals with chronic severe aphasia, including those with no prior smart device or even computer experience, can attain independent proficiency to continue practicing and improving their language skills beyond therapy discharge. This could represent a low-cost therapy option for individuals without insurance coverage and/or those for whom mobility is an obstacle to obtaining traditional aphasia therapy.

  15. Incremental Lexical Learning in Speech Production: A Computational Model and Empirical Evaluation

    ERIC Educational Resources Information Center

    Oppenheim, Gary Michael

    2011-01-01

    Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have…

  16. Kaleidoscope Name Design

    ERIC Educational Resources Information Center

    Laird, Shirley

    2011-01-01

    It's not that younger students can't master a project; it is that they have trouble coming up with a design for the task. What are they more familiar with than their name? The author thus decided to use names as part of a transfer lesson. She gave her students a piece of computer paper printed with a triangular shape that had a 45-degree angle.…

  17. French Interim MALE UAV Program

    DTIC Science & Technology

    2003-09-02

    [Fragmentary excerpt recovered from briefing slides: a June 13, 2002 presentation by Lcl Monsterleet (French Air Force Staff) and J. Caron (EADS S&DE-ISR) on the industrial status of the French interim MALE UAV program. The remainder of the snippet is report-documentation-page boilerplate.]

  18. Segmentation and Estimation of the Histological Composition of the Tumor Mass in Computed Tomographic Images of Neuroblastoma

    DTIC Science & Technology

    2001-10-25

    [Fragmentary excerpt: "...a CT image, each voxel contains an integer number which is the CT value, in Hounsfield units (HU), of the voxel. Therefore, the standard method of..." The work originates from a Department of Electrical and Computer Engineering; the remainder of the snippet is report-documentation-page boilerplate and reference fragments (Journal of Pediatric Surgery, 1989; Handbook of Medical Image Analysis, Academic Press).]

  19. ERDC MSRC Resource. High Performance Computing for the Warfighter. Spring 2006

    DTIC Science & Technology

    2006-01-01

    [Fragmentary excerpt: "...named Ruby, and the HP/Compaq SC45, named Emerald, continue to add their unique sparkle to the ERDC MSRC computer infrastructure. ERDC invited the..." "...purchased additional memory for the login nodes so that this part of the solution process could be done as a preprocessing step..." "...Of the service nodes, 10 are login nodes and 23 are input/output (I/O) server nodes for the Lustre file system..." A reference to a configuration on the B-52H appears mid-fragment.]

  20. Fast, Inclusive Searches for Geographic Names Using Digraphs

    USGS Publications Warehouse

    Donato, David I.

    2008-01-01

    An algorithm specifies how to quickly identify names that approximately match any specified name when searching a list or database of geographic names. Based on comparisons of the digraphs (ordered letter pairs) contained in geographic names, this algorithmic technique identifies approximately matching names by applying an artificial but useful measure of name similarity. A digraph index enables computer name searches that are carried out using this technique to be fast enough for deployment in a Web application. This technique, which is a member of the class of n-gram algorithms, is related to, but distinct from, the soundex, PHONIX, and metaphone phonetic algorithms. Despite this technique's tendency to return some counterintuitive approximate matches, it is an effective aid for fast, inclusive searches for geographic names when the exact name sought, or its correct spelling, is unknown.
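
    As an illustration of the digraph technique described above, here is a minimal Python sketch. It is not the USGS implementation: the Dice-style similarity used here stands in for whatever exact measure the report defines, and the index structure is simplified.

        # Illustrative digraph (letter-pair) name matching with an inverted digraph index.
        from collections import defaultdict

        def digraphs(name):
            s = "".join(ch for ch in name.lower() if ch.isalpha())
            return {s[i:i + 2] for i in range(len(s) - 1)}

        def similarity(a, b):
            da, db = digraphs(a), digraphs(b)
            return 2 * len(da & db) / (len(da) + len(db)) if da and db else 0.0

        def build_index(names):
            """Map each digraph to the set of names containing it (the 'digraph index')."""
            index = defaultdict(set)
            for name in names:
                for dg in digraphs(name):
                    index[dg].add(name)
            return index

        def search(query, index, threshold=0.5):
            # Only names sharing at least one digraph with the query are scored,
            # which is what makes the lookup fast enough for interactive use.
            candidates = set().union(*(index.get(dg, set()) for dg in digraphs(query)))
            return sorted((n for n in candidates if similarity(query, n) >= threshold),
                          key=lambda n: -similarity(query, n))

    With an index built over a gazetteer, a query such as search("Pittsburg", index) would surface "Pittsburgh" even though the exact spelling differs, which is the inclusive behavior the report describes.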

  1. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, Craig J.

    1990-01-01

    This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program to extend the present capabilities of this method was initiated for the treatment of chemically reacting flows. In the DNS efforts, the focus is on detailed investigations of the effects of compressibility, heat release, and non-equilibrium kinetics modelings in high speed reacting flows. Emphasis was on the simulations of simple flows, namely homogeneous compressible flows, and temporally developing high speed mixing layers.

  2. Text-based CAPTCHAs over the years

    NASA Astrophysics Data System (ADS)

    Chow, Y. W.; Susilo, W.

    2017-11-01

    The notion of CAPTCHAs has been around for more than two decades. Since their introduction, CAPTCHAs have become a ubiquitous part of the Internet. Over the years, research on various aspects of CAPTCHAs has evolved and different design principles have emerged. This article discusses text-based CAPTCHAs in terms of their fundamental requirements, namely security and usability. Practicality necessitates that humans must be able to correctly solve CAPTCHA challenges, while at the same time automated computer programs should have difficulty solving the challenges. This article also presents alternative paradigms to text-based CAPTCHA design that have been examined in previous work. With the advances in techniques to defeat CAPTCHAs, the future of automated Turing tests is an open question.

  3. 77 FR 65417 - Proposal Review Panel for Computing Communication Foundations; Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-26

    ...: To assess the progress of the EIC Award, ``Collaborative Research: Computational Behavioral Science... NATIONAL SCIENCE FOUNDATION Proposal Review Panel for Computing Communication Foundations; Notice... National Science Foundation announces the following meeting: Name: Site Visit, Proposal Panel Review for...

  4. 25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...

  5. 25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...

  6. 25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...

  7. Coast Guard Deepwater Acquisition Programs: Background, Oversight Issues, and Options for Congress

    DTIC Science & Technology

    2009-12-23

    [Fragmentary excerpt from a Congressional Research Service (Library of Congress) report: "...Northrop Grumman Ship Systems (NGSS). ICGS was awarded an indefinite delivery, indefinite quantity (ID/IQ) contract for the Deepwater program that..." "...sustainment is not a Deepwater program but is displayed to align with the FY2009 Consolidated Security, Disaster Assistance, and Continuing Appropriations..." The remainder of the snippet is report-documentation-page boilerplate.]

  8. A Primal DPG Method Without a First Order Reformulation

    DTIC Science & Technology

    2013-05-01

    [Fragmentary excerpt: a report from the University of Texas at Austin, Institute for Computational Engineering and Sciences (with J. Gopalakrishnan among the authors). The remainder of the snippet is report-documentation-page boilerplate plus residue of a convergence plot titled "Square domain: h and p convergence", showing relative error in the H1 norm against the number of degrees of freedom.]

  9. Analysis of a Probabilistic Model of Redundancy in Unsupervised Information Extraction

    DTIC Science & Technology

    2010-08-25

    [Fragmentary excerpt: work from the University of Washington, Department of Computer Science and Engineering, Box 352350, Seattle, WA 98195. One equation, the paper's approximation (2) for the probability that an extraction x belongs to the set C given that it appears k times in n draws, is recoverable from the garbled extraction: $P_{USC}(x \in C \mid x \text{ appears } k \text{ times in } n \text{ draws}) \approx \frac{1}{1 + \frac{|E|}{|C|}\left(\frac{p_E}{p_C}\right)^{k} e^{\,n(p_C - p_E)}}$. The fragment continues: "In general, we expect the extraction..." The remainder of the snippet is report-documentation-page boilerplate.]

  10. Cloud computing basics for librarians.

    PubMed

    Hoy, Matthew B

    2012-01-01

    "Cloud computing" is the name for the recent trend of moving software and computing resources to an online, shared-service model. This article briefly defines cloud computing, discusses different models, explores the advantages and disadvantages, and describes some of the ways cloud computing can be used in libraries. Examples of cloud services are included at the end of the article. Copyright © Taylor & Francis Group, LLC

  11. 77 FR 72335 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ... computer networks, systems, or databases. The records contain the individual's name; social security number... control and track access to DLA-controlled networks, computer systems, and databases. The records may also...

  12. What's in a Name? Changing Names and Challenges to Professional Identification

    ERIC Educational Resources Information Center

    Alber, Julia; Chaney, Don; O'Rourke, Thomas W.

    2013-01-01

    Name changes of university departments that have professional preparation health education programs have been ongoing and significant. This study analyzes changes in the names of health education degree-offering departments between 1974 and 2009. It also discusses the implications for the health education discipline going forward with respect to…

  13. EarthCube - Earth System Bridge: Spanning Scientific Communities with Interoperable Modeling Frameworks

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.; DeLuca, C.; Gochis, D. J.; Arrigo, J.; Kelbert, A.; Choi, E.; Dunlap, R.

    2014-12-01

    In order to better understand and predict environmental hazards of weather/climate, ecology and deep earth processes, geoscientists develop and use physics-based computational models. These models are used widely both in academic and federal communities. Because of the large effort required to develop and test models, there is widespread interest in component-based modeling, which promotes model reuse and simplified coupling to tackle problems that often cross discipline boundaries. In component-based modeling, the goal is to make relatively small changes to models that make it easy to reuse them as "plug-and-play" components. Sophisticated modeling frameworks exist to rapidly couple these components to create new composite models. They allow component models to exchange variables while accommodating different programming languages, computational grids, time-stepping schemes, variable names and units. Modeling frameworks have arisen in many modeling communities. CSDMS (Community Surface Dynamics Modeling System) serves the academic earth surface process dynamics community, while ESMF (Earth System Modeling Framework) serves many federal Earth system modeling projects. Others exist in both the academic and federal domains and each satisfies design criteria that are determined by the community they serve. While they may use different interface standards or semantic mediation strategies, they share fundamental similarities. The purpose of the Earth System Bridge project is to develop mechanisms for interoperability between modeling frameworks, such as the ability to share a model or service component. This project has three main goals: (1) Develop a Framework Description Language (ES-FDL) that allows modeling frameworks to be described in a standard way so that their differences and similarities can be assessed. (2) Demonstrate that if a model is augmented with a framework-agnostic Basic Model Interface (BMI), then simple, universal adapters can go from BMI to a modeling framework's native component interface. (3) Create semantic mappings between modeling frameworks that support semantic mediation. This third goal involves creating a crosswalk between the CF Standard Names and the CSDMS Standard Names (a set of naming conventions). This talk will summarize progress towards these goals.
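
    The adapter idea in goal (2) can be sketched in Python. This is a hedged illustration, not project code: the BMI method names (initialize/update/finalize/get_value) follow the published BMI convention, but the exact signatures should be checked against the BMI specification, and the framework-side methods (prepare/advance/retrieve/shutdown) are invented here purely for illustration.

        # Sketch of wrapping a BMI-style model for a hypothetical framework interface.
        class ToyBmiModel:
            """Minimal stand-in for a model exposing a Basic Model Interface."""
            def initialize(self, config_file):
                self.time, self.value = 0.0, 0.0
            def update(self):
                self.time += 1.0
                self.value += 0.5          # trivial "physics"
            def get_value(self, name):
                return self.value
            def finalize(self):
                pass

        class FrameworkAdapter:
            """Adapts a BMI-style model to a (hypothetical) framework component API."""
            def __init__(self, bmi_model):
                self._model = bmi_model
            def prepare(self, config_file):        # framework's init hook
                self._model.initialize(config_file)
            def advance(self):                     # framework's time-step hook
                self._model.update()
            def retrieve(self, standard_name):     # framework's data hook
                # A crosswalk between naming conventions (e.g., CF <-> CSDMS
                # Standard Names) would be applied here before delegating.
                return self._model.get_value(standard_name)
            def shutdown(self):
                self._model.finalize()

    The point of the design is that each framework needs only one such adapter, rather than one wrapper per model, once models expose a framework-agnostic interface.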

  14. 76 FR 43278 - Privacy Act; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-20

    ... computer (PC). The Security Management Officer's office remains locked when not in use. RETENTION AND... records to include names, addresses, social security numbers, service computation dates, leave usage data... that resides on a desktop computer. RETRIEVABILITY: Records maintained in file folders are indexed and...

  15. Effects of a Brief but Intensive Remedial Computer Intervention in a Sub-Sample of Kindergartners with Early Literacy Delays

    ERIC Educational Resources Information Center

    Van der Kooy-Hofland, Verna A. C.; Bus, Adriana G.; Roskos, Kathleen

    2012-01-01

    Living Letters is an adaptive game designed to promote children's combining of how the proper name sounds with their knowledge of how the name looks. A randomized controlled trial (RCT) was used to experimentally test whether priming for attending to the sound-symbol relationship in the proper name can reduce the risk for developing reading…

  16. DIRAC: A new version of computer algebra tools for studying the properties and behavior of hydrogen-like ions

    NASA Astrophysics Data System (ADS)

    McConnell, Sean; Fritzsche, Stephan; Surzhykov, Andrey

    2010-03-01

    During recent years, the DIRAC package has proved to be an efficient tool for studying the structural properties and dynamic behavior of hydrogen-like ions. Originally designed as a set of MAPLE procedures, this package provides interactive access to the wave and Green's functions in the non-relativistic and relativistic frameworks and supports analytical evaluation of a large number of radial integrals that are required for the construction of transition amplitudes and interaction cross sections. We provide here a new version of the DIRAC program which is developed within the framework of MATHEMATICA (version 6.0). This new version aims to cater to a wider community of researchers that use the MATHEMATICA platform and to take advantage of the generally faster processing times therein. Moreover, the addition of new procedures, a more convenient and detailed help system, as well as source code revisions to overcome identified shortcomings should ensure expanded use of the new DIRAC program over its predecessor. New version program summaryProgram title: DIRAC Catalogue identifier: ADUQ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADUQ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 45 073 No. of bytes in distributed program, including test data, etc.: 285 828 Distribution format: tar.gz Programming language: Mathematica 6.0 or higher Computer: All computers with a license for the computer algebra package Mathematica (version 6.0 or higher) Operating system: Mathematica is O/S independent Classification: 2.1 Catalogue identifier of previous version: ADUQ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 165 (2005) 139 Does the new version supersede the previous version?: Yes Nature of problem: Since the early days of quantum mechanics, the "hydrogen atom" has served as one of the key models for studying the structure and dynamics of various quantum systems. Its analytic solutions are frequently used in case studies in atomic and molecular physics, quantum optics, plasma physics, or even in the field of quantum information and computation. Fast and reliable access to functions and properties of the hydrogenic systems are frequently required, in both the non-relativistic and relativistic frameworks. Despite all the knowledge about one-electron ions, providing such an access is not a simple task, owing to the rather complicated mathematical structure of the Schrödinger and especially Dirac equations. Moreover, for analyzing experimental results as well as for performing advanced theoretical studies one often needs (apart from the detailed information on atomic wave- and Green's functions) to be able to calculate a number of integrals involving these functions. Although for many types of transition operators these integrals can be evaluated analytically in terms of special mathematical functions, such an evaluation is usually rather involved and prone to mistakes. Solution method: A set of Mathematica procedures is developed which provides both the non-relativistic and relativistic solutions of the "Hydrogen atom model". It facilitates, moreover, the symbolic evaluation of integrals involved in the calculations of cross sections and transition amplitudes. 
These procedures are based on a large number of relations among special mathematical functions, information about their integral representations, recurrence formulae and series expansions. Based on this knowledge, the DIRAC tools provide a fast and reliable algebraic (and if necessary, numeric) manipulation of functions and properties of one-electron systems, thus helping to obtain further insight into the behavior of quantum physical systems. Reasons for new version: The original version of the DIRAC program was developed as a toolbox of Maple procedures and was submitted to the CPC library in 2004 (cf. Ref. [1]). Since then DIRAC has found its niche in advanced theoretical studies carried out in realm of heavy ion physics. With the help of this program detailed analysis has been performed, in particular, for the various excitation and ionization processes occurring in relativistic ion-atom collisions [2], the polarization of the characteristic X-ray radiation following radiative electron capture [3], the correlation properties of the two-photon emission from few-electron heavy ions [4], the spin entanglement phenomena in atomic photoionization [5] and even for exploring the vibrational excitations of the heavy nuclei [6]. Although these studies have conclusively proven the potential of the program, they have also illuminated routes for its further enhancement. Apart from certain source code revisions, demand has grown for a new version of DIRAC compatible with the Mathematica platform. The version presented here includes a wider ranging and more user friendly interactive help system, a number of new procedures and reprogramming for greater computational efficiency. Summary of revisions: The most important new capabilities of the DIRAC program since the previous version are: The utilization of the Mathematica (version 6.0) platform. The addition of a number of new procedures. Since the complete list of the new (and updated) procedures can be found in the interactive help library of the program, we mention here only the most important ones: DiracGlobal[] - Displays a list of the current global settings which specify the framework, nuclear charge and the units which are to be used by the DIRAC program. DiracRadialOrbitalMomentum[] - Returns a non-relativistic radial orbital in momentum space for both, the bound and free electron states. DiracSlaterRadial[] - Evaluates the radial Slater integral both, with the non-relativistic and relativistic wavefunctions. In the previous version of the program this procedure was restricted to the non-relativistic framework only. DiracGreensIntegralRadial[] - Evaluates the two-dimensional radial integrals with the wave- and Green's functions both in non-relativistic and relativistic frameworks. DiracAngularMatrixElement[] - Calculates the angular matrix elements for various irreducible tensor operators. The elimination of some redundant procedures. In particular, the previous version supported evaluation of the spherical Bessel functions, Wigner 3j symbols, Clebsch-Gordan coefficients and spherical harmonics functions. These tools are now superseded by in-built procedures of Mathematica. The development of a full featured interactive help system which follows the style of the Mathematica Help Pages. Extensive revision of the source code in order to correct a number of bugs and inconsistencies that have been identified during use of the previous version of Dirac. 
The DIRAC package is distributed as a compressed tar file from which the DIRAC root directory can be (re-)generated. The root directory contains the source code and help libraries, a "Readme" file, Dirac_Installation_Instructions, as well as the notebook DemonstrationNotebook.nb that includes a number of test cases to illustrate the use of the program. These test cases, which concern the theoretical analysis of wavefunctions and the fine-structure of hydrogen-like ions, has already been discussed in detail in Ref. [1] and are provided here in order to underline the continuity between the previous (Maple) and new (Mathematica) versions of the DIRAC program. Unusual features: Even though all basic features of the previous Maple version have been retained in as close to the original form as possible, some small syntax changes became necessary in the new version of DIRAC in order to follow Mathematica standards. First of all, these changes concern naming conventions for DIRAC's procedures. As was discussed in Ref. [1], previously rather long names were employed in which each word was separated by an underscore. For example, when running the Maple version of the program one had to call the procedure Dirac_Slater_radial() in order to evaluate the Slater integral. Such a naming convention however, cannot be used in the Mathematica framework where the underscore character is reserved to represent Blank, a built-in symbol. In the new version of DIRAC we therefore follow the Mathematica convention of delimiting each word in a procedure's name by capitalization. Evaluation of the Slater determinant can be accomplished now simply by entering DiracSlaterRadial[]. Besides procedure names, a new convention is introduced to represent fundamental physical constants. In this version of DIRAC the group of (preset) global variables has changed to resemble their conventional symbols, specifically α, a, e, m, c and ℏ, being the fine structure constant, Bohr radius, electron charge, electron mass, speed of light and the Planck constant respectively. If the numerical evaluator N is wrapped around any of these constants, their numerical values are returned. Running time: Although the program replies promptly upon most requests, the running time also depends on the particular task. For example, computation of (radial) matrix elements involving components of relativistic wavefunctions might require a few seconds of a runtime. A number of test calculations performed regarding this and other tasks clearly indicate that the new version of Dirac requires up to 90% less evaluation time compared to its predecessor. References:A. Surzhykov, P. Koval, S. Fritzsche, Comput. Phys. Comm. 165 (2005) 139. H. Ogawa, et al., Phys. Rev. A 75 (2007) 1. A.V. Maiorova, et al., J. Phys. B: At. Mol. Opt. Phys. 42 (2009) 125003. L. Borowska, A. Surzhykov, Th. Stöhlker, S. Fritzsche, Phys. Rev. A 74 (2006) 062516. T. Radtke, S. Fritzsche, A. Surzhykov, Phys. Rev. A 74 (2006) 032709. A. Pálffy, Z. Harman, A. Surzhykov, U.D. Jentschura, Phys. Rev. A 75 (2007) 012712.

  17. Vibration and stress analysis of soft-bonded shuttle insulation tiles. Modal analysis with compact widely space stringers

    NASA Technical Reports Server (NTRS)

    Ojalvo, I. U.; Austin, F.; Levy, A.

    1974-01-01

    An efficient iterative procedure is described for the vibration and modal stress analysis of reusable surface insulation (RSI) of multi-tiled space shuttle panels. The method, which is quite general, is rapidly convergent and highly useful for this application. A user-oriented computer program based upon this procedure and titled RESIST (REusable Surface Insulation Stresses) has been prepared for the analysis of compact, widely spaced, stringer-stiffened panels. RESIST, which uses finite element methods, obtains three dimensional tile stresses in the isolator, arrestor (if any) and RSI materials. Two dimensional stresses are obtained in the tile coating and the stringer-stiffened primary structure plate. A special feature of the program is that all the usual detailed finite element grid data is generated internally from a minimum of input data. The program can accommodate tile idealizations with up to 850 nodes (2550 degrees-of-freedom) and primary structure idealizations with a maximum of 10,000 degrees-of-freedom. The primary structure vibration capability is achieved through the development of a new rapid eigenvalue program named ALARM (Automatic LArge Reduction of Matrices to tridiagonal form).

  18. Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Tangpatiphan, Kritsana; Yokoyama, Akihiko

    This paper presents an Improved Evolutionary Programming (IEP) method for solving the Optimal Power Flow (OPF) problem, a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is the objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm that makes use of various crossover techniques normally applied in Real-Coded Genetic Algorithms (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost function, namely a quadratic cost curve, a piecewise quadratic cost curve, and a quadratic cost curve superimposed with a sine component. These three cost curves represent, respectively, a simplified generator fuel cost model and the more accurate models of a combined-cycle generating unit and of a thermal unit with valve-point loading effect. The OPF solutions obtained by the proposed method and by Pure Evolutionary Programming (PEP) are compared. The simulation results indicate that IEP requires less computing time than PEP and yields better solutions in some cases. Moreover, the influences of important IEP parameters on the OPF solution are described in detail.
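
    A minimal sketch of the general idea, EP-style Gaussian mutation combined with an RCGA-style crossover, is given below. It is not the paper's algorithm: the toy cost function merely mimics a quadratic curve with a sine (valve-point-like) ripple, a BLX-alpha crossover is used as a representative RCGA operator, and power-balance and network constraints of a real OPF are omitted.

        # Hedged sketch of EP with Gaussian mutation plus BLX-alpha crossover on a toy cost.
        import numpy as np

        rng = np.random.default_rng(0)

        def toy_cost(p):
            # Quadratic fuel cost with a sine ripple standing in for valve-point effects.
            return np.sum(0.01 * p**2 + 2.0 * p + 5.0 * np.abs(np.sin(0.5 * p)))

        def blx_crossover(x, y, alpha=0.3):
            lo, hi = np.minimum(x, y), np.maximum(x, y)
            span = hi - lo
            return rng.uniform(lo - alpha * span, hi + alpha * span)

        def improved_ep(dim=6, pop_size=30, bounds=(10.0, 100.0), generations=200):
            pop = rng.uniform(*bounds, size=(pop_size, dim))
            for _ in range(generations):
                mutants = np.clip(pop + rng.normal(0.0, 2.0, pop.shape), *bounds)
                pairs = rng.permutation(pop_size)
                children = np.array([blx_crossover(pop[i], pop[j])
                                     for i, j in zip(pairs[::2], pairs[1::2])])
                children = np.clip(children, *bounds)
                union = np.vstack([pop, mutants, children])
                costs = np.array([toy_cost(ind) for ind in union])
                pop = union[np.argsort(costs)[:pop_size]]      # elitist selection
            best = min(pop, key=toy_cost)
            return best, toy_cost(best)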

  19. PyFly: A fast, portable aerodynamics simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Daniel; Ghommem, M.; Collier, Nathaniel O.

    Here, we present a fast, user-friendly implementation of a potential flow solver based on the unsteady vortex lattice method (UVLM), namely PyFly. UVLM computes the aerodynamic loads applied on lifting surfaces while capturing the unsteady effects such as the added mass forces, the growth of bound circulation, and the wake while assuming that the flow separation location is known a priori. This method is based on discretizing the body surface into a lattice of vortex rings and relies on the Biot–Savart law to construct the velocity field at every point in the simulated domain. We introduce the pointwise approximation approach to simulate the interactions of the far-field vortices to overcome the computational burden associated with the classical implementation of UVLM. The computational framework uses the Python programming language to provide an easy to handle user interface while the computational kernels are written in Fortran. The mixed language approach enables high performance regarding solution time and great flexibility concerning easiness of code adaptation to different system configurations and applications. The computational tool predicts the unsteady aerodynamic behavior of multiple moving bodies (e.g., flapping wings, rotating blades, suspension bridges) subject to incoming air. The aerodynamic simulator can also deal with enclosure effects, multi-body interactions, and B-spline representation of body shapes. Finally, we simulate different aerodynamic problems to illustrate the usefulness and effectiveness of PyFly.
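
    The Biot–Savart building block of such a vortex-ring lattice can be sketched as follows. This is not PyFly's Fortran kernel; it is a NumPy illustration of the standard induced-velocity formula for a straight vortex segment (per Katz & Plotkin), with a small numerical cutoff assumed to avoid the singularity on the segment axis.

        # Hedged sketch: induced velocity of a straight vortex segment and of one ring.
        import numpy as np

        def segment_induced_velocity(p, a, b, gamma, cutoff=1e-10):
            """Velocity induced at point p by a segment from a to b with circulation gamma."""
            r1, r2 = p - a, p - b
            r0 = b - a
            cross = np.cross(r1, r2)
            cross_sq = np.dot(cross, cross)
            n1, n2 = np.linalg.norm(r1), np.linalg.norm(r2)
            if cross_sq < cutoff or n1 < cutoff or n2 < cutoff:
                return np.zeros(3)              # point lies (numerically) on the segment
            k = gamma / (4.0 * np.pi * cross_sq) * np.dot(r0, r1 / n1 - r2 / n2)
            return k * cross

        def ring_induced_velocity(p, corners, gamma):
            """Sum the four segment contributions of one quadrilateral vortex ring."""
            v = np.zeros(3)
            for i in range(4):
                v += segment_induced_velocity(p, corners[i], corners[(i + 1) % 4], gamma)
            return v

    Summing ring_induced_velocity over all bound and wake rings gives the velocity field UVLM needs; the pointwise far-field approximation mentioned above replaces that full summation for distant vortices.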

  20. PyFly: A fast, portable aerodynamics simulator

    DOE PAGES

    Garcia, Daniel; Ghommem, M.; Collier, Nathaniel O.; ...

    2018-03-14

    Here, we present a fast, user-friendly implementation of a potential flow solver based on the unsteady vortex lattice method (UVLM), namely PyFly. UVLM computes the aerodynamic loads applied on lifting surfaces while capturing the unsteady effects such as the added mass forces, the growth of bound circulation, and the wake while assuming that the flow separation location is known a priori. This method is based on discretizing the body surface into a lattice of vortex rings and relies on the Biot–Savart law to construct the velocity field at every point in the simulated domain. We introduce the pointwise approximation approach to simulate the interactions of the far-field vortices to overcome the computational burden associated with the classical implementation of UVLM. The computational framework uses the Python programming language to provide an easy to handle user interface while the computational kernels are written in Fortran. The mixed language approach enables high performance regarding solution time and great flexibility concerning easiness of code adaptation to different system configurations and applications. The computational tool predicts the unsteady aerodynamic behavior of multiple moving bodies (e.g., flapping wings, rotating blades, suspension bridges) subject to incoming air. The aerodynamic simulator can also deal with enclosure effects, multi-body interactions, and B-spline representation of body shapes. Finally, we simulate different aerodynamic problems to illustrate the usefulness and effectiveness of PyFly.

  1. A spline-based approach for computing spatial impulse responses.

    PubMed

    Ellis, Michael A; Guenther, Drake; Walker, William F

    2007-05-01

    Computer simulations are an essential tool for the design of phased-array ultrasonic imaging systems. FIELD II, which determines the two-way temporal response of a transducer at a point in space, is the current de facto standard for ultrasound simulation tools. However, the need often arises to obtain two-way spatial responses at a single point in time, a set of dimensions for which FIELD II is not well optimized. This paper describes an analytical approach for computing the two-way, far-field, spatial impulse response from rectangular transducer elements under arbitrary excitation. The described approach determines the response as the sum of polynomial functions, making computational implementation quite straightforward. The proposed algorithm, named DELFI, was implemented as a C routine under Matlab and results were compared to those obtained under similar conditions from the well-established FIELD II program. Under the specific conditions tested here, the proposed algorithm was approximately 142 times faster than FIELD II for computing spatial sensitivity functions with similar amounts of error. For temporal sensitivity functions with similar amounts of error, the proposed algorithm was about 1.7 times slower than FIELD II using rectangular elements and 19.2 times faster than FIELD II using triangular elements. DELFI is shown to be an attractive complement to FIELD II, especially when spatial responses are needed at a specific point in time.

  2. Fast parallel tandem mass spectral library searching using GPU hardware acceleration

    PubMed Central

    Baumgardner, Lydia Ashleigh; Shanmugam, Avinash Kumar; Lam, Henry; Eng, Jimmy K.; Martin, Daniel B.

    2011-01-01

    Mass spectrometry-based proteomics is a maturing discipline of biologic research that is experiencing substantial growth. Instrumentation has steadily improved over time with the advent of faster and more sensitive instruments collecting ever larger data files. Consequently, the computational process of matching a peptide fragmentation pattern to its sequence, traditionally accomplished by sequence database searching and more recently also by spectral library searching, has become a bottleneck in many mass spectrometry experiments. In both of these methods, the main rate limiting step is the comparison of an acquired spectrum with all potential matches from a spectral library or sequence database. This is a highly parallelizable process because the core computational element can be represented as a simple but arithmetically intense multiplication of two vectors. In this paper we present a proof of concept project taking advantage of the massively parallel computing available on graphics processing units (GPUs) to distribute and accelerate the process of spectral assignment using spectral library searching. This program, which we have named FastPaSS (for Fast Parallelized Spectral Searching) is implemented in CUDA (Compute Unified Device Architecture) from NVIDIA which allows direct access to the processors in an NVIDIA GPU. Our efforts demonstrate the feasibility of GPU computing for spectral assignment, through implementation of the validated spectral searching algorithm SpectraST in the CUDA environment. PMID:21545112
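
    The core operation, scoring a query spectrum against every library entry with a dot product, can be illustrated with a CPU/NumPy analogue. This is not the FastPaSS CUDA code or the SpectraST scoring function: the bin width, m/z range, and normalization below are assumptions chosen only to show why the arithmetic parallelizes so well.

        # CPU analogue of GPU spectral library scoring: bin spectra, then one matrix-vector product.
        import numpy as np

        MZ_MAX, BIN_WIDTH = 2000.0, 1.0
        N_BINS = int(MZ_MAX / BIN_WIDTH)

        def binned(peaks):
            """peaks: iterable of (mz, intensity) pairs -> unit-norm binned vector."""
            v = np.zeros(N_BINS)
            for mz, inten in peaks:
                if 0.0 <= mz < MZ_MAX:
                    v[int(mz / BIN_WIDTH)] += inten
            norm = np.linalg.norm(v)
            return v / norm if norm > 0 else v

        def search_library(query_peaks, library_matrix, top_n=5):
            """library_matrix: (n_spectra, N_BINS) array of pre-binned library spectra."""
            scores = library_matrix @ binned(query_peaks)    # one dot product per entry
            best = np.argsort(scores)[::-1][:top_n]
            return list(zip(best.tolist(), scores[best].tolist()))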

  3. A multi-objective optimization model for hub network design under uncertainty: An inexact rough-interval fuzzy approach

    NASA Astrophysics Data System (ADS)

    Niakan, F.; Vahdani, B.; Mohammadi, M.

    2015-12-01

    This article proposes a multi-objective mixed-integer model to optimize the location of hubs within a hub network design problem under uncertainty. The objectives considered are minimizing the maximum accumulated travel time, minimizing the total costs (including transportation, fuel consumption, and greenhouse emission costs), and maximizing the minimum service reliability. In the proposed model, it is assumed that two nodes can be connected by several types of arcs, which differ in capacity, transportation mode, travel time, and transportation and construction costs. Moreover, determining the capacity of the hubs is part of the decision-making procedure, and balancing requirements are imposed on the network. To solve the model, a hybrid solution approach based on inexact programming, interval-valued fuzzy programming, and rough interval programming is utilized. Furthermore, a hybrid multi-objective metaheuristic algorithm, namely multi-objective invasive weed optimization (MOIWO), is developed for the given problem. Finally, various computational experiments are carried out to assess the proposed model and solution approaches.

  4. HDF-EOS 5 Validator

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A computer program partly automates the task of determining whether an HDF-EOS 5 file is valid in that it conforms to specifications for such characteristics as attribute names, dimensionality of data products, and ranges of legal data values. ["HDF-EOS" and variants thereof are defined in "Converting EOS Data From HDF-EOS to netCDF" (GSC-15007-1), which is the first of several preceding articles in this issue of NASA Tech Briefs.] Previously, validity of a file was determined in a tedious and error-prone process in which a person examined human-readable dumps of data-file-format information. The present software helps a user to encode the specifications for an HDFEOS 5 file, and then inspects the file for conformity with the specifications: First, the user writes the specifications in Extensible Markup Language (XML) by use of a document type definition (DTD) that is part of the program. Next, the portion of the program (denoted the validator) that performs the inspection is executed, using, as inputs, the specifications in XML and the HDF-EOS 5 file to be validated. Finally, the user examines the output of the validator.
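
    The validation pattern described above, a specification written in XML that drives automated checks of a file's attributes and value ranges, can be sketched in Python. This is not NASA's validator or its DTD: the spec schema, attribute names, and the use of a plain metadata dictionary in place of a real HDF-EOS 5 file are all invented for illustration.

        # Hedged sketch of XML-spec-driven file validation (hypothetical schema).
        import xml.etree.ElementTree as ET

        SPEC_XML = """
        <spec>
          <attribute name="InstrumentName" required="true"/>
          <dataset name="Radiance" dims="2" min="0.0" max="500.0"/>
        </spec>
        """

        def validate(file_meta, spec_xml=SPEC_XML):
            """Return a list of human-readable problems found in `file_meta`."""
            problems, spec = [], ET.fromstring(spec_xml)
            for attr in spec.findall("attribute"):
                name = attr.get("name")
                if attr.get("required") == "true" and name not in file_meta.get("attributes", {}):
                    problems.append(f"missing required attribute {name!r}")
            for ds in spec.findall("dataset"):
                name = ds.get("name")
                meta = file_meta.get("datasets", {}).get(name)
                if meta is None:
                    problems.append(f"missing dataset {name!r}")
                    continue
                if len(meta["shape"]) != int(ds.get("dims")):
                    problems.append(f"dataset {name!r} has wrong dimensionality")
                if not (float(ds.get("min")) <= meta["min"] and meta["max"] <= float(ds.get("max"))):
                    problems.append(f"dataset {name!r} has values outside the legal range")
            return problems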

  5. iPractice: piloting the effectiveness of a tablet-based home practice program in aphasia treatment.

    PubMed

    Kurland, Jacquie; Wilkins, Abigail R; Stokes, Polly

    2014-02-01

    The current study investigated the effectiveness of a home practice program based on the iPad (Apple Inc., Cupertino, CA), implemented after 2 weeks of intensive language therapy, for maintaining and augmenting treatment gains in people with chronic poststroke aphasia. Five of eight original participants completed the 6-month home practice program in which they autonomously practiced retrieving words for objects and actions. Half of these words had been trained and half were untrained during therapy. Practice included tasks such as naming to confrontation, repeating from a video model, and picture/word matching presented on an iPad. All participants maintained advances made on words trained during the intensive treatment and additionally were able to learn new words by practicing daily over a 6-month period. The iPad and other tablet devices have great potential for personalized home practice to maintain and augment traditional aphasia rehabilitation. It appears that motivation to use the technology and adequate training are more important factors than age, aphasia type or severity, or prior experience with computers.

  6. Constructed-response matching to sample and spelling instruction.

    PubMed Central

    Dube, W V; McDonald, S J; McIlvane, W J; Mackay, H A

    1991-01-01

    The development of interactive programmed instruction using a microcomputer as a teaching machine is described. The program applied a constructed-response matching-to-sample procedure to computer-assisted spelling instruction and review. On each trial, subjects were presented with a sample stimulus and a choice pool consisting of 10 individual letters. In initial training, sample stimuli were arrays of letters, and subjects were taught to construct identical arrays by touching the matching letters in the choice pool. After generalized constructed-response identity matching was established, pictures (line drawings) of common objects were presented as samples. At first, correct spelling was prompted by also presenting the printed name to be "copied" via identity matching; then the prompts were faded out. The program was implemented with 2 mentally retarded individuals. Assessment trials determined appropriate words for training. Correct spelling was established via the prompt-fading procedure; training trials were interspersed among baseline trials that reviewed and maintained spelling of previously learned words. As new words were learned, they were added to a cumulative baseline to generate an individualized review and practice battery for each subject. PMID:1890049

  7. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality delivered by two popular JPEG2000 programs. Both medical image compression implementations are based on JPEG2000, but they differ in interface, convenience, speed of computation, and the options they expose for the encoder, quantization, tiling, and so on. Differences in image quality and compression ratio are also affected by the imaging modality and by how the compression algorithm is implemented. Do the two programs provide the same quality? The quality of compressed medical images from the two programs, named Apollo and JJ2000, was evaluated extensively using objective metrics. The algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1, and the quality of the reconstructed images was then evaluated using five objective metrics. The Spearman rank correlation coefficient between the two programs was measured under every metric. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated with the five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.
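
    The kind of comparison described above can be illustrated with a short Python sketch. This is not the authors' pipeline: the five metrics are not named in the abstract, so PSNR stands in here as one representative objective metric, and scipy.stats.spearmanr is used for the rank correlation between the two codecs' per-image scores.

        # Illustration only: PSNR as a stand-in objective metric plus Spearman correlation.
        import numpy as np
        from scipy.stats import spearmanr

        def psnr(original, reconstructed, max_value=255.0):
            mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(max_value**2 / mse)

        def compare_codecs(originals, codec_a_outputs, codec_b_outputs):
            """Return per-image PSNRs for both codecs and their Spearman correlation."""
            a = [psnr(o, r) for o, r in zip(originals, codec_a_outputs)]
            b = [psnr(o, r) for o, r in zip(originals, codec_b_outputs)]
            rho, p_value = spearmanr(a, b)
            return a, b, rho, p_value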

  8. Promoting Probabilistic Programming System (PPS) Development in Probabilistic Programming for Advancing Machine Learning (PPAML)

    DTIC Science & Technology

    2018-03-01

    [Fragmentary excerpt: a March 2018 final technical report from the Air Force Research Laboratory Information Directorate (AFRL/RITA, 525 Brooks Road, Rome) and DARPA (675 North Randolph Street), approved for public release with unlimited distribution. The remainder of the snippet is report-documentation-page boilerplate and a section heading ("3.0 Methods, Assumptions, and Procedures").]

  9. Navy CG(X) Cruiser Program: Background for Congress

    DTIC Science & Technology

    2010-04-08

    [Fragmentary excerpt: "...Inside the Navy, October 27, 2008. Another press report (Katherine McIntire Peters, 'Navy's Top Officer Sees Lessons in Shipbuilding Program Failures'...)..." "...have changed. A related question was whether the schedule for procuring CG(X)s was properly aligned with foreign-country ballistic missile..." The remainder of the snippet is report-documentation-page boilerplate.]

  10. Navy CG(X) Cruiser Program: Background, Oversight Issues, and Options for Congress

    DTIC Science & Technology

    2008-11-18

    [Fragmentary excerpt: "...'Lessons in Shipbuilding Program Failures,' GovernmentExecutive.com, September 24, 2008) quoted Admiral Gary Roughead, the Chief of Naval Operations, as..." "...the schedule for procuring CG(X)s is properly aligned with foreign-country ballistic missile development programs. A 2005 defense trade press report..." The remainder of the snippet is report-documentation-page boilerplate.]

  11. A self-teaching image processing and voice-recognition-based, intelligent and interactive system to educate visually impaired children

    NASA Astrophysics Data System (ADS)

    Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad

    2010-02-01

    A self-teaching system based on image processing and voice recognition is developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker, and a microphone. The camera, attached to the computer, is mounted on the ceiling opposite (at the required angle to) the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators, and shapes, are stored in a database. A blind child first reads an embossed character (object) with the fingers and then speaks the answer, the name of the character or shape, etc., into the microphone. When the child's voice command is received by the microphone, an image is taken by the camera and processed by a MATLAB® program developed with the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions for the child via the ear speaker, resulting in self-education of a visually impaired child. A speech recognition program is also developed in MATLAB® with the Data Acquisition and Signal Processing toolboxes to record and process the child's commands.

  12. Displaying CFD Solution Parameters on Arbitrary Cut Planes

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul

    2008-01-01

    USMC6 is a Fortran 90 computer program for post-processing in support of visualization of flows simulated by computational fluid dynamics (CFD). The name "USMC6" is partly an abbreviation of "TetrUSS - USM3D Solution Cutter," reflecting its origin as a post-processor for use with USM3D - a CFD program that is a component of the Tetrahedral Unstructured Software System and that solves the Navier-Stokes equations on tetrahedral unstructured grids. "Cutter" here refers to a capability to acquire and process solution data on (1) arbitrary planes that cut through grid volumes, or (2) user-selected spheroidal, conical, cylindrical, and/or prismatic domains cut from within grids. Cutting saves time by enabling concentration of post-processing and visualization efforts on smaller solution domains of interest. The user can select from among more than 40 flow functions. The cut planes can be trimmed to circular or rectangular shape. The user specifies cuts and functions in a free-format input file using simple and easy-to-remember keywords. The USMC6 command line is simple enough that the slicing process can readily be embedded in a shell script for assembly-line post-processing. The output of USMC6 is a data file ready for plotting.
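
    The geometric test behind an arbitrary cut plane can be sketched generically as follows; this Python fragment only illustrates the idea and is not USMC6's Fortran 90 implementation or the USM3D grid format (the two-cell grid is made up):

```python
# Sketch: find tetrahedra of an unstructured grid that are cut by a plane
# defined by a point p0 and normal n (a cell is cut when its vertices straddle
# the plane). Generic illustration only.
import numpy as np

def tets_cut_by_plane(nodes, tets, p0, n):
    n = np.asarray(n, float)
    d = (np.asarray(nodes, float) - p0) @ (n / np.linalg.norm(n))  # signed distances
    dt = d[tets]                                                   # shape (ntet, 4)
    return np.where((dt.min(axis=1) < 0.0) & (dt.max(axis=1) > 0.0))[0]

# Tiny made-up grid: two tetrahedra sharing a face.
nodes = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
tets = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])
print(tets_cut_by_plane(nodes, tets, p0=[0.25, 0.25, 0.25], n=[0, 0, 1]))  # -> [0 1]
```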

  13. Computer-Based Training in Math and Working Memory Improves Cognitive Skills and Academic Achievement in Primary School Children: Behavioral Results

    PubMed Central

    Sánchez-Pérez, Noelia; Castillo, Alejandro; López-López, José A.; Pina, Violeta; Puga, Jorge L.; Campoy, Guillermo; González-Salinas, Carmen; Fuentes, Luis J.

    2018-01-01

    Student academic achievement has been positively related to further development outcomes, such as the attainment of higher educational, employment, and socioeconomic aspirations. Among all the academic competences, mathematics has been identified as an essential skill in the field of international leadership as well as for those seeking positions in disciplines related to science, technology, and engineering. Given its positive consequences, studies have designed trainings to enhance children's mathematical skills. Additionally, the ability to regulate and control actions and cognitions, i.e., executive functions (EF), has been associated with school success, which has resulted in a strong effort to develop EF training programs to improve students' EF and academic achievement. The present study examined the efficacy of a school computer-based training composed of two components, namely, working memory and mathematics tasks. Among the advantages of using a computer-based training program is the ease with which it can be implemented in school settings and the ease by which the difficulty of the tasks can be adapted to fit the child's ability level. To test the effects of the training, children's cognitive skills (EF and IQ) and their school achievement (math and language grades and abilities) were evaluated. The results revealed a significant improvement in cognitive skills, such as non-verbal IQ and inhibition, and better school performance in math and reading among the children who participated in the training compared to those children who did not. Most of the improvements were related to training on WM tasks. These findings confirmed the efficacy of a computer-based training that combined WM and mathematics activities as part of the school routines based on the training's impact on children's academic competences and cognitive skills. PMID:29375442

  14. Computer-Based Training in Math and Working Memory Improves Cognitive Skills and Academic Achievement in Primary School Children: Behavioral Results.

    PubMed

    Sánchez-Pérez, Noelia; Castillo, Alejandro; López-López, José A; Pina, Violeta; Puga, Jorge L; Campoy, Guillermo; González-Salinas, Carmen; Fuentes, Luis J

    2017-01-01

    Student academic achievement has been positively related to further development outcomes, such as the attainment of higher educational, employment, and socioeconomic aspirations. Among all the academic competences, mathematics has been identified as an essential skill in the field of international leadership as well as for those seeking positions in disciplines related to science, technology, and engineering. Given its positive consequences, studies have designed trainings to enhance children's mathematical skills. Additionally, the ability to regulate and control actions and cognitions, i.e., executive functions (EF), has been associated with school success, which has resulted in a strong effort to develop EF training programs to improve students' EF and academic achievement. The present study examined the efficacy of a school computer-based training composed of two components, namely, working memory and mathematics tasks. Among the advantages of using a computer-based training program is the ease with which it can be implemented in school settings and the ease by which the difficulty of the tasks can be adapted to fit the child's ability level. To test the effects of the training, children's cognitive skills (EF and IQ) and their school achievement (math and language grades and abilities) were evaluated. The results revealed a significant improvement in cognitive skills, such as non-verbal IQ and inhibition, and better school performance in math and reading among the children who participated in the training compared to those children who did not. Most of the improvements were related to training on WM tasks. These findings confirmed the efficacy of a computer-based training that combined WM and mathematics activities as part of the school routines based on the training's impact on children's academic competences and cognitive skills.

  15. Computational approaches to screen candidate ligands with anti- Parkinson's activity using R programming.

    PubMed

    Jayadeepa, R M; Niveditha, M S

    2012-01-01

    It is estimated that by 2050 over 100 million people will be affected by Parkinson's disease (PD). We propose several computational approaches to screen phytochemicals for suitable candidate ligands with anti-Parkinson's activity. Five types of dopamine receptors, D1-D5, have been identified in the brain; dopamine receptor D3 was selected as the target receptor. The D3 receptor exists in areas of the brain outside the basal ganglia, such as the limbic system, and thus may play a role in the cognitive and emotional changes noted in Parkinson's disease. A ligand library of 100 molecules with reported anti-Parkinson's activity was collected through a literature survey. Nature is the best combinatorial chemist and possibly has answers to all diseases of mankind, and the failure of some synthetic drugs and their side effects have prompted many researchers to return to ancient healing methods that use herbal medicines; hence, the candidate ligands with anti-Parkinson's activity were selected from herbal sources through the literature survey. Lipinski's rules were applied to screen suitable molecules for the study; the resulting 88 molecules were energy minimized and subjected to docking using AutoDock Vina. The top eleven molecules were selected according to the docking scores generated by AutoDock Vina, and the commercial drug ropinirole was docked and scored in the same way for comparison with the 11 phytochemicals. The screened molecules were then subjected to toxicity analysis to verify the toxic properties of the phytochemicals. R programming was applied to remove bias from the top eleven molecules: using cluster analysis and a confusion matrix, two phytochemicals, rosmarinic acid and ginkgolide A, were computationally selected for further studies on Parkinson's disease.
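
    A hedged illustration of the rule-of-five screening step mentioned above, written in Python rather than the R used by the authors; the ligand names and property values are placeholders, not the study's 100-molecule library:

```python
# Sketch: Lipinski rule-of-five filter over a hypothetical ligand table.
ligands = [
    # (name, molecular weight, logP, H-bond donors, H-bond acceptors) - placeholder values
    ("rosmarinic acid", 360.3, 2.4, 5, 8),
    ("bulky_example",   812.0, 6.3, 7, 14),
]

def passes_lipinski(mw, logp, hbd, hba):
    return mw <= 500 and logp <= 5 and hbd <= 5 and hba <= 10

screened = [name for name, mw, logp, hbd, hba in ligands
            if passes_lipinski(mw, logp, hbd, hba)]
print(screened)   # -> ['rosmarinic acid']
```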

  16. Libraries for Software Use on Peregrine | High-Performance Computing | NREL

    Science.gov Websites

    -specific libraries. Libraries list (Name: Description):
    - BLAS: Basic Linear Algebra Subroutines, libraries only
    - [name not captured in excerpt]: managing hierarchically structured data
    - LAPACK: standard Netlib offering for computational linear algebra

  17. Optimization technique of wavefront coding system based on ZEMAX externally compiled programs

    NASA Astrophysics Data System (ADS)

    Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2016-10-01

    Wavefront coding is a means of athermalization for infrared imaging systems, and the design of the phase plate is the key to system performance. This paper applies ZEMAX externally compiled programs to the optimization of the phase mask within the normal optical design process, namely by defining an evaluation function for the wavefront coding system based on the consistency of the modulation transfer function (MTF) and improving the speed of optimization through the introduction of mathematical software. The user writes an external program that computes the evaluation function, exploiting the powerful computing features of the mathematical software to find the optimal parameters of the phase mask and accelerating convergence with a genetic algorithm (GA); a dynamic data exchange (DDE) interface between ZEMAX and the mathematical software provides high-speed data exchange. The optimization of a rotationally symmetric phase mask and of a cubic phase mask has been completed by this method: the depth of focus increases nearly 3 times with the rotationally symmetric phase mask and up to 10 times with the cubic phase mask, the MTF becomes markedly more consistent, and the optimized system operates over a temperature range of -40° to 60°. The results show that, owing to its externally compiled functions and DDE, this optimization method makes it convenient to define unconventional optimization goals and to optimize optical systems with special properties quickly; it is of particular significance for the optimization of unconventional optical systems.
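
    A minimal sketch of an MTF-consistency evaluation function of the kind described, using a plain FFT pupil model in Python; the grid size, cubic-mask strength, and defocus range are assumed illustration values, and this is not the paper's ZEMAX/DDE workflow:

```python
# Sketch: compare MTF spread across defocus with and without a cubic phase mask.
import numpy as np

N = 128
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
aperture = ((X**2 + Y**2) <= 1.0).astype(float)

def mtf(pupil_phase):
    """MTF from a pupil phase map: PSF = |FFT(pupil)|^2, MTF = |FFT(PSF)| / DC."""
    pupil = aperture * np.exp(1j * pupil_phase)
    psf = np.abs(np.fft.fft2(pupil)) ** 2
    m = np.abs(np.fft.fft2(psf))
    return m / m[0, 0]

defocus_coeffs = np.linspace(-10.0, 10.0, 7)   # defocus phase at pupil edge (rad), assumed

def mtf_inconsistency(mask_phase):
    """Evaluation function: mean spread of the MTF over the defocus range (lower is better)."""
    stack = np.stack([mtf(mask_phase + w * (X**2 + Y**2)) for w in defocus_coeffs])
    return float(np.mean(np.std(stack, axis=0)))

cubic_mask = 30.0 * (X**3 + Y**3)              # cubic phase mask, strength 30 rad (assumed)
print("no mask   :", mtf_inconsistency(np.zeros_like(X)))
print("cubic mask:", mtf_inconsistency(cubic_mask))
```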

  18. A Monte-Carlo maplet for the study of the optical properties of biological tissues

    NASA Astrophysics Data System (ADS)

    Yip, Man Ho; Carvalho, M. J.

    2007-12-01

    Monte-Carlo simulations are commonly used to study complex physical processes in various fields of physics. In this paper we present a Maple program intended for Monte-Carlo simulations of photon transport in biological tissues. The program has been designed so that the input data and output display can be handled by a maplet (an easy and user-friendly graphical interface), named the MonteCarloMaplet. A thorough explanation of the programming steps and how to use the maplet is given. Results obtained with the Maple program are compared with corresponding results available in the literature.

    Program summary:
    - Program title: MonteCarloMaplet
    - Catalogue identifier: ADZU_v1_0
    - Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZU_v1_0.html
    - Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    - Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    - No. of lines in distributed program, including test data, etc.: 3251
    - No. of bytes in distributed program, including test data, etc.: 296 465
    - Distribution format: tar.gz
    - Programming language: Maple 10
    - Computer: Acer Aspire 5610 (any running Maple 10)
    - Operating system: Windows XP Professional (any running Maple 10)
    - Classification: 3.1, 5
    - Nature of problem: Simulate the transport of radiation in biological tissues.
    - Solution method: The Maple program follows the steps of the C program of L. Wang et al. [L. Wang, S.L. Jacques, L. Zheng, Computer Methods and Programs in Biomedicine 47 (1995) 131-146]; the Maple library routine for random number generation is used [Maple 10 User Manual, Maplesoft, a division of Waterloo Maple Inc., 2005].
    - Restrictions: Running time increases rapidly with the number of photons used in the simulation.
    - Unusual features: A maplet (graphical user interface) has been programmed for data input and output. Note that the Monte-Carlo simulation was programmed with Maple 10; if attempting to run the simulation with an earlier version of Maple, appropriate modifications (regarding typesetting fonts) are required, after which the worksheet runs without problems, although some of the maplet windows may still appear distorted.
    - Running time: Depends essentially on the number of photons used in the simulation. Elapsed times for particular runs are reported in the main text.
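
    For orientation, the core random-walk step of this kind of photon-transport Monte Carlo (exponential free paths, weighted absorption, Henyey-Greenstein scattering) is sketched below; this is a simplified infinite-homogeneous-medium illustration in Python with assumed optical properties, not the MonteCarloMaplet or the MCML code it follows:

```python
# Sketch: weighted photon random walk in an infinite homogeneous medium.
import numpy as np

rng = np.random.default_rng(1)
mu_a, mu_s, g = 0.1, 10.0, 0.9        # assumed absorption/scattering (1/cm) and anisotropy
mu_t = mu_a + mu_s

def sample_hg(g):
    """Sample cos(theta) from the Henyey-Greenstein phase function."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0
    tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - tmp * tmp) / (2.0 * g)

def rotate(u, cos_t, phi):
    """Rotate direction u by polar angle (cos_t) and azimuth phi."""
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    ux, uy, uz = u
    if abs(uz) > 0.99999:                         # nearly parallel to the z-axis
        return np.array([sin_t * np.cos(phi),
                         np.sign(uz) * sin_t * np.sin(phi),
                         np.sign(uz) * cos_t])
    den = np.sqrt(1.0 - uz * uz)
    return np.array([
        sin_t * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / den + ux * cos_t,
        sin_t * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / den + uy * cos_t,
        -sin_t * np.cos(phi) * den + uz * cos_t,
    ])

def trace_photon():
    pos, u, w, absorbed = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0, 0.0
    while w > 1e-4:
        pos = pos + (-np.log(rng.random()) / mu_t) * u   # exponential free path
        absorbed += w * mu_a / mu_t                      # deposit the absorbed fraction
        w *= mu_s / mu_t
        u = rotate(u, sample_hg(g), 2.0 * np.pi * rng.random())
    return absorbed

print(np.mean([trace_photon() for _ in range(200)]))     # ~1.0 (energy conservation)
```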

  19. Prediction of destination entry and retrieval times using keystroke-level models

    DOT National Transportation Integrated Search

    1998-04-01

    Thirty-six drivers entered and retrieved destinations using an Ali-Scout navigation computer. Retrieval involved keying in part of the destination name, scrolling through a list of names, or a combination of those methods. Entry required keying in th...
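
    For context, a keystroke-level model predicts a task time as a sum of elementary operator times; the sketch below uses commonly cited textbook operator durations, which are assumptions here rather than the calibrated values of this study:

```python
# Sketch: keystroke-level model (KLM) prediction as a sum of operator times.
OPERATOR_SECONDS = {
    "K": 0.28,   # keystroke (average skilled typist; textbook value)
    "P": 1.10,   # point to a target
    "H": 0.40,   # home hands between devices
    "M": 1.35,   # mental preparation
}

def klm_time(sequence):
    """Predicted time for a space-separated operator sequence, e.g. 'M K K K K K'."""
    return sum(OPERATOR_SECONDS[op] for op in sequence.split())

# Mentally prepare, then key in a five-character destination prefix.
print(f"{klm_time('M K K K K K'):.2f} s")   # -> 2.75 s
```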

  20. A Multi-Time Scale Morphable Software Milieu for Polymorphous Computing Architectures (PCA) - Composable, Scalable Systems

    DTIC Science & Technology

    2004-10-01

    [Indexed excerpt contains only report documentation page fields and a citation fragment: Defense Advanced Research Projects Agency, AFRL/IFTC; "Scalable Parallel Libraries for Large-Scale Concurrent Applications," Technical Report UCRL-JC-109251, Lawrence Livermore National Laboratory.]

  1. Berry phase effect on Majorana braiding

    NASA Astrophysics Data System (ADS)

    He, Yingping; Wang, Baozong; Liu, Xiong-Jun

    Majorana zero modes are predicted to exhibit non-Abelian braiding, which can be applied to fault-tolerant quantum computation. An essential signature of the non-Abelian braiding is that after a full braiding each of the two Majorana modes under braiding acquires a minus sign, namely a π Berry phase. In this work we find a novel effect in Majorana braiding: during adiabatic transport, a Majorana mode may or may not acquire a staggered minus sign at each step in which it is transported, corresponding to two different types of parameter manipulation. This additional minus sign is shown to be a consequence of a translational Berry phase effect, which can qualitatively affect the braiding of Majorana modes. Furthermore, we also study the effect of vortices on Majorana braiding and obtain a similar additional Berry phase effect. Our work may provide new understanding of the non-Abelian statistics of Majorana modes and help improve experimental setups for quantum computation. Funding: MOST, NSFC, and the Thousand-Young-Talent Program of China.
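
    For reference, the minus sign acquired under a full braid can be stated compactly; the following is one common sign convention (conventions differ by overall signs), not a result specific to this work:

```latex
% Exchange (braid) of two Majorana modes \gamma_1, \gamma_2:
\[
  B = \exp\!\Big(\tfrac{\pi}{4}\,\gamma_1\gamma_2\Big), \qquad
  B\,\gamma_1\,B^{-1} = -\gamma_2, \qquad
  B\,\gamma_2\,B^{-1} = \gamma_1,
\]
\[
  B^{2}\,\gamma_i\,B^{-2} = -\gamma_i \quad (i = 1, 2),
\]
% i.e. a full braid (two exchanges) returns each mode with a minus sign, the \pi Berry phase.
```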

  2. [Registration technology for mandibular angle osteotomy based on augmented reality].

    PubMed

    Zhu, Ming; Chai, Gang; Zhang, Yan; Ma, Xiao-Fei; Yu, Zhe-Yuan; Zhu, Yi-Jia

    2010-12-01

    The aim was to establish an effective path for registering the operative plan to a real model of the mandible made by rapid prototyping (RP) technology. Computerized tomography (CT) was performed on 20 patients to create 3D images, and computer-aided operation planning information was merged with the 3D images. A dental cast was then used to fix the signal that can be recognized by the software. The dental cast was converted to 3D data with a laser scanner and a program running on a personal computer, named Rapidform, which matches the dental cast and the mandible image to generate the virtual image. Registration was then achieved by a video monitoring system. Using this technology, the virtual image of the mandible and the cutting planes can both be overlaid on the real model of the mandible made by RP. This study found an effective way to perform registration using a dental cast, and this approach may be a powerful option for registration in augmented reality. Supported by the Program for Innovation Research Team of the Shanghai Municipal Education Commission.

  3. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerfler, Douglas; Austin, Brian; Cook, Brandon

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of that of a Xeon® core. In this paper, we examine the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.

  4. Molecular structure input on the web.

    PubMed

    Ertl, Peter

    2010-02-02

    A molecule editor, that is, a program for input and editing of molecules, is an indispensable part of every cheminformatics or molecular processing system. This review focuses on a special type of molecule editor, namely those used for molecular structure input on the web. Scientific computing is now moving more and more in the direction of web services and cloud computing, with servers scattered all around the Internet; thus the web browser has become the universal scientific user interface, and a tool to edit molecules directly within the web browser is essential. The review covers the history of web-based structure input, starting with simple text entry boxes and early molecule editors based on clickable maps, before moving to the current situation dominated by Java applets. One typical example, the popular JME Molecule Editor, is described in more detail. Modern Ajax server-side molecule editors are also presented. Finally, the possible future direction of web-based molecule editing, based on technologies like JavaScript and Flash, is discussed.

  5. ENEL overall PWR plant models and neutronic integrated computing systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedroni, G.; Pollachini, L.; Vimercati, G.

    1987-01-01

    To support the design activity of the Italian nuclear energy program for the construction of pressurized water reactors, the Italian Electricity Board (ENEL) needs to verify the design as a whole (that is, the nuclear steam supply system and the balance of plant) both in steady-state operation and in transients. ENEL has therefore developed two computer models to analyze both operational and incidental transients. The models, named STRIP and SFINCS, perform the analysis of the nuclear as well as the conventional part of the plant, with the control system properly taken into account. The STRIP model has been developed by means of the French (Electricite de France) modular code SICLE, while SFINCS is based on the Italian (ENEL) modular code LEGO. STRIP validation was performed with respect to experimental data from the Fessenheim power plant in France; two significant transients were chosen: a load step and a total load rejection. SFINCS validation was performed with respect to experimental data from the Saint-Laurent power plant in France and also by comparing the SFINCS and STRIP responses.

  6. libSRES: a C library for stochastic ranking evolution strategy for parameter estimation.

    PubMed

    Ji, Xinglai; Xu, Ying

    2006-01-01

    Estimation of kinetic parameters in a biochemical pathway or network represents a common problem in systems studies of biological processes. We have implemented a C library, named libSRES, to facilitate a fast implementation of computer software for study of non-linear biochemical pathways. This library implements a (mu, lambda)-ES evolutionary optimization algorithm that uses stochastic ranking as the constraint handling technique. Considering the amount of computing time it might require to solve a parameter-estimation problem, an MPI version of libSRES is provided for parallel implementation, as well as a simple user interface. libSRES is freely available and could be used directly in any C program as a library function. We have extensively tested the performance of libSRES on various pathway parameter-estimation problems and found its performance to be satisfactory. The source code (in C) is free for academic users at http://csbl.bmb.uga.edu/~jix/science/libSRES/
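
    The stochastic ranking step that such a (mu, lambda)-ES uses to order candidates by objective value and constraint violation can be sketched as follows; this is a Python illustration of the ranking idea only (with the commonly used comparison probability of about 0.45), not libSRES's C interface:

```python
# Sketch: stochastic ranking of candidates by objective f and constraint violation phi.
import random

def stochastic_rank(f, phi, pf=0.45, rng=random.Random(0)):
    """Return candidate indices ordered by stochastic ranking (minimization)."""
    idx = list(range(len(f)))
    for _ in range(len(idx)):                  # bubble-like sweeps, stop when stable
        swapped = False
        for j in range(len(idx) - 1):
            a, b = idx[j], idx[j + 1]
            # compare by objective if both are feasible, or with probability pf;
            # otherwise compare by amount of constraint violation
            if (phi[a] == 0 and phi[b] == 0) or rng.random() < pf:
                if f[a] > f[b]:
                    idx[j], idx[j + 1], swapped = b, a, True
            elif phi[a] > phi[b]:
                idx[j], idx[j + 1], swapped = b, a, True
        if not swapped:
            break
    return idx

f   = [3.0, 1.0, 2.0, 0.5]    # made-up objective values (lower is better)
phi = [0.0, 0.2, 0.0, 1.5]    # made-up constraint violations (0 = feasible)
print(stochastic_rank(f, phi))
```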

  7. A Standardized Reference Data Set for Vertebrate Taxon Name Resolution

    PubMed Central

    Zermoglio, Paula F.; Guralnick, Robert P.; Wieczorek, John R.

    2016-01-01

    Taxonomic names associated with digitized biocollections labels have flooded into repositories such as GBIF, iDigBio and VertNet. The names on these labels are often misspelled, out of date, or present other problems, as they were often captured only once during accessioning of specimens, or have a history of label changes without clear provenance. Before records are reliably usable in research, it is critical that these issues be addressed. However, still missing is an assessment of the scope of the problem, the effort needed to solve it, and a way to improve effectiveness of tools developed to aid the process. We present a carefully human-vetted analysis of 1000 verbatim scientific names taken at random from those published via the data aggregator VertNet, providing the first rigorously reviewed, reference validation data set. In addition to characterizing formatting problems, human vetting focused on detecting misspelling, synonymy, and the incorrect use of Darwin Core. Our results reveal a sobering view of the challenge ahead, as less than 47% of name strings were found to be currently valid. More optimistically, nearly 97% of name combinations could be resolved to a currently valid name, suggesting that computer-aided approaches may provide feasible means to improve digitized content. Finally, we associated names back to biocollections records and fit logistic models to test potential drivers of issues. A set of candidate variables (geographic region, year collected, higher-level clade, and the institutional digitally accessible data volume) and their 2-way interactions all predict the probability of records having taxon name issues, based on model selection approaches. We strongly encourage further experiments to use this reference data set as a means to compare automated or computer-aided taxon name tools for their ability to resolve and improve the existing wealth of legacy data. PMID:26760296
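
    As a toy illustration of the kind of computer-aided resolution the authors encourage (not their validation pipeline), simple fuzzy string matching can map misspelled or decorated name strings onto a list of currently valid names; the name lists below are hypothetical, and real tools also handle synonymy and authorship strings:

```python
# Sketch: fuzzy matching of verbatim taxon name strings to a valid-name list.
import difflib

valid_names = ["Puma concolor", "Lynx rufus", "Canis latrans"]
verbatim = ["Puma concolour", "Linx rufus", "Canis latrans (Say, 1823)"]

for name in verbatim:
    match = difflib.get_close_matches(name, valid_names, n=1, cutoff=0.6)
    print(f"{name!r:30} -> {match[0] if match else 'unresolved'}")
```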

  8. Mississippi Curriculum Framework for Computer Information Systems Technology. Computer Information Systems Technology (Program CIP: 52.1201--Management Information Systems & Business Data). Computer Programming (Program CIP: 52.1201). Network Support (Program CIP: 52.1290--Computer Network Support Technology). Postsecondary Programs.

    ERIC Educational Resources Information Center

    Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.

    This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for two programs in the state's postsecondary-level computer information systems technology cluster: computer programming and network support. Presented in the introduction are program descriptions and suggested course…

  9. Keeping an Eye on the Prize

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazi, A U

    2007-02-06

    Setting performance goals is part of the business plan for almost every company. The same is true in the world of supercomputers. Ten years ago, the Department of Energy (DOE) launched the Accelerated Strategic Computing Initiative (ASCI) to help ensure the safety and reliability of the nation's nuclear weapons stockpile without nuclear testing. ASCI, which is now called the Advanced Simulation and Computing (ASC) Program and is managed by DOE's National Nuclear Security Administration (NNSA), set an initial 10-year goal to obtain computers that could process up to 100 trillion floating-point operations per second (teraflops). Many computer experts thought the goal was overly ambitious, but the program's results have proved them wrong. Last November, a Livermore-IBM team received the 2005 Gordon Bell Prize for achieving more than 100 teraflops while modeling the pressure-induced solidification of molten metal. The prestigious prize, which is named for a founding father of supercomputing, is awarded each year at the Supercomputing Conference to innovators who advance high-performance computing. Recipients for the 2005 prize included six Livermore scientists (physicists Fred Streitz, James Glosli, and Mehul Patel and computer scientists Bor Chan, Robert Yates, and Bronis de Supinski) as well as IBM researchers James Sexton and John Gunnels. This team produced the first atomic-scale model of metal solidification from the liquid phase with results that were independent of system size. The record-setting calculation used Livermore's domain decomposition molecular-dynamics (ddcMD) code running on BlueGene/L, a supercomputer developed by IBM in partnership with the ASC Program. BlueGene/L reached 280.6 teraflops on the Linpack benchmark, the industry standard used to measure computing speed. As a result, it ranks first on the list of Top500 Supercomputer Sites released in November 2005. To evaluate the performance of nuclear weapons systems, scientists must understand how materials behave under extreme conditions. Because experiments at high pressures and temperatures are often difficult or impossible to conduct, scientists rely on computer models that have been validated with obtainable data. Of particular interest to weapons scientists is the solidification of metals. "To predict the performance of aging nuclear weapons, we need detailed information on a material's phase transitions," says Streitz, who leads the Livermore-IBM team. For example, scientists want to know what happens to a metal as it changes from molten liquid to a solid and how that transition affects the material's characteristics, such as its strength.

  10. ChemEngine: harvesting 3D chemical structures of supplementary data from PDF files.

    PubMed

    Karthikeyan, Muthukumarasamy; Vyas, Renu

    2016-01-01

    Digital access to chemical journals has resulted in a vast array of molecular information that is now available in supplementary material files in PDF format. However, extracting this molecular information, generally from a PDF document, is a daunting task. Here we present an approach to harvest 3D molecular data from the supporting information of scientific research articles that are normally available from publishers' resources. In order to demonstrate the feasibility of extracting truly computable molecules from PDF file formats in a fast and efficient manner, we have developed a Java-based application, namely ChemEngine. This program recognizes textual patterns from the supplementary data and generates standard molecular structure data (bond matrix, atomic coordinates) that can be subjected to a multitude of computational processes automatically. The methodology has been demonstrated via several case studies on different formats of coordinate data stored in supplementary information files, wherein ChemEngine selectively harvested the atomic coordinates and interpreted them as molecules with high accuracy. The reusability of the extracted molecular coordinate data was demonstrated by computing single point energies that were in close agreement with the original computed data provided with the articles. It is envisaged that the methodology will enable large-scale conversion of molecular information from supplementary files available in PDF format into a collection of ready-to-compute molecular data, creating an automated workflow for advanced computational processes. Software along with source code and instructions is available at https://sourceforge.net/projects/chemengine/files/?source=navbar
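
    A minimal sketch of the pattern-recognition idea, assuming a simple "element x y z" layout in the extracted text; this generic regular-expression illustration is not ChemEngine's actual parser:

```python
# Sketch: pull Cartesian coordinate lines out of text extracted from a PDF
# supplementary file. The sample text is made up.
import re

pdf_text = """
Compound 3a, optimized geometry:
C    0.000000    0.000000    0.000000
O    1.215000    0.000000    0.000000
H   -0.540000    0.935000    0.000000
Single point energy: -189.75 au
"""

coord_line = re.compile(
    r"^\s*([A-Z][a-z]?)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s*$",
    re.MULTILINE,
)

atoms = [(m.group(1), float(m.group(2)), float(m.group(3)), float(m.group(4)))
         for m in coord_line.finditer(pdf_text)]
print(atoms)   # [('C', 0.0, 0.0, 0.0), ('O', 1.215, 0.0, 0.0), ('H', -0.54, 0.935, 0.0)]
```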

  11. Software for Automated Reading of STEP Files by I-DEAS(trademark)

    NASA Technical Reports Server (NTRS)

    Pinedo, John

    2003-01-01

    A program called "readstep" enables the I-DEAS(tm) computer-aided-design (CAD) software to automatically read Standard for the Exchange of Product Model Data (STEP) files. (The STEP format is one of several used to transfer data between dissimilar CAD programs.) Prior to the development of "readstep," it was necessary to read STEP files into I-DEAS(tm) one at a time in a slow process that required repeated intervention by the user. In operation, "readstep" prompts the user for the location of the desired STEP files and the names of the I-DEAS(tm) project and model file, then generates an I-DEAS(tm) program file called "readstep.prg" and two Unix shell programs called "runner" and "controller." The program "runner" runs I-DEAS(tm) sessions that execute readstep.prg, while "controller" controls the execution of "runner" and edits readstep.prg if necessary. The user sets "runner" and "controller" into execution simultaneously, and then no further intervention by the user is required. When "runner" has finished, the user should see only parts from successfully read STEP files present in the model file. STEP files that could not be read successfully (e.g., because of format errors) should be regenerated before attempting to read them again.

  12. An Arbitrary First Order Theory Can Be Represented by a Program: A Theorem

    NASA Technical Reports Server (NTRS)

    Hosheleva, Olga

    1997-01-01

    How can we represent knowledge inside a computer? For formalized knowledge, classical logic seems to be the most adequate tool. Classical logic is behind all formalisms of classical mathematics, and behind many formalisms used in Artificial Intelligence. There is only one serious problem with classical logic: due to the famous Godel's theorem, classical logic is algorithmically undecidable; as a result, when knowledge is represented in the form of logical statements, it is very difficult to check whether, based on these statements, a given query is true or not. To make knowledge representation more algorithmic, the special field of logic programming was invented. An important portion of logic programming is algorithmically decidable. To cover knowledge that cannot be represented in this portion, several extensions of the decidable fragments have been proposed. In the spirit of logic programming, these extensions are usually introduced in such a way that even if a general algorithm is not available, good heuristic methods exist. It is important to check whether the already proposed extensions are sufficient or whether further extensions are necessary. In the present paper, we show that one particular extension, namely logic programming with classical negation, introduced by M. Gelfond and V. Lifschitz, can represent (in some reasonable sense) an arbitrary first order logical theory.
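
    A standard textbook-style illustration (not the paper's construction) of what classical negation adds over negation as failure is the pair of rules below: the first acts only on explicitly established falsity, the second on mere absence of information.

```latex
% Classical negation (\neg) versus negation as failure (not), in rule form:
\[
  \mathit{cross} \leftarrow \neg\,\mathit{train}
  \qquad\text{vs.}\qquad
  \mathit{cross} \leftarrow \mathit{not}\ \mathit{train}
\]
% The first rule licenses crossing only when the absence of a train has been
% explicitly derived; the second licenses crossing merely because no train can
% be derived.
```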

  13. Substance Identification Information from EPA's Substance Registry

    EPA Pesticide Factsheets

    The Substance Registry Services (SRS) is the authoritative resource for basic information about substances of interest to the U.S. EPA and its state and tribal partners. Substances, particularly chemicals, can have many valid synonyms. For example, toluene, methyl benzene, and phenyl methane are commonly used names for the same chemical, and EPA programs collect environmental data for this chemical using each of these names, plus others. This diversity leads to problems when a user is looking for programmatic data on toluene but is unaware that the data are stored under the synonym methyl benzene. For each substance, the SRS identifies the statutes, EPA programs, and organizations external to EPA that track or regulate that substance, together with the synonym used by that statute, EPA program, or external organization. Besides standardized information for each chemical, such as the Chemical Abstracts Service name, the Chemical Abstracts Number, and the EPA Registry Name (the EPA standard name), the SRS also includes additional information, such as molecular weight and molecular formula. Additionally, an SRS Internal Tracking Number uniquely identifies each substance, enabling cross-walking between synonyms. EPA provides a large .ZIP file containing the SRS data in CSV format and a separate small metadata file in XML containing the field names and definitions.
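
    A small sketch of how an Internal Tracking Number enables cross-walking between synonyms in such a CSV export; the file layout and column names below are assumptions for illustration, not the actual SRS schema:

```python
# Sketch: collapse substance synonyms onto one internal tracking number.
# Column headers and file contents are assumed, not the real SRS export format.
import csv
import io

srs_csv = io.StringIO("""internal_tracking_number,epa_registry_name,synonym,cas_number
12345,Toluene,Toluene,108-88-3
12345,Toluene,Methyl benzene,108-88-3
12345,Toluene,Phenyl methane,108-88-3
""")

synonym_to_itn = {row["synonym"].lower(): row["internal_tracking_number"]
                  for row in csv.DictReader(srs_csv)}

# Data reported under any synonym can now be cross-walked to one substance.
print(synonym_to_itn["methyl benzene"])   # -> 12345
```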

  14. The mathematical theory of signal processing and compression-designs

    NASA Astrophysics Data System (ADS)

    Feria, Erlan H.

    2006-05-01

    The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational-time dual of Shannon's mathematical theory of communication, which is also known as source coding. Source coding is concerned with compressing signal source memory space, while processor coding deals with compressing signal processor computational time. Their combination is named compression-designs, referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed, highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.

  15. Eric Stahlberg Named to FCW’s Federal 100 | FNLCR Staging

    Cancer.gov

    Eric Stahlberg, Ph.D., director of high-performance computing at the Frederick National Lab, has been named one of FCW's Federal 100 for his work in predictive oncology and his role in the collaboration between the National Cancer Institute and the

  16. An Assessment of Fiscal Year 2013 Beyond Yellow Ribbon Programs

    DTIC Science & Technology

    2015-01-01

    [Indexed excerpt contains only report documentation page fields: performing organization RAND Corporation, National Defense Research Institute, 1776 Main Street, P.O. Box 2138, Santa Monica, CA 90407-2138.]

  17. Navy CG(X) Cruiser Program: Background, Oversight Issues, and Options for Congress

    DTIC Science & Technology

    2008-10-27

    [Indexed excerpt contains only report documentation page fields and citation fragments: Congressional Research Service; a press report (Katherine McIntire Peters, "Navy's Top Officer Sees Lessons in Shipbuilding Program Failures," GovernmentExecutive.com, September 24, 2008); and a question about whether the schedule for procuring CG(X)s is properly aligned with foreign-country ballistic missile development programs.]

  18. Navy CG(X) Cruiser Program: Background for Congress

    DTIC Science & Technology

    2010-09-28

    [Indexed excerpt contains only report documentation page fields and citation fragments: a press report (Katherine McIntire Peters, "Navy's Top Officer Sees Lessons in Shipbuilding Program Failures," GovernmentExecutive.com, September 24, 2008) quoting Admiral Gary Roughead; a note that the number of CG(X)s could have changed; and a question about whether the schedule for procuring CG(X)s was properly aligned with foreign-country ballistic missile development programs.]

  19. Navy CG(X) Cruiser Program: Background for Congress

    DTIC Science & Technology

    2010-02-26

    [Indexed excerpt contains only report documentation page fields and citation fragments: a press report (Katherine McIntire Peters, "Navy's Top Officer Sees Lessons in Shipbuilding Program Failures," GovernmentExecutive.com, September 24, 2008) quoting Admiral Gary Roughead, and a question about whether the schedule for procuring CG(X)s was properly aligned with foreign-country ballistic missile development programs.]

  20. Toward Affordable Systems: Portfolio Analysis and Management for Army Science and Technology Programs

    DTIC Science & Technology

    2009-01-01

    [Indexed excerpt contains only report documentation page fields and abbreviation-list fragments: RAND; MDA (Milestone Decision Authority); MDAP (major defense acquisition program); MIC (marginal implementation cost); MOMC (marginal operating and ...); the section draws on a 2003 U.S. Army document, pp. 30-33.]
